Deliverable Number: 121C.102
Contract Number: 75Q80120D00024
June 15, 2022
Authors
Westat
Westat Reference Number: 2-7-634
Draft
Submitted to:
Agency for Healthcare Research and Quality
Center for Financing, Access, and Cost Trends
560 Fishers Lane
Rockville, MD 20850
Submitted by:
Westat
An Employee-Owned Research Corporation®
1600 Research Boulevard
Rockville, Maryland 20850-3129
(301) 251-1500
Introduction
1 Sample
1.1 Sample Composition
1.2 Sample Delivery and Processing
2 Instrument and Materials Design
2.1 Introduction
2.2 Changes to CAPI Instrument for 2021
2.3 Testing of the Questionnaire and Interviewer Management System
2.4 Changes to Materials and Procedures for 2021
3 Recruiting and Training
3.1 Field Interviewer Recruiting for 2021
3.2 2021 Interviewer Training
3.2.1 Experienced Interviewer Training
3.2.2 Continuing Education for All Interviewers
4 Data Collection
4.1 Data Collection Procedures
4.2 Data Collection Results: Interviewing
4.3 Data Collection Results: Authorization Form Signing Rates
4.4 Data Collection Results: Self-Administered Questionnaire (SAQ) and Diabetes Care Supplement (DCS) Collection Rates
4.5 Social Determinants of Health Self-Administered Questionnaire (SDOH SAQ): Methods and Results
4.6 Quality Control
4.7 Security Incidents
5 Home Office Support of Field Activities
5.1 Preparation for Field Activities
5.2 Support During Data Collection
6 Data Processing and Data Delivery
6.1 Processing to Support Data Delivery
6.1.1 Schedules for Data Delivery
6.1.2 Data Quality Control System
6.1.3 Transformation
6.1.4 TeleForm/Data Editing of Scanned Forms
6.1.5 Coding
6.2 Data Delivery
6.2.1 Variable Construction
6.2.2 File Deliveries
Appendix A Comprehensive Tables – Household Survey
Table 1-1 Initial MEPS sample size (RUs) and number of NHIS PSUs, all Panels
Table 1-2 Data collection periods and starting RU-level sample sizes, spring 2017 through fall 2021
Table 1-3 Percentage of NHIS households with partially completed interviews in Panels 4 to 26
Table 1-4 Distribution of Panel 26 sampled RUs by sample domain
Table 2-1 Supplements to the CAPI core questionnaire (including hard-copy materials) for 2021
Table 3-1 Staffing for spring field period, 2017–2021
Table 3-2 Spring attrition rate among new and experienced interviewers, 2017–2021
Table 3-3 Fall attrition rate among new and experienced interviewers, 2017–2021
Table 3-4 Annual attrition rate among new and experienced interviewers, 2017–2021
Table 4-1 Data collection schedule and number of weeks per round of data collection, 2021
Table 4-2 Case potential categories for classifying and prioritizing case work, spring 2021
Table 4-3 MEPS-HC data collection results, Panels 20 through 26
Table 4-4 Response rates by data collection year, 2012-2021
Table 4-5 Summary of MEPS Round 1 response and nonresponse, 2016-2021 panels
Table 4-6 Summary of MEPS Round 1 response, 2016-2021 panels, by NHIS completion status
Table 4-7 Summary of MEPS Panel 26 Round 1 response rates, by sample domain by NHIS completion status
Table 4-8 Summary of MEPS Round 1 results for RUs who ever refused, Panels 20-26
Table 4-9 Summary of MEPS Round 1 results for RUs who were ever traced, Panels 20-26
Table 4-10 Interview timing comparison, Panels 20 through 26 (mean minutes per interview, single-session interviews)
Table 4-11 Mean contact attempts by NHIS completion status, Round 1 of Panels 24-26
Table 4-12 Signing rates for medical provider authorization forms for Panels 19 through 26
Table 4-13 Signing rates for pharmacy authorization forms for Panels 19 through 26
Table 4-14 Results of Self-Administered Questionnaire (SAQ) collection for Panels 20 through 26
Table 4-15 Results of Diabetes Care Supplement (DCS) collection for Panels 18 through 25
Table 4-16 Impact of fall follow-up on total completed SDOH SAQs
Table 5-1 Number and percent of respondents who called the respondent information line, 2017-2021
Table 5-2 Calls to the respondent information line, 2020 and 2021
Table 6-1 2021 cases with comments or data check issues
Table 6-2 Total number of comments by category
Table A-1 Data collection periods and starting RU-level sample sizes, all Panels
Table A-2 MEPS household survey data collection results, all Panels
Table A-3 Response rates by data collection year
Table A-4 Summary of MEPS Round 1 response and non-response
Table A-5 Summary of Round 1 response by NHIS completion status
Table A-6 Summary of MEPS Round 1 results for all RUs who ever refused
Table A-7 Summary of MEPS Round 1 results for RUs who were ever traced, Panels 15-26
Table A-8 Interview timing comparison (mean minutes per interview, single-session interviews)
Table A-9 Mean contact attempts by NHIS completion status, Round 1
Table A-10 Signing rates for medical provider authorization forms
Table A-11 Signing rates for pharmacy authorization forms
Table A-12 Results of Self-Administered Questionnaire (SAQ) collection
Table A-13 Results of Diabetes Care Supplement (DCS) collection*
Table A-14 Results of patient profile collection
Table A-15 Calls to respondent information line
Table A-16 Files delivered during 2021
Figure 4-1 SDOH contact mode determination flowchart
Figure 4-2 Communication protocol for SDOH contact modes
Figure 4-3 Response rate by contact mode and interview number
Figure 6-1 Blaise to Dex transformation
The Household Component of the Medical Expenditure Panel Survey (MEPS-HC, Contract 290-2016-00004I, awarded July 1, 2016, and Contract 75Q80120D00024, awarded July 13, 2020) is the central component of the long-term research effort sponsored by the Agency for Healthcare Research and Quality (AHRQ) to provide timely and accurate data on access to, use of, and payments for healthcare services by the U.S. civilian non-institutionalized population. The project has been in operation since 1996, each year producing a series of annual estimates of health insurance coverage, healthcare utilization, and healthcare expenditures. This report documents the principal design, training, data collection, and data processing activities of the MEPS-HC for survey year 2021.
Data are collected for the MEPS-HC through a series of overlapping household Panels. Each year a new Panel is enrolled for a series of five in-person interviews conducted over a 2½-year period.
Panels 23 and 24, however, have been extended to nine interviews conducted over 4½ years, as described in the section below on changes due to COVID-19. This report describes work performed for all of the Panels active during calendar year 2021. Data collection operations in 2021 were for Panel 23, Rounds 7 and 8; Panel 24, Rounds 5 and 6; Panel 25, Rounds 3 and 4; and Panel 26, Rounds 1 and 2. Data processing activity focused on delivery of full year utilization and expenditure files for calendar year 2019.
The report touches lightly on procedures and operations that remained unchanged from prior years, focusing primarily on results of the 2021 operations and features of the project that were new, changed, or enhanced for 2021. Tables in the body of the text highlight 2021 results, with limited comparison to prior years. A set of tables showing data collection results over the history of the project is included in the Appendix.
Chapter 1 of the report describes the 2021 sample and activities associated with preparing the sample for fielding. Chapters 2 through 5 discuss activities associated with the data collection for 2021: updates to the survey questionnaire and field procedures; field staff recruiting and training; data collection operations and results; and home office support of field activities. Chapter 6 describes data processing and data delivery activities.
Changes Due to COVID-19
All MEPS Household Component (MEPS-HC) face-to-face interviewing ceased on March 17, 2020, due to the impact of COVID-19 on American life. Data collection switched to the telephone mode, and in 2021 a mix of in-person and telephone interviewing was used, depending on the level of the COVID-19 pandemic. In-person data collection was affected by COVID-19 much more in the spring rounds than in the fall rounds. In the spring, Round 1 had 71.7 percent of interviews conducted by telephone, Round 3 had 96.4 percent, and Round 5 had 98.2 percent. In the fall, 33.8 percent of Round 2 and 39.7 percent of Round 4 interviews were conducted by telephone. (In a typical year, around 5 percent to 8 percent of interviews are conducted by telephone, mostly student interviews and Round 5 interviews.)
MEPS-HC continued several modifications to project systems, processes, and procedures begun in 2020 to respond to the pandemic. Please see the 2020 methodology report for additional details:
Extension of Panels 23 and 24. Anticipating potential negative impacts of the COVID-19 pandemic on response rates and the number of households that would be included in 2020 and 2021 data and beyond, a decision was made to extend Panel 23 and Panel 24 through nine rounds. The extended Panel rounds have been conducted primarily by telephone, with limited in-person interviewing conducted when safe for hard-to-reach or hearing-impaired respondents.
Enhancing the Quality of Telephone Interviewing. MEPS provided respondents with a website for show cards and other documents that interviewers would normally present in-person on paper to respondents. Interviewers requested that respondents refer to the online show cards in answering each item or read the show cards out loud, mirroring the in-person protocol. Interviewers received headsets and telephone interviewing protocols and training, including data quality protocols specific to each round of data collection.
Maximizing Response Rates. The project developed and sent COVID-19-specific letters and postcards tailored for each Panel and round to notify households that the study was ongoing and to expect telephone outreach. The project also added efforts to increase return of hard-copy materials, particularly authorization forms (AFs), including a formal protocol for reminder calls, re-mailing unreturned AFs, and a modified in-person protocol for the retrieval of completed AFs.
CAPI Instrument Changes. To more fully capture telehealth events, the project added an event type for telehealth events, broadened the text in the provider probes section to prompt respondents to include telehealth events, and adjusted the wording in other corresponding sections to accommodate care received via telehealth. Wording exclusive to in-person visits (e.g., “seen,” “in person”) was updated to allow for the inclusion of telehealth visits. A section was added to capture delays in care due to the COVID-19 pandemic, referring back to the start of the pandemic for the spring 2021 interview and back to the date of the spring interview for the fall 2021 interview. A question was added for the fall interview about whether household members had received a COVID-19 vaccine.
In-Person Data Collection. COVID-19 in-person mitigation protocols were developed and distributed to interviewers who were authorized to conduct in-person interviewing. Interviewers received training on the use of personal protective equipment (PPE) and COVID-19 mitigation. Using Westat’s COVID Dashboard for Household Surveys, MEPS monitored conditions for safe in-person interviewing and AF collection. In-person efforts both produced in-person interviews and enhanced the ability to make telephone interview appointments.
Each year a new, nationally representative sample for the Medical Expenditure Panel Survey Household Component (MEPS-HC) is drawn from among households responding to the previous year’s National Health Interview Survey (NHIS). Up until 2020, households in a new Panel participated in a series of five interviews that collect data covering two full calendar years. For each calendar year, the sample respondents from two Panels, one completing its first year in the study (Round 3) and one completing its second year (Round 5), have been combined for analysis purposes, resulting in a series of annual estimation files. Beginning in 2020, with the onset of the COVID-19 pandemic, there were concerns about declining response rates as well as challenges in recruiting respondents by telephone for Panel 25. These concerns continued for Panel 26 in 2021. The extension of Panel 23 for a third year of data collection has helped to maintain the ongoing sample.
The sample for Panel 26 was selected from among households responding to the NHIS in the preceding year; as with Panels 22-25, the NHIS sample was based on the NHIS sample design initially implemented in 2016. Specifically, the MEPS household sample was randomly selected from among households that participated in the NHIS during the first three quarters of 2020 and that had been assigned to NHIS Panels 1 and 3, the NHIS Panels designated for MEPS. However, due to the pandemic, households from NHIS Panels 2 and 4 were also selected for the MEPS sample.
This chapter describes the 2021 MEPS sample drawn from 2020 NHIS responding households as well as steps taken to prepare the new sample for fielding.
Table 1-1 shows the starting sample sizes in terms of the number of reporting units (RUs) for all MEPS Panels through Panel 26 and the number of MEPS primary sampling units (PSUs) from which each Panel was drawn. Note that the change in the number of PSUs for Panel 12 reflects the redesign of the NHIS sample implemented in 2006 (thus affecting MEPS in 2007), following the 2000 decennial census. The number of PSUs for Panel 26 is based on the number of PSUs associated with MEPS after the 2016 NHIS sample redesign, the fifth such MEPS Panel under this design. The reduction in the number of PSUs after Panel 22 stemmed from further modifications to the NHIS design. The MEPS sample units presented are RUs, each of which represents a set of related persons living together within the same NHIS-responding household selected for MEPS participation. Related members of the NHIS households sampled for MEPS who move as a unit during the MEPS data collection period (as well as separate individuals) form new RUs for interviewing purposes. Each new RU is followed over the course of the five MEPS data collection rounds and interviewed at their new address.
Panel | Initial sample size (RUs)* | MEPS PSUs* |
---|---|---|
1 | 10,799 | 195 |
2 | 6,461 | 195 |
3 | 5,410 | 195 |
4 | 7,103 | 100 |
5 | 5,533 | 100 |
6 | 11,026 | 195 |
7 | 8,339 | 195 |
8 | 8,706 | 195 |
9 | 8,939 | 195 |
10 | 8,748 | 195 |
11 | 9,654 | 195 |
12 | 7,467 | 183 |
13 | 9,939 | 183 |
14 | 9,899 | 183 |
15 | 8,968 | 183 |
16 | 10,417 | 183 |
17 | 9,931 | 183 |
18 | 9,950 | 183 |
19 | 9,970 | 183 |
20 | 10,854 | 183 |
21 | 9,851 | 183 |
22 | 9,835 | 168 |
23 | 9,960 | 143 |
24 | 9,976 | 139 |
25 | 10,008 | 139 |
26 | 9,674 | 150 |
* RUs: Reporting units; PSUs: Primary sampling units.
MEPS data collection is conducted in two main fielding periods each year. Typically, during the January-June period, Round 1 of the new Panel and Rounds 3 and 5 of the two continuing Panels are fielded, with the Panel in Round 5 retiring at mid-year. Normally, during the July-December period, Round 2 of the new Panel and Round 4 of the remaining continuing Panel are fielded.
However, with a third Panel added for the first time in 2020, a Round 6 for Panel 23 was also fielded in the fall data collection period. It should be noted that, because Round 5 of Panel 23 collected MEPS data only through December 31, 2019, the reference period for Round 6 covered from the date of interview back to January 1, 2020. As a result, the reference periods for Round 6 were typically much longer than the usual reference period for a fall interview. Table 1-2 summarizes the combined workload for the January-June and July-December periods from spring 2017 through fall 2021.
Over the years shown in Table 1-2, the combined spring and fall workload has ranged from a low of 37,305 in 2019 to a high of 44,466 in 2021. Typically, the interviewing workload during the spring field period, when three Panels are active, is substantially larger than during the fall, when there are only two. In 2021, there were four active Panels in both the spring and fall field periods. The spring field period still had more cases, with 25,126 cases fielded, an increase from the low number of cases fielded in spring 2020, while the fall workload had 19,340 RUs, the highest of the 5 years shown.
Data collection period | RU-level sample size* |
---|---|
January – June 2017 | 24,774 |
Panel 20 Round 5 | 7,611 |
Panel 21 Round 3 | 7,328 |
Panel 22 Round 1 | 9,835 |
July – December 2017 | 14,395 |
Panel 21 Round 4 | 7,025 |
Panel 22 Round 2 | 7,370 |
January – June 2018 | 23,768 |
Panel 21 Round 5 | 6,899 |
Panel 22 Round 3 | 7,023 |
Panel 23 Round 1 | 9,846 |
July – December 2018 | 14,123 |
Panel 22 Round 4 | 6,788 |
Panel 23 Round 2 | 7,335 |
January – June 2019 | 23,458 |
Panel 22 Round 5 | 6,653 |
Panel 23 Round 3 | 6,941 |
Panel 24 Round 1 | 9,864 |
July – December 2019 | 13,847 |
Panel 23 Round 4 | 6,679 |
Panel 24 Round 2 | 7,168 |
January – June 2020 | 23,122 |
Panel 23 Round 5 | 6,488 |
Panel 24 Round 3 | 6,753 |
Panel 25 Round 1 | 9,881 |
July – December 2020 | 18,480 |
Panel 23 Round 6 | 6,373 |
Panel 24 Round 4 | 6,278 |
Panel 25 Round 2 | 5,829 |
January-June 2021 | 25,126 |
Panel 23 Round 7 | 5,096 |
Panel 24 Round 5 | 5,426 |
Panel 25 Round 3 | 5,094 |
Panel 26 Round 1 | 9,510 |
July-December 2021 | 19,340 |
Panel 23 Round 8 | 4,492 |
Panel 24 Round 6 | 4,753 |
Panel 25 Round 4 | 4,222 |
Panel 26 Round 2 | 5,873 |
* RU-level sample size for this table derived from field management system counts and operational reports detailing fielded sample.
Each new MEPS Panel includes some oversampling of population groups of particular analytic interest. Since 2010 (Panel 15), the set of sample domains has included oversamples of Asians, Blacks, and Hispanics. All households set aside in the NHIS for MEPS that have at least one household member in any of these three categories (Asian, Black, or Hispanic) are included in the MEPS sample with certainty. “White and other race” households have been partitioned into two sample domains and subsampled at varying rates across the years. These domains reflect whether an NHIS responding household characterized as “White or other race” provided “complete” information at the household level for the NHIS or if only “partially complete” information was provided.
As background, the partitioning of the “White, Other” domain into these two domains began in 2011 (Panel 16). The partial completes were sampled at a lower rate than the full completes in order to lessen the impact on the field effort resulting from the difficulty of gaining the cooperation of these households. The last two columns in Table 1-3 show the subsampling rates for the two groups since Panel 16. The partial completes in the “White, Other” domain have been subsampled at rates ranging from a low of 40 percent (Panel 17) to a high of 53 percent (Panel 20).
Panel | Percentage with partially completed interviews | Subsampling rate (%) for NHIS completes in “White, other” domain | Subsampling rate (%) for partial completes in “White, other” domain |
---|---|---|---|
4 | 21 | ||
5 | 24 | ||
6 | 22 | ||
7 | 17 | ||
8 | 20 | ||
9 | 19 | ||
10 | 16 | ||
11 | 23 | ||
12 | 19 | ||
13 | 25 | ||
14 | 26 | ||
15 | 21 | ||
16 | 25 | 79 | 46 |
17 | 19 | 51 | 40 |
18 | 22 | 63 | 43 |
19 | 18 | 66 | 42 |
20 | 19 | 84 | 53 |
21 | 22 | 81 | 49 |
22 | 19 | 77 | 49 |
23 | 20 | ||
24 | 16 | ||
25 | 11 | ||
26 | 15 |
* The figures in the second column of the table are the proportion of partial completes in the total delivered sample, after subsampling. The figures in the third and fourth columns are subsampling rates applied to the two White/Other subdomains in Panels 16 through 22.
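To make the domain-based selection concrete, the sketch below illustrates the logic described above: households with at least one Asian, Black, or Hispanic member are retained with certainty, while “White, other” households are subsampled at a rate that depends on NHIS completion status. The rates and field names are illustrative only, and a simple random draw is shown for clarity; the actual MEPS subsampling is carried out within the formal sample design rather than by independent random draws.

```python
import random

# Illustrative subsampling rates for the "White, other" domain, in the spirit
# of Table 1-3 (hypothetical values, not the rates used for any actual Panel).
RATE_WHITE_OTHER_COMPLETE = 0.77
RATE_WHITE_OTHER_PARTIAL = 0.49

def retain_for_meps(nhis_household, rng=random.random):
    """Return True if an NHIS household set aside for MEPS is retained.

    Households in the Asian, Black, or Hispanic domains are kept with
    certainty; "White, other" households are subsampled at a rate that
    depends on whether the NHIS interview was complete or only partial.
    """
    if nhis_household["domain"] in ("Asian", "Black", "Hispanic"):
        return True
    rate = (RATE_WHITE_OTHER_COMPLETE
            if nhis_household["nhis_complete"]
            else RATE_WHITE_OTHER_PARTIAL)
    return rng() < rate

# Example: a "White, other" household with a partially completed NHIS
# interview is retained with roughly 49 percent probability.
print(retain_for_meps({"domain": "White, other", "nhis_complete": False}))
```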
The Panel 26 sample distribution by domain, as detailed in Table 1-4, was affected by the pandemic and by overall NHIS response rates. This reduced the ability to oversample households in the minority domains, altering the proportional composition relative to previous years. This has implications for weighting and response rates, as households in the minority domains exhibit higher levels of response than households in the “White, other” domain.
Sample domain | Number | Percent |
---|---|---|
Asian | 623 | 6.44 |
Black | 1,443 | 14.92 |
Hispanic | 1,146 | 11.85 |
White, other | 5,632 | 58.22 |
NHIS complete | 830 | 8.58 |
NHIS partial complete | 623 | 6.44 |
Total | 9,674 |
The 2021 MEPS sample was received from AHRQ and NCHS in three deliveries. The first delivery, containing households sampled from the first quarter of the 2020 NHIS, was received on October 2, 2020. MEPS did not receive a sample delivery from the 2020 NHIS second quarter due to the pandemic. Households selected from the third quarter of the NHIS were delivered on December 2, 2020. To achieve a sample size comparable to previous years, additional households were selected from NHIS Panels 2 and 4, Quarter 1 of the 2020 NHIS. These Panels are not normally designated for MEPS use, and these households were added to the field period later, on February 9, 2021, as a supplemental third wave.
The October delivery of the first and largest portion of the new sample is instrumental to the project’s schedule for launching interviewing in early January each year. The partial file gives insight into the demographic and geographic distribution of the households in the new Panel. This information, combined with information on the older Panels continuing in the new year, guides project decisions on the number and location of new interviewers to recruit.
Upon receipt of the first portion of the 2021 sample, project staff also reviewed the NHIS sample file formats to identify any new variables or values and to make any necessary changes to the project programs that use the sample file information. Following this initial review, staff proceeded with the standard processing through which the NHIS households are reconfigured to conform to MEPS reporting unit definitions and prepared the files needed for advance mailouts and interviewer assignments. The early sample delivery also allows time for checking and updating NHIS addresses to improve the quality of the initial mailouts and to identify households that have moved since the NHIS interview.
Each year, the project makes a number of changes to the instrument used to collect MEPS-HC data, as well as to the field procedures followed by the interviewers who collect the data. The notable changes made for 2021 are detailed in this chapter.
The MEPS-HC computer-assisted personal interviewing (CAPI) instrument was modernized as part of a technology upgrade launched in spring 2018. For each data collection cycle since then, AHRQ and Westat have worked together to define a set of modifications to the CAPI instrument. Some modifications are new items or new sections, whereas others are updates or fixes to existing items.
In addition to substantive modifications during 2021, the CAPI instrument was “genericized” to accommodate the extended rounds added because of the COVID-19 pandemic. Genericizing was necessary to successfully administer the Round 6 through Round 9 interviews planned for 2021, 2022, and 2023, and it provides increased flexibility in case the MEPS Panel design needs to be extended or revised in future cycles. Instead of referring to the round number directly (such as 1, 2, or 3) in CAPI specifications and programming, all Panel rounds were assigned to one of four general round types.
While genericizing did not change which questions were administered in MEPS interviews, it required significant design, programming, and testing effort.
A few other global changes were also made across the CAPI instrument for 2021.
Section-specific changes for the 2021 data collection period, both spring and fall, are summarized below.
Calendar (CA). In response to feedback from experienced MEPS interviewers, the CA30 Records Grid was revised to add a column for the MEPS Record Keeper. This makes it easier to track which respondents used the MEPS Record Keeper tool and to encourage respondents to start using it if they are not already doing so.
Provider Probes (PP). A new show card (PP-16) was added at the final PP items, PP160 and PP320, with content similar to that of the Records Job Aid. Because the PP question series already includes numerous show cards, presenting this content as an additional show card is less burdensome for interviewers and respondents than referring them to materials outside the show card binder.
Telehealth (TH). Due to the pandemic, there was a global shift toward increased provision of health care via telehealth. To capture this type of care, a new “telehealth” (TH) event type was added in spring 2021. For health care identified as a telehealth event, a new TH utilization section was developed. The TH section closely mirrored the MV and OP utilization sections, although two new items were added specifically for TH events. The first item collects the mode of the telehealth visit, asking whether it was conducted by phone, video, or some other way (TH10). The second item asks whether the provider (or the place where the provider works) is owned or operated by a hospital (TH60). Additionally, existing PP questions about home health were revised from asking about “care received at home” to “care received from someone who visited your home” to ensure this was not confused with telehealth care. Throughout other instrument sections, wording exclusive to in-person visits (e.g., “seen,” “in person”) was also updated to allow for the inclusion of telehealth visits.
Provider Specialty and Types. In spring 2021, show cards were added for the items in the utilization sections that collect the doctor’s specialty (MV20, OP20, TH30) and type of medical provider seen (MV30, OP30, TH40). This change was made to reduce respondent cognitive burden and help cue recall.
COVID-19 (CV). In response to the COVID-19 pandemic, a new section was added in spring 2021 to collect information about delays in care due to the pandemic, including medical care, dental care, and prescription medicines. This section was continued in fall 2021, with the addition of questions about COVID-19 vaccination.
Health Insurance (HX) and Related Sections. Throughout the Health Insurance section, items that referred to state-specific Medicaid names were modified to additionally refer to “Medicaid.” This included revisions to question texts, fills, and context headers. This change was made to help cue respondents who may not be familiar with the name of their state-specific Medicaid program but are familiar with the Federal term “Medicaid.”
Supplements to the CAPI Instrument
Table 2-1 shows the supplements for the rounds administered in calendar year 2021. The major change for 2021 was the introduction of a new “Social and Health Experiences” self-administered questionnaire (SAQ), known internally as the Social Determinants of Health (SDOH) SAQ. For more information about the SDOH SAQ, please refer to Chapter 4, Section 4.5.
Supplement | Round 1 (Spring 2021) | Rounds 3, 5, 7 (Spring 2021) | Rounds 2, 4, 6, 8 (Fall 2021) |
---|---|---|---|
Child Health | X | ||
Access to Care | X | ||
Income | X | ||
Assets | Round 5 only | ||
Medical Provider Authorization Forms for HS, OP, and ER Events | X | X | X |
Medical Provider Authorization Forms for MV, TH, HH, and IC Events | X | X | |
Pharmacy Authorization Forms | X | X | |
Your Health and Health Opinions (SAQ/PSAQ) | Rounds 2, 4, 6 follow-up | X | |
Diabetes Care Supplement (DCS) | X | ||
Social and Health Experiences Survey (SDOH) | X | X | Rounds 1, 3, 5, 7 follow-up |
Testing for the spring 2021 (Rounds 1/3/5/7) instrument was conducted between September and December 2020. Testing for the fall 2021 (Rounds 2/4/6/8) instrument was conducted between March and June 2021. Since 2018, many of the testing approaches and procedures used for the technical upgrade have been continued or adapted to maintain a comprehensive testing plan that supports the ongoing instrument development schedule.
CAPI instrument development and testing included multiple programming/testing iterations that each lasted several weeks. Testing was conducted by a mix of corporate testers, MEPS project staff, and trained programming staff. Project and systems staff performed all testing in close coordination with the design team. For each of the spring and fall instruments, AHRQ received an alpha delivery and conducted its own testing. The following month, AHRQ received a beta delivery and conducted additional testing.
The testing ensured that the CAPI instrument followed the design as intended and assessed whether the layout of each question screen, and consistency across screens, met the requirements designed to minimize measurement error. Feature testing exercised all new features against specifications, including wording, text fills, legal and illegal responses, boundary conditions, and skip patterns. Testers validated every variation allowed by the specifications.
Both scripted and free-form testing were used throughout the development and testing process. A full suite of scripted test cases was defined by the design staff and analytic leads at Westat and is updated each cycle. These scripted test cases represent approximately 80 percent of the cases fielded, including common paths through the CAPI instrument across all Panel rounds. The test script suite was executed through alpha and beta for the spring and fall testing cycles.
In contrast, free-form testing focused on design changes in the current instrument build and ensured that any reported instrument bugs had been fixed. Free-form testing was also utilized to ensure the stability of the CAPI data model and to evaluate the stored data in new or unusual situations. Testers routinely pushed array limits, used back-up, changed answers, and used break-off and restart cases to challenge performance boundaries.
Additional testing components, including enhanced integration testing and ad hoc/free-form testing, were also conducted. The enhanced integration testing allowed project staff to check electronic Face Sheet information, test the Interviewer Assignment Sheet (IAS), and make entries into the electronic record of calls and refusal evaluation form. The ad hoc testing component used information derived from actual cases to verify that all management information was brought forward correctly from previous rounds. Using actual case data also allowed staff to check uncommon paths through the MEPS instrument so that specific changes to the questionnaire could be thoroughly tested.
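As an illustration of how scripted skip-pattern checks can be organized, the hypothetical sketch below shows one way a scripted test step might pair an item, a scripted response, and the item the instrument is expected to route to next. The data structure, the routing shown, and the function names are invented for illustration and do not represent the project’s actual testing tools.

```python
from dataclasses import dataclass

@dataclass
class TestStep:
    item: str           # question identifier presented to the tester
    response: str       # scripted response to enter
    expected_next: str  # item the skip logic should route to next

# A tiny scripted case; the item routing is assumed for illustration only.
SCRIPTED_CASE = [
    TestStep(item="TH10", response="VIDEO", expected_next="TH30"),
    TestStep(item="TH30", response="CARDIOLOGY", expected_next="TH40"),
]

def run_case(steps, get_next_item):
    """Replay a scripted case and report any skip-pattern deviations.

    `get_next_item` stands in for the instrument under test: it takes an
    item and a response and returns the next item actually displayed.
    """
    failures = []
    for step in steps:
        actual = get_next_item(step.item, step.response)
        if actual != step.expected_next:
            failures.append((step.item, step.expected_next, actual))
    return failures
```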
The manuals and the materials for the 2021 field effort were updated as needed to reflect changes to the questionnaire and management systems. Below is a description of the key changes to the materials and procedures.
Instructional Manuals
The field interviewer procedures manual was updated to address changes in field procedures and updates to the Interviewer Management System (IMS). In the fall of 2021, a PDF of the Field Interviewer Procedures Manual was added to the MEPS laptop to give interviewers access to a searchable electronic version of the manual.
Electronic Materials
To help prepare for upcoming interviews, the electronic face sheet in the IMS provides interviewers with information needed to contact their assigned households and familiarize themselves with the composition of the household and relevant details about their prior history with the survey. In 2021, the face sheet was expanded to show information for up to eight rounds. The Policy Booklet section was removed from the face sheet since all follow-up efforts were completed for that task at the end of 2020. A flag to indicate if there was an outstanding SDOH (Social and Health Experiences SAQ) from the previous round to collect was added to the face sheet for the fall 2021 data collection.
The IMS also contains an RU Information module for documenting operational information to help the next round’s interviewer effectively work each case, an RU Contact module for reporting address and telephone number changes identified prior to the CAPI interview, and the Interviewer Assignment Sheet (IAS), which supports follow-up for AFs and SAQs not completed at the time of the interview. A section was added to the IAS in 2021 to display paper Social and Health Experiences SAQ (SDOH SAQ) requests from CAPI.
New to interviewers in 2021 was the deployment of iPhones with mFOS, or the mobile field operating system. The mFOS application gives interviewers access to some of the same information and capabilities available in the IMS. They can view face sheets for their assigned cases, report address and telephone number changes, and enter electronic records of calls (EROCs) on the iPhone. Data from the iPhone and the IMS are synced upon transmission from the laptop. The mobile phones also offer navigation, email, and other administrative applications useful to interviewers. The phone also serves as a hot spot to allow for internet connectivity and data transmission.
Advance Contact and Other Case Materials
All respondent letters, monthly planners, and self-administered questionnaires were updated with the appropriate year references, and the Income Job Aid was updated with 2018 data. Further, the Monthly Planner, MEPS brochure, and the advance mailing envelope were redesigned with a more modern font and elements that provided a more cohesive look for the advance mailings.
The MEPSDocs.org website was repurposed for telephone interviewing, providing respondents with access to respondent cooperation and record keeping materials, and the show cards, in both English and Spanish.
In fall 2021, the AHRQ signature on letters was updated to that of Joel Cohen, Director of AHRQ’s Center for Financing, Access and Cost Trends.
Overview. For spring 2021 data collection, MEPS attempted to recruit approximately 140 new interviewers to join the team of approximately 270 interviewers who were active on MEPS at the start of 2021 data collection in early January. The goal was to expand the team and to start spring data collection with about 400 interviewers.
To put the recruiting and attrition numbers into perspective, Table 3-1 summarizes the MEPS spring data collection staffing for the period of 2017–2021.
Data collection period | Experienced interviewers staffed | New interviewers staffed | Total Interviewers for spring data collection |
---|---|---|---|
Spring 2017 | 359 | 87 | 446 |
Spring 2018 | 345 | 75 | 420 |
Spring 2019 | 325 | 27 | 352 |
Spring 2020 | 269 | 121 | 390 |
Spring 2021 | 272 | 147* | 419 |
* The spring 2021 total of 147 new interviewers includes the 36 interviewers who were not trained until mid-June, in an attrition training held to shore up fall staffing.
Recruiting Goals. Based on a projected sample size of approximately 24,200 RUs across the four Panels to be fielded for spring 2021 and the likely number of experienced MEPS interviewers available at the end of fall 2020 data collection (about 260), including a MEPS travel team of 10 to 12 members, Westat estimated needing to recruit approximately 140 new interviewers for the standard staffing model. The goal was to start data collection with approximately 400 interviewers actively working during the spring 2021 data collection period.
Westat uses the Field Interviewer Recruitment Module (FIRM) software designed to manage the data collector recruiting process. This system works in conjunction with BrassRing, an online application system used to collect, track, and manage applications for all positions at Westat. The BrassRing system collects applications from both external (new to Westat) and internal (current or former Westat field data collectors) applicants.
The main recruiting of new field interviewers for 2021 began in October 2020 and continued into early January 2021. Due to a relatively low number of applicants, MEPS developed a national telephone/traveling data collector position to expand opportunities to a national audience; because spring 2021 interviewing was heavily weighted toward the telephone, this new position seemed worth trying. Because attrition was higher during the virtual new hire training, MEPS decided to hold an attrition training to supplement the staff available to move into fall data collection in July. Recruitment for the attrition training began in early April and ended in early June 2021.
Recruiting Outcomes. During the main recruiting period, 146 candidates accepted job offers, of whom 121 started training and 111 finished. With the addition of these new trainees, the project began 2021 data collection with a total of 383 interviewers. During the attrition training period, the goal was to add 70 more interviewers. However, only 36 candidates accepted job offers and all 36 completed attrition training.
Interviewer Attrition During 2021 Data Collection. During the spring data collection, 64 new interviewers and 33 experienced interviewers were lost to attrition. An additional 30 new interviewers and 27 experienced interviewers were lost during the fall rounds. Total attrition for the year was 35.4 percent, the highest attrition level MEPS has experienced in the past 5 years. Looking forward to 2022, MEPS plans to expand the interviewing staff so that data collection can begin with at least 400 interviewers. The breakdown of interviewer attrition is shown in Tables 3-2, 3-3, and 3-4.
Data collection period | New interviewers lost | Experienced interviewers lost | Total interviewers lost | |||
---|---|---|---|---|---|---|
# | % | # | % | # | % | |
Spring 2017 | 18 | 20.7% | 24 | 6.7% | 42 | 9.4% |
Spring 2018 | 26 | 34.7% | 33 | 9.6% | 59 | 14.0% |
Spring 2019 | 8 | 29.6% | 56 | 17.2% | 64 | 18.2% |
Spring 2020 | 39 | 32.2% | 54 | 20.1% | 93 | 23.8% |
Spring 2021 | 64 | 40.8% | 33 | 12.1% | 97 | 22.6% |
Table 3-2 shows the overall attrition rate during the spring data collection period from 2017 through 2021. Although the total spring 2021 attrition rate of 22.6 percent was not quite as high as the spring 2020 rate of 23.8 percent, it was close. The high 2021 attrition appears to have been exacerbated by a virtual new hire training process that made it much easier for a new hire to quit, as evidenced by a 40.8 percent spring new hire attrition rate.
Data collection period | New interviewers lost | Experienced interviewers lost | Total interviewers lost | |||
---|---|---|---|---|---|---|
# | % | # | % | # | % | |
Fall 2017 | 10 | 14.5% | 44 | 13.1% | 54 | 13.4% |
Fall 2018 | 10 | 20.4% | 16 | 5.1% | 26 | 7.2% |
Fall 2019 | 4 | 21.0% | 20 | 7.4% | 24 | 8.3% |
Fall 2020 | 16 | 19.5% | 8 | 3.7% | 24 | 8.0% |
Fall 2021 | 30 | 31.6% | 27 | 11.3% | 57 | 17.1% |
Table 3-3 shows the overall attrition rate during the fall data collection period from 2017 through 2021. The total fall 2021 attrition rate of 17.1 percent was the highest fall attrition rate in the past 5 years. Among the earlier years, only the fall 2017 rate reached double digits, and that was because some MEPS PSUs were retired due to changes in the new sampling frame.
Data collection period | New interviewers lost | Experienced interviewers lost | Total interviewers lost | |||
---|---|---|---|---|---|---|
# | % | # | % | # | % | |
2017 | 28 | 32.2% | 68 | 18.9% | 96 | 21.5% |
2018 | 36 | 48.0% | 49 | 14.2% | 85 | 20.2% |
2019 | 12 | 44.4% | 76 | 23.4% | 88 | 25.0% |
2020 | 55 | 45.0% | 62 | 23.0% | 117 | 30.0% |
2021 | 94 | 58.6% | 60 | 22.1% | 152 | 35.4% |
Table 3-4 shows the annual attrition rate across new and experienced interviewers from 2017 through 2021. The annual attrition rate for 2021 was 35.4 percent, the highest rate in the past 5 years. The extremely high rate of attrition among new hires can be attributed in large part to the continuation of pandemic conditions, namely the heavy reliance on telephone interviewing. As noted above, the virtual training format also seems to have made it much easier for new hires to quit mid-training.
The overall structure for training new interviewers in 2021 departed from prior years because the training had to be administered remotely due to the COVID-19 pandemic. Training began with a home study, continued with a remote training conducted over Zoom for Government in early February 2021, and ended with a two-part, post-classroom home study component. An attrition training was also conducted in June 2021, with minor alterations based on the experience gained during the earlier session.
Pre-Training Activities. The home study package included a project laptop and an interactive, self-paced workbook with exercises and online modules, including videos and quizzes, administered through Westat’s Learning Management System (LMS). The LMS generated regular reports, allowing home office and field management staff to monitor the completion of each trainee’s home study. New hires received their home study package early enough to complete it before the remote training, but not so early that their introduction to important study concepts and project terminology would degrade before training began. For the attrition training, additional practice with the Zoom platform was added prior to the remote training.
Remote Training. The usual 7½-day format for in-person training was extended by 1 day to accommodate the completion of both synchronous and asynchronous content. During the June attrition training, trainees were given the weekend off to finish asynchronous content that had not been completed and to address personal needs affected by the remote approach. Because synchronous content had to accommodate trainees from the East Coast to the West Coast, synchronous training hours ran from 12 PM to 5:30 PM EST.
Training sessions used a “block” approach, with each training day consisting of a block of synchronous training and a block of asynchronous training. Trainees had synchronous training for some portion of each training day and completed required asynchronous blocks prior to the corresponding synchronous blocks.
For the 8½ days of project-specific training, each trainee was assigned to one of seven training classrooms (three for the June attrition training) staffed by a primary and support trainer, one or two classroom runners, and a Zoom host. Trainers for the 2021 new hire trainings were selected based on several criteria, including experience training with the CAPI instrument, overall project knowledge, and prior training experience. Prior to remote training, all training and support staff received training on the remote platform, the associated technologies, and the content, activities, and procedures associated with remote training.
The training sessions used a variety of formats for presenting material, including lecture, question-and-answer interactions, written exercises, group discussion of problems and resolutions, and activities in which trainees were required to seek answers by consulting project resource materials. In addition, full and “mini” mock interviews (or “mocks”) and dyad role-plays were used throughout the training, and were central to training on both the mechanics and substance of the CAPI instrument.
Mocks are scripted interviews usually led by a classroom trainer who serves as both trainer and “respondent” while trainees take turns as the interviewer. Full mocks present the entire interview from Re-enumeration through Closing, while a “mini” mock relies on preloaded data to allow the training to begin at the desired questionnaire section. For the remote training, the mocks were delivered in one of three ways: demonstration, simulation, and teleconference.
Mock 1 (Round 1) was demonstrated in a synchronous session, with trainers displaying the CAPI screens and trainees reading the questions from the screen and calling out the appropriate keyboard response to the questions.
Mock 2 (Round 3) was posted on the LMS as an interactive CAPI simulation, with respondent answers coded into the simulation. Although the simulation looked and behaved like the CAPI instrument, corrective feedback was given immediately when the trainee coded incorrectly.
Mock 3 (Round 5) was administered via teleconference call led by an experienced trainer with additional support for troubleshooting. Teleconference allowed for additional hands-on CAPI practice for trainees and gave the trainer the opportunity to evaluate trainee performance.
Mini-mocks and materials on the IMS were presented in one of three modes: synchronous training in the virtual classroom, CAPI simulation hosted on the LMS, and independent practice from hard-copy materials to allow for hands-on CAPI/IMS practice.
Dyads paired trainees in a virtual breakout room to conduct an interview with one trainee playing the role of interviewer, and the other using a script to play the respondent. Each dyad pair was observed by a dyad observer, either a field supervisor or other training staff. Dyads are an effective tool for reinforcing questionnaire concepts and building interviewer confidence in administering the instrument. They also provide trainers with an opportunity to assess each trainee’s interviewing skills and mastery of the questionnaire application.
The remote training component maintained the emphasis on interviewer behaviors and interviewing techniques that facilitate complete and accurate reporting. Trainers were instructed to reinforce good interviewing behaviors during mock interviews. Good interviewing behaviors include reading questions verbatim, training respondents to use records to aid recall, actively engaging respondents in the use of show cards, and using active listening and probing skills. Trainers called attention to instances in which interviewers demonstrated such behaviors. To enhance trainee awareness of behaviors that affect data quality, dyad scripts included instructions to take a “time-out” at certain items in the interview to highlight relevant data quality issues.
In the past, scripted lab material had been provided to trainers and trainees for in-person lab practice. Often, trainees who wanted additional CAPI practice would take the scripts with them to work on independently. For the remote training, Westat offered some hard-copy scripted materials to all trainees as required independent practice.
One hundred eleven new hires successfully completed the main training, and 36 successfully completed the attrition training.
Bilingual training followed a similar format to in-person training. Bilingual trainees participated in a 4-hour block of training on the last half-day of training. Trainees completed a Round 3 dyad in Spanish. The same format for dyads used in the main training was applied to bilingual training. Trainees divided into breakout rooms to complete the dyad with training staff visiting the breakout rooms to ensure good interviewing behaviors and an understanding of the CAPI instrument. Additionally, trainees used the breakout room approach to practice refusal conversion in Spanish. Eight new interviewers successfully completed 2021 bilingual training and four new interviewers completed the bilingual attrition training.
Post-Remote Training Activities. The post-classroom home study was administered in two parts for the main training and combined into one part for the attrition training (to allow trainees to complete the home study prior to the launch of the fall rounds). The first component was distributed on the last day of remote training, and new interviewers had to complete it successfully before beginning fieldwork. It contained an interactive exercise in BFOS Secure Messaging (BSM) and a mini-mock to be completed with a proxy respondent.
The home study also included a memo from the Field Director reviewing interviewers’ tasks in preparation for interviewing, along with an “early work period” documentation form to assist them in setting up a work plan with their supervisor and completing tasks in a timely manner. At the same time, all field supervisors received a memo from the Field Director outlining their role in the post-classroom training: setting clear expectations and providing support and ongoing training to their interviewers.
In addition to the home study, field supervisors engaged in additional post-training activities with new hires. New hires sat in on the report call of an experienced field interviewer and also reviewed assigned cases to report to their supervisor the best contact strategy for each. Field managers and field supervisors coordinated and implemented a mentoring/buddy plan that paired new hires with experienced FIs.
The new interviewers received the second component of the post-classroom home study about 6 weeks after the remote training. This component included both hard-copy materials and modules in the electronic LMS. It provided interviewers with additional training on respondent cooperation and participation in record-keeping activities, covered several important Re-enumeration topics and student RUs, and reinforced interviewer practices related to collecting quality data.
Spring 2021 Round 1/3/5/7 Home Study. The Round 1/3/5/7 home study in December 2020 followed established formats but was expanded to accommodate the introduction of the MEPS iPhone and mFOS (mobile Field Operating System), the extension of the rounds, telephone interviewing procedures, COVID-19 Mitigation protocols, the SDOH SAQ, and the new telehealth event type. The 3-hour self-paced program contained an instructional memo, independent CAPI practice, iPhone training, and a quiz.
In-Person Refresher Training. Due to the COVID-19 pandemic, the refresher training scheduled for August 2021 was canceled.
Fall 2021 Round 2/4/6/8 Home Study. The Round 2/4/6/8 home study in July 2021 followed established formats. The 1½-hour self-paced program contained an instructional memo, example materials, and a quiz. Topics included the extension of the rounds in response to the COVID-19 pandemic, additional training on telephone interviewing and the use of the telephone-interviewing website for respondents, COVID mitigation protocols, and follow-up cost-sharing document collection. New interviewers hired in the spring were required to complete a mock interview with their supervisor, field manager, or designated senior interviewer before beginning the fall rounds of data collection.
Weekly Newsletter. In 2021, MEPS continued offering its field interviewer newsletter in a weekly format. The newsletter allows for additional training opportunities in a concise format and the ability to deliver content as needed to the field. Topics include CAPI questionnaire topics, procedural content, and answers to field interviewer questions.
This chapter describes the MEPS-HC data collection operations and provides selected results for the eight rounds of MEPS-HC interviewing conducted in 2021. Selected comparisons to results of prior years are also presented. Tables showing results for all years of the study are provided in the Appendix.
MEPS data collection management relies on a set of interrelated systems and procedures designed to accomplish three goals: efficiency, data quality, and cost containment. The systems include the Basic Field Operating System (BFOS), which facilitates case management through case assignment, case status and hours reporting, data quality reporting, and interviewer efficiency. Related systems include the computer-assisted recorded interview (CARI) system and the MEPS supervisor dashboard, which was placed into production in 2018. The CARI system allows for review of recordings for selected interview items to assist in the assessment of interviewer performance and question assessment. The MEPS supervisor dashboard provides views into daily and weekly management tasks related to the tracking of hours per complete, key alerts from casework in the field, the management of weekly production goals, and a number of metrics designed to facilitate weekly field calls with interviewers regarding hours worked, production, and interview quality. These tools, along with the implementation of models designed to identify cases with a higher propensity for completion, as well as on-hold procedures designed to prevent the overwork of cases in the field, form a comprehensive framework for the management of MEPS data collection.
Due to the ongoing COVID-19 pandemic, the procedures followed in the 2021 data collection differed greatly from those of the years prior to 2020.
As in prior years, respondent contact materials provided respondents with the link to the MEPS website (www.meps.ahrq.gov); a toll-free number to Alex Scott, a study representative at Westat; and the link to the Westat website (www.westat.com). Calls received from the Alex Scott line were logged into the call-tracking system and the appropriate supervisor notified so that he/she could take the proper course of action.
The advance contact calls to Panel 26 Round 1 households were made by a subset of the experienced MEPS interviewers.
Typically, for Round 1 households, interviewers are instructed, with few exceptions, to make initial contact with the household in person. For later rounds, interviewers are allowed to make initial contacts to set appointments by telephone, so long as the household was cooperative in prior rounds. In response to COVID-19, all in-person interviewing ceased on March 17, 2020, and all contacts and interviews were conducted over the telephone. Prior to 2020, interviews conducted by telephone represented only 5-8 percent of interviews. Procedures for telephone and text contacts were developed and implemented in 2020. These were adjusted in 2021 to call for in-person contact where community spread was below a designated threshold. After initial contact, an in-person interview proceeded as scheduled when both parties agreed; otherwise, a telephone interview was scheduled.
Procedures for collecting the medical and pharmacy authorization forms for the Medical Provider Component (MPC) and the self-administered questionnaires (SAQs) underwent significant changes due to the pandemic. In 2020, interviewers mailed authorization forms to respondents, who returned them to the home office via business reply envelope (BRE). Shortly after the interview was completed, the interviewer generated the forms and mailed them from home, along with a BRE and the incentive check, and then placed a follow-up call within several days.
In 2021, the protocols from fall 2020 were expanded to address the steep decline in returned signed authorization forms experienced in 2020, including instituting a procedure for interviewers to place up to three reminder calls to ensure AFs were completed and returned or ready for pickup.
MEPS also continued the 2020 practice of contactless AF pickup instituted in the fall field period. MEPS continued a re-mail effort started in late fall 2020, mailing new sets of AFs to RUs where AFs were expected but not received. This was paired with the reminder calls for RUs with a larger number of AFs or hospital visits.
MEPS field managers, field directors, and the task leader for field operations continued to manage the field data collection in collaboration with the field supervisors, reinforcing the importance of balancing data quality with production and cost goals across regions. Field staff referred to this collaborative effort as the “No Region Left Behind” approach.
Throughout the year Westat continued to review data for all respondents reported to have been institutionalized in order to identify any individuals who might have been inappropriately classified and, as a result, treated as out of scope for MEPS data collection.
Data Collection Schedule. The sequence for beginning the spring rounds of data collection, most recently adjusted in 2014, was maintained for spring 2021. Data collection began with Round 5, followed by Round 3, and then Round 1. For Round 1 respondents, the later starting date allowed several additional weeks in which respondents could experience healthcare events to report in their Round 1 interview, giving them a more realistic understanding of what to expect in subsequent rounds of the study. To maintain the highest levels of MEPS data quality, a decision was made to extend Panels 23 and 24 to nine rounds; therefore, there was no exit round in 2021.
The field period dates for the eight rounds conducted in 2021 are shown in Table 4-1.
Round | Dates | No. of weeks in round |
---|---|---|
1 | January 24 – July 14 | 24 |
2 | July 28 – December 7 | 19 |
3 | January 17 – June 15 | 21 |
4 | July 21 – December 7 | 20 |
5 | January 10 – May 15 | 18 |
6 | July 28 – December 7 | 19 |
7 | January 10 – May 15 | 18 |
8 | July 21 – December 7 | 20 |
Data Quality (DQ) Monitoring. The MEPS DQ field monitoring system and procedures allowed supervisors and field managers to identify interviewers whose work deviated from quality standards and who might need additional coaching on methods for getting respondents to report their healthcare events more completely. CARI review was further integrated into weekly monitoring activities, with supervisors listening to portions of roughly 1,000 interviews per field period. These reviews were used to reinforce positive interviewing behaviors and techniques; listening to CARI also gave field supervisors direct exposure to interviewing behaviors that needed to be addressed. In some cases, CARI recording results led to interviewers being instructed to stop working until they received retraining, including administering a practice interview to their field supervisor. This effort was supported by DQ alerts built into the supervisor dashboard to identify possible DQ issues related to record use and event entry. Supervisors investigated these issues and retrained interviewers when necessary.
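The dashboard alert rules are not specified in detail here. As a hedged illustration only, the Python sketch below shows one way a dashboard might flag interviewers whose record-use or event-entry metrics fall well below those of their peers; the metric names, threshold, and data structure are assumptions for the example, not the production alert logic.

```python
# Hedged sketch: flagging interviewers for supervisor review when their rate
# of record-assisted reporting or mean events per interview falls well below
# the regional average. Metric names and threshold are illustrative only.
from statistics import mean

def dq_alerts(interviewers: list, min_ratio: float = 0.7) -> list:
    """Return interviewer IDs whose metrics fall below min_ratio of the mean."""
    avg_records = mean(i["record_use_rate"] for i in interviewers)
    avg_events = mean(i["events_per_interview"] for i in interviewers)
    flagged = []
    for i in interviewers:
        if (i["record_use_rate"] < min_ratio * avg_records
                or i["events_per_interview"] < min_ratio * avg_events):
            flagged.append(i["id"])
    return flagged

staff = [
    {"id": "FI-101", "record_use_rate": 0.82, "events_per_interview": 6.4},
    {"id": "FI-102", "record_use_rate": 0.35, "events_per_interview": 5.9},
    {"id": "FI-103", "record_use_rate": 0.78, "events_per_interview": 2.1},
]
print(dq_alerts(staff))  # ['FI-102', 'FI-103']
```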
Case Potential Listing. The project continued to use a model predicting the likelihood that a given case would yield a completed interview (“propensity to complete”) relative to other pending cases in a region. The model is designed to identify cases with a high likelihood of completion at that point in the field period relative to other pending cases. It is dynamic and is updated weekly based on the specific conditions of pending cases at that time. The model was reviewed in 2019 to determine whether updates were needed to better fit the data; the existing model was found to remain well suited to current interview conditions and remains in effect, including for telephone interviews.
Information from this model is integrated into BFOS (the system used for case management), providing propensity to complete as part of a comprehensive view of a case for a given week. Supervisors were to instruct interviewers, in the absence of other field information that would dictate otherwise, to attempt these cases during the next production week. Table 4-2 illustrates the potential categories used to classify cases on a weekly basis to promote field efficiency.
Potential categories for pending MEPS cases |
---|
High potential (unworked) |
High potential (worked) |
Appointment |
Low potential |
Low potential refusal |
Remainder |
Locating |
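As a hedged illustration of how weekly propensity scores might be combined with case status to assign the categories shown in Table 4-2, the short sketch below uses assumed field names (propensity, has_appointment, ever_refused, needs_locating, contact_attempts) and assumed cutoffs; it is not the production BFOS logic.

```python
# Hedged sketch: assigning weekly case-potential categories from an assumed
# propensity score and case-status flags. Thresholds and field names are
# illustrative assumptions, not the production model.

def assign_potential(case: dict, high_cutoff: float = 0.5) -> str:
    """Return a Table 4-2 style category for one pending case."""
    if case["needs_locating"]:
        return "Locating"
    if case["has_appointment"]:
        return "Appointment"
    if case["propensity"] >= high_cutoff:
        # Distinguish cases never attempted from those already worked.
        if case["contact_attempts"] == 0:
            return "High potential (unworked)"
        return "High potential (worked)"
    if case["ever_refused"]:
        return "Low potential refusal"
    if case["propensity"] >= 0.2:  # assumed lower band
        return "Low potential"
    return "Remainder"

# Example weekly run over pending cases in a region.
pending = [
    {"propensity": 0.72, "contact_attempts": 0, "has_appointment": False,
     "ever_refused": False, "needs_locating": False},
    {"propensity": 0.31, "contact_attempts": 4, "has_appointment": False,
     "ever_refused": True, "needs_locating": False},
]
for case in pending:
    print(assign_potential(case))
```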
Table 4-3 provides an overview of the data collection results for Panels 20 through 26, showing sample sizes, average interviewer hours per completed interview, and response rates. Table 4-4 presents the same final response rates, reformatted to facilitate by-round comparisons across panels and years. Both tables include the Round 7 and Round 8 data added in 2021.
Panel/round | Original sample | Split cases (movers) | Student cases | Out-of-scope cases | Net sample | Completes | Average interviewer hours/complete | Response rate (%) | Response rate goal | |
---|---|---|---|---|---|---|---|---|---|---|
Panel 20 | Round 1 | 10,854 | 496 | 85 | 117 | 11,318 | 8,318 | 12.5 | 73.5 | 80 |
Round 2 | 8,301 | 243 | 39 | 22 | 8,561 | 7,998 | 8.3 | 93.4 | 95 | |
Round 3 | 7,987 | 173 | 17 | 26 | 8,151 | 7,753 | 6.8 | 95.1 | 96 | |
Round 4 | 7,729 | 161 | 19 | 31 | 7,878 | 7,622 | 7.2 | 96.8 | 97 | |
Round 5 | 7,611 | 99 | 13 | 23 | 7,700 | 7,421 | 6.0 | 96.4 | 98 | |
Panel 21 | Round 1 | 9,851 | 462 | 92 | 89 | 10,316 | 7,674 | 5.9 | 74.4 | 80 |
Round 2 | 7,661 | 207 | 32 | 17 | 7,883 | 7,327 | 8.5 | 92.9 | 95 | |
Round 3 | 7,327 | 166 | 14 | 19 | 7,488 | 7,043 | 7.2 | 94.1 | 96 | |
Round 4 | 7,025 | 119 | 14 | 20 | 7,138 | 6,907 | 7.0 | 96.8 | 97 | |
Round 5 | 6,914 | 42 | 8 | 34 | 6,930 | 6,778 | 5.9 | 97.8 | 98 | |
Panel 22 | Round 1 | 9,835 | 352 | 68 | 86 | 10,169 | 7,381 | 12.8 | 72.6 | 80 |
Round 2 | 7,371 | 166 | 19 | 11 | 7,545 | 7,039 | 8.5 | 93.3 | 95 | |
Round 3 | 7,071 | 100 | 12 | 19 | 7,164 | 6,808 | 6.7 | 95.0 | 96 | |
Round 4 | 6,815 | 91 | 13 | 18 | 6,901 | 6,672 | 6.8 | 96.7 | 97 | |
Round 5 | 6,670 | 35 | 7 | 12 | 6,700 | 6,584 | 5.3 | 98.3 | 98 | |
Panel 23 | Round 1 | 9,960 | 193 | 46 | 110 | 10,089 | 7,351 | 12.5 | 72.9 | 80 |
Round 2 | 7,387 | 106 | 14 | 15 | 7,492 | 6,960 | 8.2 | 92.9 | 95 | |
Round 3 | 6,987 | 102 | 11 | 18 | 7,082 | 6,703 | 6.1 | 94.6 | 96 | |
Round 4 | 6,704 | 74 | 10 | 12 | 6,776 | 6,522 | 6.6 | 96.2 | 97 | |
Round 5 | 6,503 | 34 | 4 | 5 | 6,536 | 6,383 | 5.3 | 97.7 | 98 | |
Round 6 | 6,498 | 90 | 10 | 18 | 6,480 | 5,120 | 4.8 | 79.0 | 90 | |
Round 7 | 5,176 | 36 | 5 | 6 | 5,170 | 4,513 | 5.2 | 87.3 | 85 | |
Round 8 | 4,558 | 27 | 3 | 10 | 4,548 | 3,984 | 5.8 | 87.6 | 80 | |
Round 9 | 90 | |||||||||
Panel 24 | Round 1 | 9,976 | 153 | 43 | 82 | 10,090 | 7,186 | 11.8 | 71.2 | 80 |
Round 2 | 7,211 | 98 | 19 | 5 | 7,323 | 6,777 | 7.9 | 92.5 | 95 | |
Round 3 | 6,812 | 76 | 9 | 7 | 6,890 | 6,289 | 6.0 | 91.3 | 96 | |
Round 4 | 6,335 | 44 | 4 | 13 | 6,370 | 5,446 | 5.1 | 85.5 | 97 | |
Round 5 | 5,510 | 31 | 4 | 15 | 5,495 | 4,770 | 5.3 | 86.8 | 85 | |
Round 6 | 4,816 | 22 | 8 | 8 | 4,808 | 3,959 | 5.7 | 82.3 | 80 | |
Round 7 | 87 | |||||||||
Panel 25 | Round 1 | 10,008 | 184 | 38 | 78 | 10,152 | 6,265 | 9.6 | 61.7 | 80 |
Round 2 | 5,907 | 49 | 14 | 12 | 5,958 | 4,677 | 5.5 | 78.5 | 95 | |
Round 3 | 5,191 | 38 | 5 | 2 | 5,189 | 4,230 | 6.1 | 81.5 | 80 | |
Round 4 | 4,314 | 40 | 10 | 7 | 4,307 | 3,685 | 7.3 | 85.6 | 97 | |
Round 5 | 85 | |||||||||
Panel 26 | Round 1 | 9,674 | 160 | 29 | 68 | 9,795 | 5,882 | 11.1 | 60.1 | 70 |
Round 2 | 6,047 | 83 | 11 | 2 | 6,045 | 4,799 | 9.0 | 79.4 | 95 | |
Round 3 | 83 |
 | Round 1 | Round 2 | Round 3 | Round 4 | Round 5 | Round 6 | Round 7 | Round 8 |
---|---|---|---|---|---|---|---|---|
2012 | ||||||||
Panel 17 | 78.2 | 94.2 | ||||||
Panel 16 | 96.1 | 97.3 | ||||||
Panel 15 | 98.2 | |||||||
2013 | ||||||||
Panel 18 | 74.2 | 92.9 | ||||||
Panel 17 | 95.2 | 95.5 | ||||||
Panel 16 | 97.6 | |||||||
2014 | ||||||||
Panel 19 | 71.8 | 93.6 | ||||||
Panel 18 | 94.5 | 97.1 | ||||||
Panel 17 | 98.5 | |||||||
2015 | ||||||||
Panel 20 | 73.5 | 93.4 | ||||||
Panel 19 | 94.7 | 96.7 | ||||||
Panel 18 | 98.4 | |||||||
2016 | ||||||||
Panel 21 | 74.4 | 93.0 | ||||||
Panel 20 | 95.1 | 96.8 | ||||||
Panel 19 | 98.3 | |||||||
2017 | ||||||||
Panel 22 | 72.6 | 93.3 | ||||||
Panel 21 | 94.1 | 96.8 | ||||||
Panel 20 | 96.4 | |||||||
2018 | ||||||||
Panel 23 | 72.9 | 92.9 | ||||||
Panel 22 | 95.0 | 96.7 | ||||||
Panel 21 | 97.8 | |||||||
2019 | ||||||||
Panel 24 | 71.2 | 92.5 | ||||||
Panel 23 | 94.6 | 96.2 | ||||||
Panel 22 | 98.3 | |||||||
2020 | ||||||||
Panel 25 | 61.7 | 78.5 | ||||||
Panel 24 | 91.3 | 85.5 | ||||||
Panel 23 | 97.7 | 79.0 | ||||||
2021 | ||||||||
Panel 26 | 60.1 | 79.4 | ||||||
Panel 25 | 81.5 | 85.6 | ||||||
Panel 24 | 86.8 | 82.3 | ||||||
Panel 23 | 87.3 | 87.6 |
Response rates for most rounds conducted in 2021 followed a pattern similar to 2020, declining relative to the rates from 2018 and 2019. With the shift to telephone data collection continuing throughout 2021, the Round 1 response rate was particularly affected.
Other rounds showed similar relative declines given the limited opportunity for in-person contact. Because of this decline, Panels 23 and 24 were extended to nine rounds to maintain the sample.
While the reduction was not as large as in 2020, hours per complete for each panel/round were lower than in pre-pandemic years because of the large amount of telephone work. The biggest impact was in Round 1, where the average was 11.1 hours per complete, compared with 11.7 hours over the 4 years before the pandemic.
Components of Response and Nonresponse
Table 4-5 summarizes the components of nonresponse for Round 1 households by panel, beginning in 2016. As the table shows, prior to 2020 the components of nonresponse other than refusals, the “not located” and “other nonresponse” categories, remained relatively stable; beginning in 2020, however, these categories increased noticeably. Although “other nonresponse” declined somewhat in 2021, both categories remain elevated compared with 2019 and earlier. The larger year-to-year changes are reflected in the percentage of refusals, where increases and decreases in the percentage of refusals align closely with corresponding decreases and increases in the completion rate.
 | 2016 P21R1 | 2017 P22R1 | 2018 P23R1 | 2019 P24R1 | 2020 P25R1 | 2021 P26R1 |
---|---|---|---|---|---|---|
Total sample | 10,405 | 10,255 | 10,199 | 10,172 | 10,230 | 9,863 |
Out of scope (%) | 0.9 | 0.8 | 1.1 | 0.8 | 0.8 | 0.7 |
Complete (%) | 74.4 | 72.6 | 72.9 | 70.6 | 61.2 | 59.6 |
Nonresponse (%) | 25.6 | 27.4 | 27.1 | 28.6 | 38.0 | 39.7 |
Refusal (%) | 20.2 | 21.8 | 22.4 | 24.0 | 28.7 | 31.2 |
Not located (%) | 3.7 | 3.9 | 3.1 | 3.1 | 3.2 | 4.3 |
Other nonresponse (%) | 1.7 | 1.7 | 1.7 | 1.5 | 6.1 | 4.2 |
Tables 4-6 through 4-13 summarize results for additional aspects of the 2021 data collection. Because Round 1 is the most difficult of all the rounds, the presentation focuses primarily on Panel 26, Round 1.
 | 2016 P21R1 | 2017 P22R1 | 2018 P23R1 | 2019 P24R1 | 2020 P25R1 | 2021 P26R1 |
---|---|---|---|---|---|---|
Original NHIS sample (N) | 9,851 | 9,835 | 9,839 | 9,864 | 9,866 | 9,509 |
Percent complete in NHIS | 77.6 | 81.0 | 80.4 | 84.2 | 89.3 | 85.3 |
Percent partial complete in NHIS | 22.4 | 19.0 | 19.6 | 15.8 | 10.7 | 14.7 |
Percent complete for NHIS completes | 77.3 | 75.4 | 75.4 | 73.5 | 63.5 | 63.1 |
Percent complete for NHIS partial completes | 64.8 | 62.0 | 63.6 | 60.3 | 46.8 | 44.1 |
Note: Figures shown are based on original NHIS sample and exclude RUs added to the sample as “splits” and “students.”
NHIS Completion Status
Each year the MEPS sample includes a number of households classified in the NHIS as “partial completes,” in which the interviewer was able to complete part, but not all, of the full NHIS interview. Since the NHIS redesign implemented in 2018, the partial completes delivered to MEPS have included some cases that completed only the roster module of the NHIS. The MEPS experience has been that for many of these NHIS cases, the difficulty experienced by the NHIS interviewer carries over to the MEPS interview: the MEPS response rate for the NHIS partial completes is substantially lower than for the NHIS completes. As noted in Chapter 1, for the 2021 sample, AHRQ repeated the step taken since 2012 of sampling the NHIS partial completes in the “White/other” category at a lower rate than the NHIS completes.
The upper portion of Table 4-6 shows the proportion of partial completes in the sample over recent years. The proportion of the 2021 sample classified as partial complete (14.7 percent) was lower than in all previous years shown in the table except 2020 and was closer to the 2019 level. The lower portion of the table shows the persistent and substantial difference in response rate between these two components of the sample. Among the cases originally delivered from the NHIS (that is, excluding new reporting units discovered during MEPS interviewing), the response rate for the NHIS partial completes has historically been around 12 percentage points lower than that for the NHIS completes. In 2020, that difference jumped to 16.7 percentage points, and in 2021 it reached 19 percentage points.
Sample Domain
Table 4-7 breaks out response information for the NHIS completes and partial completes by sample domain categories, including the veterans domain introduced in Panel 24. Table 4-7, unlike Table 4-6, does include reporting units added to the sample during Round 1 data collection; it shows the differential in response rates between the NHIS partial completes and full completes persisting across all of the domains. The difference across the full 2021 sample was 18.2 percentage points, with NHIS partial completes responding at a lower rate in all domains. Within the individual domains, the difference between the response rate for the NHIS completes and the NHIS partials was greatest for the White/other domain (22 percentage points).
Domain/NHIS status | Net sample (N) | Complete (%) | Refusal (%) | Not located (%) | Other nonresponse (%) |
---|---|---|---|---|---|
Asian | 638 | 54.9 | 34.7 | 5.5 | 3.9 |
NHIS complete | 523 | 57.6 | 33.8 | 5.2 | 3.4 |
NHIS partial complete | 115 | 42.6 | 44.3 | 7.0 | 6.1 |
Black | 1,173 | 67.3 | 24.7 | 5.2 | 2.8 |
NHIS complete | 961 | 70.1 | 23.3 | 11.9 | 2.4 |
NHIS partial complete | 212 | 54.2 | 31.1 | 9.9 | 4.7 |
Hispanic | 1,483 | 65.1 | 26.6 | 4.7 | 3.6 |
NHIS complete | 1,192 | 67.6 | 24.7 | 4.4 | 3.4 |
NHIS partial complete | 291 | 54.6 | 34.4 | 6.2 | 4.8 |
White/other | 6,501 | 58.1 | 33.3 | 4.0 | 4.6 |
NHIS complete | 5,658 | 61.0 | 30.7 | 3.8 | 4.5 |
NHIS partial complete | 843 | 39.0 | 50.5 | 5.1 | 5.3 |
All groups | 9,795 | 60.0 | 31.4 | 4.4 | 4.2 |
NHIS complete | 8,334 | 62.8 | 29.2 | 4.0 | 4.0 |
NHIS partial complete | 1,461 | 44.6 | 44.0 | 6.2 | 5.2 |
Note: Includes reporting units added to sample as “splits” and “students” from original NHIS households, which were given the same “complete” or “partial complete” designation as the original household.
Refusals and Refusal Conversion
Table 4-8 summarizes the results of refusal conversion efforts by panel. The rate of “ever refused” for RUs in Panel 26 increased by 5.6 percentage points to 40.4 percent, its highest level across the panels shown. The percentage of refusing RUs converted in Round 1 rebounded to 19.3 percent from a low of 12.3 percent in 2020.
Panel | Net sample (N) | Ever refused (%) | Converted (%) | Final refusal rate (%) | Final response rate (%) |
---|---|---|---|---|---|
Panel 20 | 11,318 | 30.1 | 29.2 | 21.0 | 73.5 |
Panel 21 | 10,316 | 29.1 | 29.0 | 20.2 | 74.4 |
Panel 22 | 10,169 | 30.1 | 27.6 | 21.8 | 72.6 |
Panel 23 | 10,089 | 31.3 | 25.6 | 22.4 | 72.9 |
Panel 24 | 10,090 | 32.6 | 23.4 | 24.2 | 71.2 |
Panel 25 | 10,152 | 34.8 | 12.3 | 28.9 | 61.7 |
Panel 26 | 9,795 | 40.4 | 19.3 | 31.4 | 60.0 |
Tracing and Locating
Table 4-9 shows the results of locating efforts for households that required tracing during the Round 1 field period, by panel. The percentage of households requiring some tracing in 2021 (11.3 percent) was 0.4 percentage points lower than in 2020; the final rate of households not located after tracing efforts (4.3 percent) was at its highest point since 2015, although still within the 3.0-4.3 percent range for the 7-year period shown in the table.
Panel | Total sample (N) | Ever traced (%) | Not located (%) |
---|---|---|---|
Panel 20 | 11,435 | 14.0 | 4.3 |
Panel 21 | 10,405 | 12.8 | 3.7 |
Panel 22 | 10,228 | 13.0 | 3.9 |
Panel 23 | 10,199 | 12.7 | 3.0 |
Panel 24 | 10,172 | 12.6 | 3.0 |
Panel 25 | 10,230 | 11.7 | 3.2 |
Panel 26 | 9,863 | 11.3 | 4.3 |
Interview Length
Table 4-10 shows the mean length (in minutes) of interviews conducted without interruption in a single session for Panels 20-26. With the pandemic shutdown beginning in 2020, all interviewing moved to the telephone; in 2021, a large share of interviews were still conducted by telephone, and these took longer because interviewers had to read the show cards aloud.
Round | Panel 20 | Panel 21 | Panel 22 | Panel 23 | Panel 24 | Panel 25 | Panel 26 |
---|---|---|---|---|---|---|---|
Round 1 | 76.4 | 75.5 | 79.9 | 78.1 | 79.5 | 89.0 | 92.9 |
Round 2 | 86.3 | 85.3 | 88.8 | 88.2 | 87.0 | 89.7 | 93.3 |
Round 3 | 89.7 | 93.4 | 93.0 | 92.6 | 98.5 | 100.0 | |
Round 4 | 80.5 | 82.7 | 84.3 | 86.8 | 86.2 | 93.2 | |
Round 5 | 85.3 | 76.0 | 78.8 | 78.7 | 97.1 | ||
Round 6 | 88.4 | 89.7 | |||||
Round 7 | 96.6 | ||||||
Round 8 | 90.1 |
Mean Contact Attempts Per Case
Table 4-11 shows mean contact attempts, by mode and NHIS completion status, for all cases in Round 1 of Panels 24-26. Overall, the number of contacts required per case in Panel 26 remained elevated compared with 2019 and earlier, although it was about one attempt lower than in 2020. The elevated level is chiefly attributed to the challenges of the COVID-19 pandemic and the shift to telephone interviewing. As in prior years, the NHIS partial complete cases in Panel 26 required substantially greater effort than the NHIS completes, including roughly 0.8 additional in-person contacts per household.
Contact type | Panel 24, Round 1 | Panel 25, Round 1 | Panel 26, Round 1 | ||||||
---|---|---|---|---|---|---|---|---|---|
 | All RUs | Complete | Partial | All RUs | Complete | Partial | All RUs | Complete | Partial |
N | 9,864 | 8,306 | 1,558 | 9,866 | 8,814 | 1,052 | 9,509 | 8,113 | 1,396 |
% of all RUs | 100 | 84.2 | 15.8 | 100 | 89.3 | 10.7 | 100 | 85.3 | 14.7 |
In-person | 5.5 | 5.4 | 6.3 | 2.6 | 2.5 | 2.6 | 2.4 | 2.3 | 3.1 |
Telephone | 1.3 | 1.2 | 1.6 | 9.7 | 9.5 | 11.6 | 8.8 | 8.7 | 9.8 |
Total | 7.3 | 7.1 | 8.5 | 14.4 | 14.1 | 17.0 | 13.1 | 12.8 | 14.9 |
During the Respondent Forms section of the MEPS CAPI interview, interviewers are prompted to ask respondents to sign the authorization forms (AFs) needed to conduct the MPC of MEPS. Authorization forms are requested for each unique person-provider pairing identified during the interviews as a source of care to a key member of the household. Medical provider AFs are requested for physicians seen in an office-based setting; for inpatient, outpatient, or emergency room care received in a hospital; for care received from a home health agency; for telehealth; and for certain stays in long-term care institutions. Pharmacy AFs are requested for each pharmacy from which a household member obtained prescription medicines.
Table 4-12 shows round-by-round signing rates for the medical provider AFs for Panels 19 through 26. Signing rates increased from 2020 but remained lower than in previous years.
Panel/round | Authorization forms requested | Authorization forms signed | Signing rate (%) | |
---|---|---|---|---|
Panel 19 | Round 1 | 2,189 | 1,480 | 67.6 |
Round 2 | 22,671 | 17,190 | 75.8 | |
Round 3 | 20,582 | 14,534 | 70.6 | |
Round 4 | 17,102 | 13,254 | 77.5 | |
Round 5 | 15,330 | 11,425 | 74.5 | |
Panel 20 | Round 1 | 2,354 | 1,603 | 68.1 |
Round 2 | 25,334 | 18,479 | 72.9 | |
Round 3 | 22,851 | 15,862 | 69.4 | |
Round 4 | 18,234 | 14,026 | 76.9 | |
Round 5 | 16,274 | 12,100 | 74.4 | |
Panel 21 | Round 1 | 2,037 | 1,396 | 68.5 |
Round 2 | 22,984 | 17,295 | 75.2 | |
Round 3 | 20,802 | 14,898 | 71.6 | |
Round 4 | 16,487 | 13,110 | 79.5 | |
Round 5 | 20,443 | 16,247 | 79.5 | |
Panel 22 | Round 1 | 2,274 | 1,573 | 69.2 |
Round 2 | 22,913 | 17,530 | 76.5 | |
Round 3 | 26,436 | 19,496 | 73.7 | |
Round 4 | 23,249 | 18,097 | 77.8 | |
Round 5 | 17,171 | 12,168 | 70.9 | |
Panel 23 | Round 1 | 1,982 | 1,533 | 77.3 |
Round 2 | 29,576 | 21,850 | 73.9 | |
Round 3 | 23,365 | 14,575 | 62.4 | |
Round 4 | 19,220 | 13,483 | 70.2 | |
Round 5 | 17,569 | 10,903 | 62.1 | |
Round 6 | 12,701 | 8,002 | 63.0 | |
Round 7 | 13,254 | 8,108 | 61.2 | |
Round 8 | 11,589 | 7,624 | 65.8 | |
Panel 24 | Round 1 | 2,285 | 1,306 | 57.2 |
Round 2 | 24,755 | 15,865 | 64.1 | |
Round 3 | 22,657 | 11,522 | 50.9 | |
Round 4 | 14,612 | 7,716 | 52.8 | |
Round 5 | 15,992 | 8,941 | 55.9 | |
Round 6 | 11,366 | 6,658 | 58.6 | |
Panel 25 | Round 1 | 3,110 | 1,242 | 39.9 |
Round 2 | 15,259 | 7,292 | 47.8 | |
Round 3 | 15,932 | 8,100 | 50.8 | |
Round 4 | 11,252 | 7,204 | 64.0 | |
Panel 26 | Round 1 | 2,432 | 1,151 | 47.3 |
Round 2 | 17,765 | 10,564 | 59.5 |
Calculation of the round-by-round collection rate for the medical provider authorization forms is based on all forms requested during a round. The rates calculated for Rounds 2-8 include forms fielded but not signed in an earlier round (nonresponse). Included as well were forms that were fielded in an earlier round and signed but rendered obsolete because the person had another health event with the provider after the date on which the original form was signed.
Table 4-13 shows signing rates for pharmacy authorization forms for Panels 19 through 26. Pharmacy authorization forms are requested in Rounds 2 through 9, with follow-up for nonresponse in subsequent rounds similar to that for medical provider authorization forms. The decline in signing rates for 2020 can be attributed to the move to telephone interviewing as a result of the COVID-19 pandemic. As with the medical provider authorization forms, there was a slight increase in the signing rate in 2021, but it remained lower than in years prior to 2020.
Panel/round | Authorization forms requested | Authorization forms signed | Signing rate (%) | |
---|---|---|---|---|
Panel 19 | Round 2 | 10,749 | 8,261 | 76.9 |
Round 3 | 9,618 | 6,902 | 71.8 | |
Round 4 | 8,557 | 6,579 | 76.9 | |
Round 5 | 7,767 | 5,905 | 76.0 | |
Panel 20 | Round 2 | 12,074 | 8,796 | 72.9 |
Round 3 | 10,577 | 7,432 | 70.3 | |
Round 4 | 9,099 | 6,945 | 76.3 | |
Round 5 | 8,312 | 6,339 | 76.3 | |
Panel 21 | Round 2 | 10,783 | 7,985 | 74.1 |
Round 3 | 9,540 | 6,847 | 71.8 | |
Round 4 | 8,172 | 6,387 | 78.2 | |
Round 5 | 6,684 | 5,336 | 79.8 | |
Panel 22 | Round 2 | 10,510 | 7,919 | 75.4 |
Round 3 | 8,053 | 5,953 | 73.9 | |
Round 4 | 7,284 | 5,670 | 77.8 | |
Round 5 | 8,048 | 5,726 | 71.1 | |
Panel 23 | Round 2 | 8,834 | 6,514 | 73.8 |
Round 3 | 9,614 | 6,205 | 64.5 | |
Round 4 | 8,486 | 5,900 | 69.5 | |
Round 5 | 8,067 | 5,101 | 63.2 | |
Round 6 | 5,668 | 3,418 | 60.3 | |
Round 7 | 5,417 | 3,345 | 61.8 | |
Round 8 | 5,182 | 3,341 | 64.5 | |
Panel 24 | Round 2 | 10,265 | 6,676 | 65.0 |
Round 3 | 9,096 | 4,831 | 53.1 | |
Round 4 | 7,100 | 3,636 | 51.2 | |
Round 5 | 6,528 | 3,682 | 56.4 | |
Round 6 | 4,783 | 2,663 | 55.7 | |
Panel 25 | Round 2 | 6,783 | 3,180 | 46.9 |
Round 3 | 6,114 | 3,146 | 51.5 | |
Round 4 | 4,640 | 2,888 | 62.2 | |
Panel 26 | Round 2 | 6,961 | 4,105 | 59.0 |
Self-administered questionnaires (SAQs) are requested from key adult household members in Rounds 2 and 4. Forms that are not collected in Rounds 2 and 4 are requested again in Rounds 3 and 5. In fall 2021, SAQs were also requested from Panel 23 Round 8 and Panel 24 Round 6 respondents. Table 4-14 shows both the round-specific response rates and the combined rates after the follow-up round was completed. See Section 4.5 for information about the Social Determinants of Health Self-Administered Questionnaire (SDOH SAQ) methodology and results.
Response rates have been declining over time. Notably, 2020 saw a significant drop in response rate as a result of telephone interviewing due to COVID-19. Response rates for initial requests increased slightly in 2021 but remained relatively low. Overall procedures for the distribution and collection of hard-copy materials have not changed, with the exception of additional concentrated follow-up, and additional evaluation is underway to understand and attempt to improve the hard-copy collection rates.
Panel/round | SAQs requested | SAQs completed | SAQs refused | Other nonresponse | Response rate (%) | |
---|---|---|---|---|---|---|
Panel 20 | Round 2 | 14,077 | 10,885 | 1,223 | 1,966 | 77.3 |
Round 3 | 2,899 | 1,329 | 921 | 649 | 45.8 | |
Combined, 2015 | 14,077 | 12,214 | 2,144 | 2,615 | 86.8 | |
Round 4 | 13,068 | 10,572 | 1,127 | 1,371 | 80.9 | |
Round 5 | 2,262 | 1,001 | 891 | 370 | 44.3 | |
Combined, 2016 | 13,068 | 11,573 | 2,018 | 1,741 | 88.6 | |
Panel 21 | Round 2 | 13,143 | 10,212 | 1,170 | 1,761 | 77.7 |
Round 3 | 2,585 | 1,123 | 893 | 569 | 43.4 | |
Combined, 2016 | 13,143 | 11,335 | 2,063 | 2,330 | 86.2 | |
Round 4 | 12,021 | 9,966 | 1,149 | 906 | 82.9 | |
Round 5 | 2,078 | 834 | 884 | 360 | 40.1 | |
Combined, 2017 | 12,021 | 10,800 | 2,033 | 1,266 | 89.8 | |
Panel 22 | Round 2 | 12,304 | 9,929 | 1,086 | 1,289 | 80.7 |
Round 3 | 2,287 | 840 | 749 | 698 | 36.7 | |
Combined, 2017 | 12,304 | 10,769 | 1,835 | 1,987 | 87.5 | |
Round 4 | 11,333 | 8,341 | 1,159 | 1,833 | 73.6 | |
Round 5 | 2,090 | 811 | 896 | 383 | 38.8 | |
Combined, 2018 | 11,333 | 9,152 | 2,055 | 2,216 | 80.8 | |
Panel 23 | Round 2 | 12,349 | 8,711 | 1,364 | 1,289 | 70.5 |
Round 3 | 2,364 | 819 | 907 | 638 | 34.6 | |
Combined, 2018 | 12,349 | 9,530 | 2,271 | 1,927 | 77.2 | |
Round 4 | 11,290 | 8,554 | 1,515 | 1,221 | 75.8 | |
Round 5 | 2,711 | 983 | 923 | 805 | 36.3 | |
Combined, 2019 | 11,290 | 9,537 | 2,438 | 2,026 | 84.5 | |
Round 6 | 8,537 | 4,732 | 682 | 3,123 | 55.4 | |
Round 7 | 3,229 | 1,123 | 707 | 1,399 | 34.8 | |
Combined, 2020 | 8,537 | 5,855 | 1,389 | 4,522 | 68.6 | |
Round 8 | 6,446 | 3,377 | 799 | 2,270 | 52.4 | |
Panel 24 | Round 2 | 12,027 | 8,726 | 1,641 | 1,660 | 72.6 |
Round 3 | 2,810 | 860 | 832 | 1,118 | 30.6 | |
Combined, 2019 | 12,027 | 9,586 | 2,473 | 2,778 | 79.7 | |
Round 4 | 9,257 | 4,247 | 786 | 4,224 | 45.9 | |
Round 5 | 4,224 | 1,476 | 838 | 1,910 | 34.9 | |
Combined, 2020 | 9,257 | 5,723 | 1,624 | 6,134 | 61.8 | |
Round 6 | 6,440 | 3,196 | 819 | 2,425 | 49.6 | |
Panel 25 | Round 2 | 8,109 | 3,555 | 529 | 4,025 | 43.8 |
Round 3 | 4,016 | 1,322 | 717 | 1,977 | 32.9 | |
Combined, 2020 | 8,109 | 4,877 | 1,246 | 6,002 | 60.1 | |
Round 4 | 6,089 | 3,309 | 850 | 1,930 | 54.3 | |
Panel 26 | Round 2 | 8,419 | 4,609 | 1,009 | 2,801 | 54.7 |
In Rounds 3 and 5, key adult household members reported as having been diagnosed with diabetes were asked to complete a short SAQ, the Diabetes Care Supplement (DCS). Forms not completed for pickup at the time of the interviewer’s visit were followed up by telephone in the latter stages of Rounds 3 and 5; unlike the SAQ, there was no follow-up in the subsequent round for forms not collected in the round when first requested. Response rates for the DCS for Panels 18 through 25 are shown in Table 4-15. Completion rates for the DCS have shown a modest but relatively steady decline over time; in 2021, both the number of requests and the response rate dropped noticeably.
Panel/round | DCSs requested | DCSs completed | Response rate (%) | |
---|---|---|---|---|
Panel 18 | Round 3 | 1,362 | 1,182 | 86.8 |
Round 5 | 1,342 | 1,187 | 88.5 | |
Panel 19 | Round 3 | 1,272 | 1,124 | 88.4 |
Round 5 | 1,316 | 1,144 | 87.2 | |
Panel 20 | Round 3 | 1,412 | 1,190 | 84.5 |
Round 5 | 1,386 | 1,174 | 84.9 | |
Panel 21 | Round 3 | 1,422 | 1,170 | 82.5 |
Round 5 | 1,481 | 1,123 | 75.8 | |
Panel 22 | Round 3 | 1,453 | 1,074 | 73.9 |
Round 5 | 1,348 | 1,018 | 75.5 | |
Panel 23 | Round 3 | 1,464 | 1,101 | 75.2 |
Round 5 | 1,350 | 933 | 69.1 | |
Round 7 | 1,018 | 648 | 63.7 | |
Panel 24 | Round 3 | 1,350 | 843 | 62.4 |
Round 5 | 1,082 | 599 | 55.4 | |
Panel 25 | Round 3 | 963 | 514 | 53.4 |
There is an increasing policy and research focus in the United States on determinants of health other than the use of healthcare services. The Social Determinants of Health (SDOH) study was developed so that AHRQ could create a robust and readily available database that would contribute to the larger understanding of trends and provide health system leaders, policymakers, researchers, and other stakeholders with data and information to improve health care quality and population health outcomes.
Social and behavioral factors play important roles in physical and mental health, though they have not been traditionally taken into account in the health care system. Some provider groups are now collecting information on these social and behavioral factors in order to understand determinants of health and design appropriate interventions. Adding measures of social and behavioral determinants of health to the MEPS will allow researchers to investigate the relationship between these measures and measures of healthcare use and health expenditures in a nationally representative sample of adults. The SDOH SAQ is a short supplemental survey about these social, environmental, and behavioral influences on health.
In developing this SAQ, AHRQ consulted with several experts in the area to identify questions that had already been tested and widely accepted. The SDOH supplement contained a total of 55 items, all drawn from Federal surveys or from instruments that had been carefully validated. It was estimated that the survey would take 7 minutes to complete.
All cohorts of the MEPS Panel were included in this study; some were in their first round of data collection, whereas others were in later rounds. In addition to the MEPS household respondent, most other adults ages 18+ in the household were eligible to participate.
In spring 2021, questions in the CAPI instrument for the Round 1, 3, 5, and 7 interviews identified RU members who were eligible to participate in the SDOH survey. This included most RU members age 18 or older or in age category 4, which covers people with an estimated age of 16 to 23 years. This selection process resulted in over 33,451 individuals across more than 19,300 households being invited to participate in the SDOH.
A multimode web and paper approach was selected for the SDOH SAQ primarily to protect respondents’ privacy, given the sensitive nature of some of the items, especially those about adverse childhood experiences. Web completion was the main mode, with paper offered to those with barriers to internet access.
Contact Modes. During the spring 2021 CAPI interview, the respondent was asked to provide contact information for each RU member who was eligible to complete the SDOH SAQ. The CAPI interviewer requested each eligible person’s cell phone number, permission to send short message service (SMS) text messages, email address, first and last name (if an alias was provided in the re-enumeration section of the interview), and preferred language for communications about the SDOH SAQ (i.e., English or Spanish). Based on the information provided, each eligible person was assigned to one of five contact modes and protocols for receiving the survey invitation and reminders, as displayed in Figure 4-1.
Figure 4-1. SDOH contact mode determination flowchart
If both an email address and permission to text were provided, SDOH SAQ respondents received both an email and an SMS text message each time we prompted for survey completion. SDOH SAQ respondents for whom the CAPI respondent provided only SMS or only email information received electronic invitations and reminders via that one contact mode. If the CAPI respondent provided neither cell phone nor email information, the respondent was asked whether that person had access to the internet. If so, that person was assigned to receive a USPS mailing containing the study URL and a unique login. Interviewers provided a paper copy of the SDOH SAQ to eligible persons who the household respondent said could not or would not use the internet to complete a survey.
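For illustration only, the sketch below expresses the mode-assignment rules described above (and summarized in Figure 4-1) as a small decision function. The function name, parameters, and mode labels are assumptions made for the example, not the production CAPI logic.

```python
# Hedged sketch of SDOH contact-mode assignment from the CAPI-collected
# contact information. Function name, parameters, and labels are assumptions.
from typing import Optional

def assign_contact_mode(email: Optional[str], sms_permitted: bool,
                        has_internet: bool) -> str:
    if email and sms_permitted:
        return "SMS + email"
    if email:
        return "Email only"
    if sms_permitted:
        return "SMS only"
    if has_internet:
        return "USPS letter with study URL and unique login"
    return "Paper questionnaire (left behind or shipped via FedEx)"

print(assign_contact_mode("member@example.com", sms_permitted=True, has_internet=True))
print(assign_contact_mode(None, sms_permitted=False, has_internet=False))
```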
While all other supplements have been administered on paper, this approach was minimized for the SDOH SAQ because of the sensitive nature of some of the questions, such as those about domestic violence, and the possibility that one household member might see another member’s responses. Dashboard alerts enabled field supervisors and project management staff to monitor the degree to which interviewers were providing paper questionnaires to household members. Supervisors counseled outliers on the standard protocol of presenting paper as a last resort. The goal was to direct approximately 80 percent of potential SDOH SAQ respondents to the web. A $20 token of appreciation was offered to each RU member who completed the SDOH survey, regardless of the person’s contact mode or survey completion mode.
Communication Protocol. The SDOH survey communication protocol included several scheduled contacts, which ceased after a selected person submitted a completed web survey or the MEPS receipt staff received a completed paper survey. Figure 4-2 displays the schedule and types of communications for each contact mode.
Figure 4-2. Communication protocol for SDOH contact modes
For persons for whom both SMS and email information was available, an email was sent in the morning and an SMS text in the evening on days 1, 3, and 7. Nonrespondents to the SDOH survey received a postcard reminder with the URL and PIN on day 14 and, if there was still no response, a paper version of the questionnaire was mailed on day 28. A similar approach was followed if only SMS or only email information was provided.
For those who had no email or SMS recorded but had internet access, we mailed a push-to-web letter on day 1, a postcard reminder on day 7, a reminder letter on day 14, and a final mailing with the paper questionnaire on day 28. If a paper copy of the questionnaire had been sent via FedEx after a telephone interview or left in the household during an in-person interview, a replacement paper questionnaire was sent on day 28.
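The contact schedules described above amount to fixed day-offset tables. The sketch below encodes two of them for illustration; the day offsets follow the narrative, while the data structure and helper function are assumptions.

```python
# Hedged sketch: reminder schedules (day offsets from the initial invitation)
# for two SDOH contact modes, following the protocol described in the text.
from datetime import date, timedelta

SCHEDULES = {
    "SMS + email": [
        (1, "email (AM) + SMS (PM)"),
        (3, "email (AM) + SMS (PM)"),
        (7, "email (AM) + SMS (PM)"),
        (14, "postcard reminder with URL and PIN"),
        (28, "paper questionnaire mailed"),
    ],
    "USPS letter with study URL and unique login": [
        (1, "push-to-web letter"),
        (7, "postcard reminder"),
        (14, "reminder letter"),
        (28, "final mailing with paper questionnaire"),
    ],
}

def contact_dates(mode: str, start: date):
    """Yield (calendar date, contact type) pairs for a contact mode."""
    for offset, contact in SCHEDULES[mode]:
        yield start + timedelta(days=offset - 1), contact

for when, what in contact_dates("SMS + email", date(2021, 3, 1)):
    print(when.isoformat(), what)
```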
Reduced staffing at Westat’s offices affected the schedule for mailings related to the survey. The campus was closed except for essential services, so mailings were prepared and sent out 3 days per week rather than 5 days per week. This had a small impact on the number of days between scheduled reminder mailings.
Contact Mode Assignment Results and Spring 2021 SDOH SAQ Response Rates. Figure 4-3 displays response rates by contact mode and interview number. Overall, we were able to collect both a cell phone number (with permission to send SMS messages) and an email address for 47 percent of the sample. The longer a household had been participating in MEPS, the more likely the respondent was to provide both pieces of information. Either email only or SMS only was collected for an additional quarter of the sample, with few differences by panel round. Only 2 percent of participants had no email or SMS provided but were reported by the household respondent to have internet access. Finally, 24 percent of the SDOH SAQ sample received only a paper questionnaire. Those in their first round of interviewing were significantly more likely to be assigned a paper version of the questionnaire than those in their last round, possibly because they were less familiar with MEPS and therefore more reluctant to provide contact information, especially for other household members. Overall, we achieved a 66 percent conditional response rate (AAPOR Response Rate 3). Response rates increased with each round of the interview, with SDOH SAQ respondents in their first round less likely to respond than those in their last round.
Figure 4-3. Response rate by contact mode and interview number
* p<.0001: Participants contacted by SMS+Email, Email only, and SMS only were significantly more likely to complete the SDOH than those only receiving the Paper questionnaire.
** p<.0005: Participants contacted by Letter w/URL were significantly less likely to complete the SDOH than those only receiving the Paper questionnaire.
A nearly 80 percent response rate was achieved when the CAPI respondent provided both a person’s SMS and email information, highest among the Panels in later rounds. Response rates were lower when only one method of contact was available: email performed better than SMS (66.2% vs. 55.6%), and less than 40 percent of the small number of SDOH respondents who were sent only USPS mailings responded. Finally, for cases where the interviewer either left behind a paper questionnaire or shipped it to the household via FedEx, only a 45 percent response rate was achieved, much lower than is typical for the MEPS SAQ.
Nonrespondent Follow-up During Fall 2021 and Overall Response Rate. Paper-only nonresponse follow-up took place during the fall data collection cycle in July through December 2021. The follow-up phase did not include any additional household members. We followed up with 8,379 people who were eligible to complete SDOH SAQ during the spring data collection but who did not do so.
For each SDOH SAQ nonrespondent, the interviewer either distributed a paper questionnaire during an in-person interview or shipped a paper questionnaire to the household via FedEx after a telephone interview. Respondents were offered a $20-per-person post-collection incentive, as in the spring cycle. During the fall, 2,856 paper SDOH SAQs were completed, a 34 percent completion rate. Table 4-16 displays the total number and percentage of SDOH SAQs completed during the spring versus fall cycles and the impact of the fall follow-up on the overall response rate for the SDOH supplemental survey.
Completed SDOH SAQs | % of completed SDOH SAQs | Overall response rate | |
---|---|---|---|
Spring 2021 completed SDOH SAQs (web and paper) | 22,037 | 88.50% | 65.90% |
Fall 2021 completed SDOH SAQs (paper) | 2,856 | 11.50% | ----* |
Total completed SDOH SAQs (web and paper) | 24,893 | 100.00% | 74.40% |
* Of the 8,379 eligible SDOH SAQs distributed in the fall, the response rate was 34 percent.
Interviewer performance was monitored through validation case review using GPS, CARI, and telephone interviews. The purpose of validation was to verify that the correct individual was contacted for the interview and that the interview was conducted according to MEPS-approved procedures.
Generally, completed cases were validated by first examining the GPS data stored (encrypted) on the laptop. If the case could not be properly validated because of missing data, or the GPS information could not be verified to show the interviewer at the respondent’s address or another documented location at the time of the interview, the case was then reviewed in the CARI system.
However, beginning in mid-March of 2020 and continuing throughout much of 2021, the majority of cases were completed by telephone due to the COVID-19 pandemic. Therefore, GPS data could not be relied on as heavily for validation and CARI review became the main mode of validation in 2021. If a case could not be validated in CARI due to poor quality or missing CARI data, the case was referred for telephone validation. All interviews completed in less than 30 minutes were referred for telephone validation.
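The validation routing just described, with GPS checked first, CARI next, telephone validation as a fallback, and all interviews under 30 minutes referred for telephone validation, can be summarized as a simple decision rule. The sketch below is illustrative only; the flag names are assumptions rather than fields in the actual QC system.

```python
# Hedged sketch of the 2021 validation routing described in the text.
# Flag names are illustrative; the production QC system is more detailed.

def validation_method(interview_minutes: float, gps_confirms_location: bool,
                      cari_usable: bool) -> str:
    if interview_minutes < 30:
        # Short interviews always go to telephone validation.
        return "telephone"
    if gps_confirms_location:
        return "GPS"
    if cari_usable:
        return "CARI"
    return "telephone"

print(validation_method(75, gps_confirms_location=False, cari_usable=True))
print(validation_method(25, gps_confirms_location=True, cari_usable=True))
```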
In the spring 2021 rounds, 18,037 cases (93 percent of the completed cases) were validated: 84.1 percent of the cases were validated using CARI and only 2.6 percent were validated with GPS because of the limited amount of in-person interviewing. In spring 2021, a minimum of 41 percent of each interviewer’s completes were validated, with an average of 75 percent of each interviewer’s completes validated. In the fall 2021 rounds, 16,144 cases (98.3 percent of the completed cases) were validated: 82.2 percent of cases were validated by CARI and telephone and 15.8 percent were validated with GPS, reflecting increased in-person activity. Staff validated an average of 89 percent of each interviewer’s completes.
In addition to validating cases, MEPS field supervisors and managers typically conduct observations as part of a comprehensive mentoring process. Generally, MEPS uses technical solutions in place of in-person observations; however, there are specific needs met by specialized observation. As much as possible, observations are conducted in the early weeks of data collection so that problems can be detected and corrected as quickly as possible and interviewers are given feedback on ways to improve specific interviewing skills. While CARI offers a high-quality portal for evaluating interviewers on question administration, observations are still a critical tool, particularly for newly hired staff. Compared with the observation process, CARI and other report mechanisms do not allow for assessment of the full range of interviewer skills, including respondent contact, trip planning, gaining cooperation, and interviewer-respondent interactions. In addition, the observer serves as an on-site resource in situations where remedial training is necessary. Observation forms are processed and reviewed at the home office to determine the need for individual and field-wide follow-up on specific skills. In 2021, in-person observations were suspended due to the COVID-19 pandemic. For quality control, additional CARI review related to data quality and follow-up with field interviewers regarding COVID-19-specific procedures were effective alternatives.
To comply with the requirement of reporting incidents involving loss or theft of hard-copy materials with respondent personally identifiable information (PII) or laptops, field staff continued to use an automated loss reporting system (ILRS) to report incidents. Incidents were investigated, updates were sent to AHRQ and MEPS staff who received the initial automated ILRS notification, and results were recorded in an annual MEPS PII Log. A security incident report was submitted to the Westat IRB for each confirmed incident.
A total of 25 incidents of lost or stolen laptops/iPhones or hard-copy PII were reported in 2021. Two of the reported incidents involved MEPS laptops and iPhones that were stolen or lost. In one case, the laptop and iPhone were eventually returned by the interviewer; in the other, the Portland, Oregon, Police Department never recovered the laptop and iPhone stolen from a rental car while a MEPS interviewer was on a production travel trip. The password-protected laptops were shut down at the time of the loss, and because MEPS laptops are full-disk encrypted, respondent identity was not at risk.
Twenty-three incidents involved suspected or confirmed loss of hard-copy materials containing respondent PII or a breach of confidentiality. Thirteen of the 23 reported hard-copy losses were located: in nine incidents the hard-copy documents were intact and uncompromised; in the other four, the documents were intact but there had been a breach of confidentiality. Following extensive searches, no documents were recovered for the other 10 reported losses. The PII hard-copy losses included AFs, SAQs, Preventive Care Self-Administered Questionnaires (PSAQs), Diabetes Care Supplements, and SDOH SAQs. All households with PII loss were notified. The AHRQ Information Security Manager alerted the HHS Privacy Incident Response Team (PIRT) to all MEPS-reported PII incidents.
The home office supports the data collection effort in several important ways. One phase of activity supports the launch of each new round of data collection; another phase supports the field operation while data collection is in progress. These two phases of activity are described in this chapter.
Hard-copy materials were assembled prior to data collection for cases being fielded in Rounds 2 through 8. Clerical staff created an RU folder for each case being fielded and inserted any authorization forms and SAQs printed for the case. Because no hard-copy case materials are generated for Round 1 cases, RU folders were not created prior to data collection for those cases.
Supervisors received a Supervisor Assignment Log listing all of the cases to be released in their region for each wave of cases. For the first wave of each round, supervisors used this log to assign cases to their interviewers. They entered the ID of the interviewer to be assigned each case and sent the log back to the home office. Home office staff then shipped the RU folders directly to the interviewers for the first wave and to regional clerks to distribute to the FIs in later waves. A file with the assignments was also sent to programming staff to make the electronic assignments in the BFOS field management system.
For later waves, the prepared RU folders were sent to the field supervisors, who made the electronic assignments in their Supervisor Management System (SMS) and shipped the hard-copy materials to their interviewers.
Prior to the start of data collection for each period, interviewers connected remotely to the home office to download the CAPI software update for the upcoming rounds and received a home study training package to prepare them for interviewing. Field interviewers also received a replenishment of supplies at the start of the rounds.
Advance mailings to all respondent households were prepared and mailed by the home office staff. Addresses were first standardized and sent through the National Change of Address (NCOA) database to obtain the most current addresses for mailing. Any mail returned as undeliverable was recorded and the appropriate supervisor was notified. Requests to re-mail the Round 1 advance package to households who reported not receiving it were prepared and mailed by home office staff.
Respondent Contacts. Respondent contacts are an important component of home office support for the MEPS data collection effort. Printed materials mailed to respondents contain an email address and toll-free telephone number that respondents can use to contact the project with questions, with requests to make or to cancel interview appointments, or to decline participation in the study. Home office staff received and initiated the response to all respondent contacts. They forwarded information received from respondent calls to the field supervisors, who initiated the appropriate follow-up and informed the home office of the results within 24 hours of notification. Table 5-1 shows the number and percentage of RUs who called the respondent hotline in the spring and fall rounds of 2017–2021. There was a significantly higher percentage of calls to the hotline in both spring and fall 2020. In spring 2021, the percentage of calls to the hotline was more in line with years prior to 2020. The percentage of calls dropped in fall 2021 compared to fall 2020 but was still higher than in previous years.
Original sample size | Number of calls | Calls as a percent of sample size | |
---|---|---|---|
Round 1 | |||
2017 – Panel 22 Round 1 | 9,835 | 346 | 3.5 |
2018 – Panel 23 Round 1 | 9,846 | 383 | 3.9 |
2019 – Panel 24 Round 1 | 9,864 | 343 | 3.5 |
2020 – Panel 25 Round 1 | 9,880 | 586 | 5.9 |
2021 – Panel 26 Round 1 | 9,509 | 335 | 3.5 |
Rounds 3/5 | |||
2017 – Panel 20 Round 5/Panel 21 Round 3 | 14,939 | 533 | 3.6 |
2018 – Panel 21 Round 5/Panel 22 Round 3 | 13,922 | 467 | 3.4 |
2019 – Panel 22 Round 5/Panel 23 Round 3 | 13,594 | 486 | 3.6 |
2020 – Panel 23 Round 5/Panel 24 Round 3 | 13,241 | 592 | 4.5 |
2021 – Panel 23 Round 7/Panel 24 Round 5/Panel 25 Round 3 | 15,616 | 555 | 3.6 |
Rounds 2/4 | |||
2017 – Panel 21 Round 4/Panel 22 Round 2 | 14,395 | 518 | 3.6 |
2018 – Panel 22 Round 4/Panel 23 Round 2 | 14,123 | 524 | 3.7 |
2019 – Panel 23 Round 4/Panel 24 Round 2 | 13,844 | 531 | 3.8 |
2020 – Panel 23 Round 6/Panel 24 Round 4/Panel 25 Round 2 | 18,480 | 1,163 | 6.3 |
2021 – Panel 23 Round 8/Panel 24 Round 6/Panel 25 Round 4/Panel 26 Round 2 | 19,339 | 848 | 4.4 |
Table 5-2 shows the number and types of calls received on the respondent hotline during 2020 and 2021. As in prior years, a substantial portion of the Round 1 calls was from refusals, with a much smaller proportion of refusals and a higher proportion of appointment requests in the later rounds.
Reason for call | Spring 2020 (Panel 25 Round 1, Panel 24 Round 3, Panel 23 Round 5) | Fall 2020 (Panel 25 Round 2, Panel 24 Round 4, Panel 23 Round 6) | | | | |
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2, 4, and 6 | ||||
N | % | N | % | N | % | |
Address/telephone change | 5 | 0.9 | 37 | 6.3 | 28 | 2.4 |
Appointment | 142 | 24.2 | 332 | 56.1 | 278 | 23.9 |
Request callback | 102 | 17.4 | 121 | 20.4 | 276 | 23.7 |
No message | 22 | 3.8 | 18 | 3.0 | 60 | 5.2 |
Other | 2 | 0.3 | 5 | 0.8 | 5 | 0.4 |
Proxy needed | 6 | 1.0 | 3 | 0.5 | 10 | 0.9 |
Request SAQ help | 0 | 0.0 | 1 | 0.2 | 35 | 3.0 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 1 | 0.1 |
Special needs | 0 | 0.0 | 0 | 0.0 | 1 | 0.1 |
Refusal | 209 | 35.7 | 62 | 10.5 | 203 | 17.5 |
Willing to participate | 98 | 16.7 | 13 | 2.2 | 266 | 22.9 |
Total | 586 | 592 | 1,163 |
Reason for call | Spring 2021 (Panel 26 Round 1, Panel 25 Round 3, Panel 24 Round 5, Panel 23 Round 7) | Fall 2021 (Panel 26 Round 2, Panel 25 Round 4, Panel 24 Round 6, Panel 23 Round 8) | | | | |
---|---|---|---|---|---|---|
Round 1 | Rounds 3, 5, 7 | Rounds 2, 4, 6, 8 | ||||
N | % | N | % | N | % | |
Address/telephone change | 2 | 0.6 | 19 | 3.4 | 59 | 7.0 |
Appointment | 27 | 8.1 | 76 | 13.7 | 233 | 27.5 |
Request callback | 101 | 30.1 | 240 | 43.2 | 287 | 33.8 |
No message | 34 | 10.1 | 21 | 3.8 | 41 | 4.8 |
Other | 8 | 2.4 | 48 | 8.6 | 8 | 0.9 |
Proxy needed | 0 | 0.0 | 7 | 1.3 | 13 | 1.5 |
Request SAQ help | 3 | 0.9 | 17 | 3.1 | 15 | 1.8 |
SAQ refusal | 0 | 0.0 | 1 | 0.2 | 0 | 0.0 |
Special needs | 0 | 0.0 | 2 | 0.4 | 1 | 0.1 |
Refusal | 87 | 26.0 | 87 | 15.7 | 176 | 20.8 |
Willing to participate | 73 | 21.8 | 37 | 6.7 | 15 | 1.8 |
Total | 335 | 555 | 848 |
Monitoring Production. Home office staff monitored production, cost, and data quality, and they provided reports and feedback to field managers and supervisors for review and follow-up. Each week they generated and distributed reports to AHRQ showing weekly and cumulative figures on field production, response rate, and costs.
Home Office Support. Refusal letters were generated and mailed by home office staff as requested by the field. Home office staff also responded to supply requests from the field, replenishing interviewer and supervisor stocks of materials as needed.
Receipt Control. As interviewers completed cases, they transmitted the data electronically and shipped the case folders containing any hard-copy documents to the home office receipt operation. Interviewers shipped all material containing PII via Federal Express, which facilitates tracking of late or lost shipments. When preparing a shipment to the home office receipt department, interviewers used the Ship to Receipt module to indicate exactly what materials were included in the package and recorded the FedEx tracking number. This information was sent directly to the receipt control system so that staff knew what materials to expect. For interviews completed by phone due to the COVID-19 pandemic and for which contactless pickup of hard-copy documents could not be arranged, interviewers provided a BRE so the respondent could send their documents directly to the home office. The contents of cases received at the home office were reviewed and recorded in the receipt system. Authorization forms were edited for completeness and scanned into an image database. When a problem was found in an authorization form, the problem was documented and feedback was sent to the field supervisor to review with the interviewer. All self-administered questionnaires, including SAQs, PSAQs, and DCSs, were receipted and sent out for TeleForm scanning.
Helpdesk Support. The MEPS CAPI Helpdesk continued to provide technical support for field interviewing activities during 2021. Helpdesk staff were available 7 days a week to help field staff resolve CAPI, Field Management System, transmission, laptop, and iPhone problems. Incoming calls were documented for follow-up as needed to resolve individual issues and to identify issues reported by multiple interviewers. The CAPI Helpdesk served as the coordinating point for tracking and shipping all field laptops, monitoring field laptop assignment, and coordinating laptop and phone repairs.
This chapter briefly describes the activities that supported Westat’s data delivery work during the year and identifies the principal files related to data year 2018 delivered in 2021.
Adhering to the schedule for delivery of the key MEPS public use files is of paramount importance to the project. Throughout 2021, data processing activities to support the major file deliveries for the year proceeded simultaneously along several different delivery paths, with activity focused separately on each of the Panels for the annual Full Year Files. As in past years, the project used a set of comprehensive data delivery schedules to guide management of the effort. The schedules integrate key dates for the data collection, data capture, coding, editing and imputation, weights construction, and documentation production tasks. These schedules provide a framework for assessing the potential impact of proposed changes at the start of each processing cycle and for coordinating the succession of processes that comprise the delivery effort.
The data quality control (DQC) system consists of both a consolidated database that preserves data as returned from the field, and a DQC-specific database that shows the current values of data following any required updates. DQC technicians access the data through a secure portal.
Technicians review and edit the data using the Blaise database model that is used in the field for data collection. All DQC work occurs at a “case” level. The DQC system automatically creates a unique “issue” for each instance of text entered as a comment and includes the comment category selected by the field interviewer associated with the text entry. As cases are loaded into DQC, each comment and category is checked by a Natural Language Processing (NLP) algorithm that identifies the most likely category. During processing, data technicians have the opportunity to accept or update this category. Technicians then follow standardized procedures for data review and editing based on the comment category.
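The NLP categorization step is described only at a high level. As one hedged illustration of how a comment might be assigned a suggested category for technician review, the sketch below trains a simple TF-IDF and Naive Bayes classifier with scikit-learn; the sample comments, labels, and model choice are assumptions, not the production DQC algorithm.

```python
# Hedged sketch: suggesting a comment category for technician review using a
# simple TF-IDF + Naive Bayes pipeline. Training data and model choice are
# illustrative assumptions, not the production DQC algorithm.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_comments = [
    "ER visit on 3/14 was for the daughter, not the RU member listed",
    "Respondent refused to report income for one member",
    "Prescription refilled twice at the same pharmacy",
    "Coverage through spouse's employer ended in June",
]
train_labels = ["Health Care Events", "RU Member Refusal",
                "Prescribed Medicines", "Health Insurance"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_comments, train_labels)

new_comment = "Hospital stay dates were estimated by the respondent"
suggested = model.predict([new_comment])[0]
print(f"Suggested category (technician may accept or update): {suggested}")
```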
The DQC system also runs a series of programmatic checks and assigns a new “issue” for each instance that triggers a consistency or edit check. These checks are designed to ensure that data changed during editing conform fully to the rules of the CAPI instrument before the data are released. During spring 2021, 14.2 percent of cases received from the field included a comment (Table 6-1). Cases with any issue, a field comment, or a consistency check totaled 29.9 percent. For fall 2021, 12.7 percent of cases received from the field included a comment while cases with any issue totaled 25.7 percent.
Field period | Cases processed | Cases with at least 1 comment | % cases with comments | Cases with at least 1 issue | % cases with issues | Not actionable (comments) | % NA comments |
---|---|---|---|---|---|---|---|
Spring 2021 | 19,412 | 2,760 | 14.2 | 5,890 | 29.9 | 2,592 | 54.6 |
Fall 2021 | 16,442 | 2,094 | 12.7 | 4,233 | 25.7 | 1,691 | 49.7 |
Field interviewers must select one of 10 categories for each comment text string; after a category is selected, CAPI provides category-specific guidance on the information to include in the comment (e.g., RU member name or event date). Interviewers receive training to help them identify the most meaningful category and avoid overuse of the “Other” category. Table 6-2 shows the number of comments made in each category, as assigned by the NLP algorithm and confirmed by the data technicians.
Total number of comments by category | # | % |
---|---|---|
1. RU/RU Member | 485 | 5.9 |
2. RU Member Refusal | 197 | 2.4 |
3. Condition | 182 | 2.2 |
4. Health Care Events | 4,410 | 54.1 |
5. Glasses/Contact Lenses | 61 | 0.7 |
6. Other Medical Expenses | 81 | 1.0 |
7. Prescribed Medicines | 864 | 10.6 |
8. Employment | 528 | 6.5 |
9. Health Insurance | 684 | 8.4 |
10. Other | 665 | 8.2 |
Total | 8,157 |
Transformation is the process of extracting data from the Blaise data models optimized for data collection and writing them to the data exchange format (Dex) required by the data delivery teams. The transformation has two logical activities: first, transforming the structure of the data from the data collection structure to Dex, and second, transforming the format of the data from Blaise to Oracle. The resulting data, stored in Oracle using the Dex structure, serve as input to the analytic editing, variable construction, public use files (PUFs), and other file deliveries. The goal is to disrupt the delivery activities as little as possible while providing data of the highest quality as efficiently as possible.
As shown in Figure 6-1, data transformation has four distinct layers. The metadata layer contains all the variable definitions (including names and their tables, segments, or blocks) and the transformation logic, sometimes known as plain-language transformation specifications. The analytic group leads at Westat are typically responsible for the metadata and the transformation logic.
Figure 6-1. Blaise to Dex transformation
Based on the metadata, two specifications are developed. The first describes the Dex structure using a formal schema, which is expressed as a set of SQL statements to create the empty Oracle Dex database. The second specification is the detailed transformation specification. Each variable is assigned to a set of similar variables called a transformation class. A unique transformation class is defined by the information needed to specify the transformation. For instance, some variables simply need to be copied to an appropriate location in the Dex. These are known as passthrough variables and belong to the Passthrough class. Code All That Apply variables are transformed based on the value selected by the interviewer, so the specification requires an additional Dex variable for each possible value. Code All That Apply is another transformation class. All of the classes are developed through discussions with AHRQ and are sent to AHRQ for approval.
The third layer is the transformation (or programming) layer. Using the specifications just described, the data are read from the Blaise database in the data collection structure, the transformation logic is applied, and a data file for each Dex table is written. The Dex tables are generally identical to the legacy Cheshire segments, such as BASE, HOME, or PERS. This set of intermediate data files is known as pre-Dex and has the same structure as the Dex database, but all files are in the Blaise format. Next, the format is transformed from the Blaise format to Oracle, writing to the Single-Round Database (SRD). The single-round structure is necessary because the data collection instrument does not contain all data for all rounds for a given case; rather, only the data required to field the case in that specific round are included. The SRD data are then merged into the existing data, yielding a cumulative Multi-Round Database (MRD).
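The single-round to multi-round step can be pictured as an upsert keyed on case and round. The sketch below is a simplified illustration with hypothetical identifiers and layout, not the production Oracle merge.

```python
# Sketch of merging a Single-Round Database (SRD) extract into the cumulative
# Multi-Round Database (MRD), keyed on (RU id, round). Hypothetical layout.

def merge_srd_into_mrd(mrd: dict, srd_rows: list[dict]) -> dict:
    for row in srd_rows:
        key = (row["ru_id"], row["round"])
        mrd[key] = row          # new rounds are added; reworked rounds replace
    return mrd

mrd = {("RU0001", 1): {"ru_id": "RU0001", "round": 1, "status": "complete"}}
srd = [{"ru_id": "RU0001", "round": 2, "status": "complete"}]
merge_srd_into_mrd(mrd, srd)
print(sorted(mrd))   # [('RU0001', 1), ('RU0001', 2)]
```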
The final layer relates the different databases to selected key deliverables. This layer is intentionally general. For example, while the MRD is the source for the PUF deliveries, there are many additional steps to edit the data, construct variables, and deliver a data file and codebook.
TeleForm, a commercial off-the-shelf (COTS) software system for intelligent data capture and image processing, was used in 2021 to capture data collected in the DCS and the SAQ. TeleForm software reads the form image files and extracts data according to the project specifications. Supporting software checks the data for conformity with project specifications and flags data values that violate the validation rules for review and resolution.
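Conceptually, the supporting validation software applies range and completeness rules to each captured field and flags violations for review and resolution. The sketch below shows that general idea using invented field names and limits; it does not reproduce TeleForm's rule syntax.

```python
# Hypothetical post-capture validation: flag captured values that violate
# simple required/range rules so a reviewer can resolve them.

RULES = {
    "Q1_HEALTH_RATING": {"required": True, "min": 1, "max": 5},
    "Q2_VISIT_COUNT":   {"required": False, "min": 0, "max": 99},
}

def flag_violations(form: dict) -> list[str]:
    flags = []
    for field_name, rule in RULES.items():
        value = form.get(field_name)
        if value is None:
            if rule["required"]:
                flags.append(f"{field_name}: missing required value")
            continue
        if not (rule["min"] <= value <= rule["max"]):
            flags.append(f"{field_name}: {value} outside {rule['min']}-{rule['max']}")
    return flags

print(flag_violations({"Q1_HEALTH_RATING": 7}))
```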
As SAQs evolve to be multimode (web and paper), we will update this section to discuss data harmonization and web data collection.
Coding refers to the process of converting data items collected in text format to pre-specified numeric codes. The plan for the 2021 coding effort (for items collected during the calendar years 2019 and early 2020) was described in Deliverables 20.506, .507, and .508. For the MEPS-HC, five types of information require coding:
Condition and Prescribed Medicine Coding
In 2021, coding was performed on the conditions and prescribed medicine text strings reported by household respondents for calendar year 2020. An automated system enabled coders to easily search for and assign the appropriate ICD-10-CM code (for conditions) or Generic Product Identifier (GPI) code (for medicines). The system supports the verifier’s review of all codes and, as needed, correction of the coder’s initial decision. For the prescribed medicine coding, a pharmacist provided a further review of text strings questioned by the verifier, uncodable text strings, foreign medicines, and compound drugs. All coding actions are tracked in the system and error rates calculated weekly. Both the condition and prescribed medicine coding efforts were staffed by three coders.
During the 2021 coding cycle, coding managers continued to refine a number of new and revised procedures and processes implemented for the coding of 2018 data in 2019. These revisions resulted from many months of collaboration between AHRQ and Westat in evaluating all aspects of the coding processes for household-reported conditions, prescribed medicines, and sources of payment, including updating and maintaining the authority tables and developing tools and resource documents to facilitate these tasks. Also in 2019, Westat deployed a new web-based coding system for condition and prescribed medicine coding to replace the Access database previously used. The new system better supports downstream processing activities and aligns with other web-based systems used across other components of MEPS. All aspects of coding work are supported by a number of scheduled quality control checks before, during, and after each coding cycle.
In 2021, medical conditions were coded to include the greatest specificity indicated by the text string. The fully specified ICD-10-CM code is needed to accurately match to the Clinical Classifications Software (CCS). A total of 2,750 unique strings were manually coded, and the authority table was constructed with AHRQ-approved code assignments. This represented a 73 percent reduction in the number of strings needing manual review, due to the implementation of a condition pick list and search tool integrated into the CAPI instrument. The overall error rate for coders was 1 percent, below the contractual error rate goal of 2 percent.
Prescription medicine text strings for data year 2021 were coded to the set of GPI codes associated with the Master Drug Data Base (MDDB) maintained by Medi-Span, a part of Wolters Kluwer. The codes characterize medicines by therapeutic class, form, and dosage. To augment the assignment of codes to less specified and ambiguous text strings, AHRQ developed procedures for assigning partial GPI codes and higher-level drug categories; these procedures were implemented in 2017 and have continued through subsequent coding cycles. AHRQ also developed a set of exact and inexact matching programs to reduce the number of prescribed medicine strings sent for manual coding. Westat’s implementation of these matching programs, which are reviewed and approved each year, reduces the number of prescribed medicine text strings sent for manual coding by approximately 40 percent each year. In a process similar to that used for condition text strings, the prescription medicine text strings undergo two rounds of unduplication to identify the unique strings to be coded, and AHRQ’s exact and inexact matching programs are then run to further reduce the number of strings requiring manual coding. A total of 9,413 strings were manually coded from 2021 data. The overall coding error rate (across all coders) was 1 percent, below the contractual goal of 2 percent. As with conditions, all prescription text strings/codes were reviewed by a verifier, with additional review of selected strings provided by a pharmacist.
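Conceptually, the exact and inexact matching step compares each reported text string against a coded authority table and routes only unmatched strings to manual coding. The sketch below illustrates the idea with an invented two-entry authority table and a generic string-similarity cutoff; it does not reproduce AHRQ's matching programs, thresholds, or actual GPI assignments.

```python
# Sketch of exact and inexact matching of reported medicine names against a
# coded authority table, to reduce the volume sent for manual coding.
import difflib

AUTHORITY = {            # normalized text string -> GPI code (hypothetical)
    "lisinopril 10 mg tab": "3610002510",
    "metformin 500 mg tab": "2725005010",
}

def normalize(s: str) -> str:
    return " ".join(s.lower().split())

def match_string(reported: str, cutoff: float = 0.9):
    key = normalize(reported)
    if key in AUTHORITY:                                  # exact match
        return AUTHORITY[key], "exact"
    close = difflib.get_close_matches(key, list(AUTHORITY), n=1, cutoff=cutoff)
    if close:                                             # inexact (fuzzy) match
        return AUTHORITY[close[0]], "inexact"
    return None, "manual"                                 # send to coders

print(match_string("LISINOPRIL 10 MG TAB"))   # exact match after normalization
print(match_string("lisinopril 10mg tab"))    # close enough: inexact match
print(match_string("unknown compound"))       # routed to manual coding
```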
Source of Payment Coding
Source of payment (SOP) information is collected in both the household and the medical provider components. In the HC charge payment section of the CAPI instrument, the names of the sources of payment are collected in three places: when the bill was paid by a source identified in response to a direct question about payment (REIMNAM); when the bill was sent to a source other than the respondent and the respondent names that source (WHOBILL#); and in response to a question about a direct payment source for prescription medicines (SRCNAME). The responses are coded to one of the source of payment categories under which healthcare expenditures are reported in the MEPS PUFs. These payment sources include:
The SOP Coding Guidelines is a manual updated each year before the start of the annual coding cycle, submitted for AHRQ approval, and distributed to the coders. Health insurance show cards and data from the health insurance planfill file for CAPI are available to coders as resource materials. Since the Medical Provider Component (MPC) of MEPS uses the same set of source of payment codes as the Household Component, coding rules and decisions are coordinated with the MPC contractor (RTI) to ensure consistency in the coding. Before the start of the coding cycle, Westat compares RTI’s authority tables with its own to identify any inconsistencies. AHRQ adjudicates these to ensure the authority tables from each contractor are aligned.
Each year, the source of payment text strings extracted from the reference year data are matched to a historical file of previously coded SOP text strings to create a file of matched strings with suggested or “matched” codes. These match-coded strings are reviewed by coders and verified or modified as needed. This review is required because insurance companies change their product lines and coverage offerings very frequently, and as a result, the source of payment code for a given text string (e.g., the name of an insurance company or plan) can change from year to year. For example, from one year to the next an insurer or insurance product may participate in or drop out of state exchanges; may offer Medicare Part D or dental or vision insurance or may drop it; may add Medicare Advantage plans in addition to Medicaid HMOs; or may gain or lose state contracts as Medicaid service providers. As a result of these changes, the appropriate code for a company or specific plan may also change from year to year. Strings that do not match to a string in the history table are researched and have an appropriate SOP code assigned by coding staff.
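The history-table match amounts to a keyed lookup that yields a suggested code for coder review and routes unmatched strings to research. A minimal sketch, assuming a simple dictionary as the history table and invented plan names and codes:

```python
# Minimal sketch of matching reference-year SOP strings to a historical file
# of previously coded strings. Strings and codes are hypothetical.

HISTORY = {
    "acme health plan": "PRV",      # suggested code from a prior year
    "state medicaid hmo": "MCD",
}

def suggest_codes(strings: list[str]):
    matched, unmatched = [], []
    for s in strings:
        key = s.strip().lower()
        if key in HISTORY:
            # A suggested code still goes to a coder for review, because a
            # plan's correct code can change from year to year.
            matched.append((s, HISTORY[key]))
        else:
            unmatched.append(s)     # researched and coded by staff
    return matched, unmatched

print(suggest_codes(["Acme Health Plan", "New Regional Insurer"]))
```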
SOP coding during 2021 was for the payment sources reported for 2020 events. For cases when the bill was paid by a source identified in response to a direct question about payment (REIMNAM), a total of 756 previously coded sources of payment text strings were reviewed and updated as needed. After unduplication of the strings reported for 2020, coders reviewed and coded 2,064 strings. If the bill was sent to a source other than the respondent and the respondent names that source (WHOBILL#), coders reviewed and coded 3,145 strings. For text strings reported as direct payers for prescription medicine (SRCNAME), 506 new text strings were reviewed and coded by coders.
Industry and Occupation Coding
Industry and Occupation coding is performed for MEPS by the Census Bureau using the Census Bureau’s Demographic Surveys Division’s (DSD’s) computer-assisted industry and occupation (I&O) codes, which can be cross-walked to the 2007 North American Industry Classification System (NAICS) and the 2010 Standard Occupational Classification (SOC) system. The codes characterize the jobs reported by household respondents and are released annually on the FY JOBS file. During 2021, 12,756 jobs were coded for the 2020 JOBS file.
During the 2021 coding cycle, AHRQ again expanded the scope of work to include coding data year 2020 text strings to multiple versions of the NAICS and SOCS; specifically, the data runs included 2007 NAICS and 2000 SOCS, 2012 NAICS and 2010 SOCS, and 2017 NAICS and 2018 SOCS.
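One way to picture coding a single job to multiple classification vintages is a small record keyed by vintage, with a delivery simply selecting the vintage requested. The sketch below is illustrative only; the codes shown are placeholders rather than actual NAICS or SOC assignments.

```python
# Illustrative only: one coded job represented under several classification
# vintages (hypothetical code values).

job_codes = {
    "job_id": 12345,
    "naics": {"2007": "5112", "2012": "5112", "2017": "5112"},
    "soc":   {"2000": "15-1021", "2010": "15-1131", "2018": "15-1251"},
}

# A data run for a given vintage selects the matching entries.
print(job_codes["naics"]["2017"], job_codes["soc"]["2018"])
```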
Additionally, AHRQ requested that Westat have the 17,605 text strings for data year 2009 coded to:
This was a one-time request.
GEO Coding
The Westat Geographic Information Systems (GIS) division GEO-codes household addresses, assigning the latitude and longitude coordinates, as well as other variables such as county and state Federal Information Processing Standards (FIPS) codes, Metropolitan Statistical Area (MSA) status, Designated Market Area, Census Place, and county. RU-level data are expanded to the person level and delivered to AHRQ as part of the set of “Master Files” sent yearly. These data are not included in a PUF, but some variables are used for the FY weights processing.
During the calendar year 2021 coding cycle, 20,258 unique address records for full-year reporting units were processed.
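The RU-to-person expansion is essentially a join of person records to RU-level geocode attributes on the RU identifier. A minimal sketch with illustrative field names and coordinates (not the project's delivery layout):

```python
# Sketch of expanding RU-level geocode attributes to the person level by
# joining on the RU identifier. Field names and values are illustrative.

ru_geo = {
    "RU0001": {"lat": 39.08, "lon": -77.15, "state_fips": "24", "msa": "1"},
}
persons = [
    {"person_id": "RU0001-01", "ru_id": "RU0001"},
    {"person_id": "RU0001-02", "ru_id": "RU0001"},
]

person_geo = [{**p, **ru_geo[p["ru_id"]]} for p in persons if p["ru_id"] in ru_geo]
for row in person_geo:
    print(row["person_id"], row["state_fips"], row["msa"])
```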
The primary objective of MEPS is to produce a series of data files for public release each calendar year. The inter-round processing, editing, and variable construction tasks all serve to prepare these PUFs. Each file addresses one or more aspects of the U.S. civilian noninstitutionalized population’s access to, use of, and payments for healthcare.
The Oracle system has a separate database for each data year. This is a recent departure from having individual databases for each Panel/year combination. The goal is to streamline data processing, and the change was necessitated by the extension of Panels 23 and 24 to collect data through nine rounds.
Due to the pandemic, Panels 23 and 24 are being extended through Round 9. The MEPS 2020 database contains Panels 23 through 25, and the MEPS 2021 database contains Panels 23 through 26. The remainder of this section focuses on the 2021 database.
After the data are in the Oracle delivery database, each analytical team begins its processing by performing basic edit checks on the data. These edits ensure the data conform to the CAPI instrument’s flow as well as to AHRQ’s analytical needs. These edits can be run in SAS, using SAS datasets extracted from the delivery database, or in SQL directly on the delivery database. Problems identified through the basic edits process may require updates to the data. If updating is required, these updates may be accomplished in one of two ways:
Once all the edits have been completed for an analytical team, and QC frequencies and univariates have been approved, notification is sent to all other analytical teams so that work can be coordinated in those areas.
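A basic edit check of the kind described above can be expressed as a simple SQL query against the delivery database. The sketch below uses an in-memory SQLite database as a stand-in for Oracle, with hypothetical table and column names; it illustrates the pattern only, not the project's actual edit programs.

```python
# Sketch of a basic edit check expressed in SQL and run from Python.
import sqlite3

con = sqlite3.connect(":memory:")                     # stand-in for Oracle
con.execute("CREATE TABLE events (ru_id TEXT, event_type TEXT, charge REAL)")
con.execute("INSERT INTO events VALUES ('RU0001', 'OP', -25.0)")

# Edit check: flag events whose charge is negative, which should not occur
# if the CAPI flow was followed.
flagged = con.execute(
    "SELECT ru_id, event_type, charge FROM events WHERE charge < 0"
).fetchall()
print(flagged)   # rows returned here would be reviewed and corrected
```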
Analytical groups at AHRQ work with Westat analysts to define the variables of interest for inclusion on the PUF and other key data deliveries. Variables are named according to standard naming conventions, and once the list is approved, descriptive specifications are written to define each variable and to provide detailed information for programming.
Specifications are written at two levels. The high-level specification is a descriptive specification intended to document the concept of the variable and provide high-level information regarding the variable construction requirements. The detailed-level specifications contain the details required to develop programming code for building the variables. Specifications are written and sent to AHRQ for approval. Once approval is received for the specification, program development can proceed for that variable.
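For illustration, a constructed variable of this kind is typically a deterministic function of approved source variables, coded exactly as the detailed specification dictates. The sketch below builds a hypothetical insurance-coverage category from two invented source flags; the names and coding are illustrative and are not the project's specifications.

```python
# Illustrative variable construction from a detailed specification
# (hypothetical variable names and coding).

def construct_inscov(private: int, public: int) -> int:
    """1 = any private, 2 = public only, 3 = uninsured (hypothetical spec)."""
    if private == 1:
        return 1
    if public == 1:
        return 2
    return 3

# Simple checks of the kind a code review would compare against the spec.
assert construct_inscov(private=1, public=1) == 1
assert construct_inscov(private=0, public=1) == 2
assert construct_inscov(private=0, public=0) == 3
```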
Specifications guide programming development, and once programs have been written, code reviews compare newly developed code against specifications to identify problems in either code or specifications. This program development process includes a number of steps and checkpoints to ensure that all new programs meet all specification requirements:
This model is followed for the development of all new programs required for data delivery. For mature programs that are re-used in subsequent deliveries with only minor modifications, the process is appropriately streamlined to ensure both accuracy and efficiency on all programs.
Public Use File Deliveries
The principal files delivered during calendar year 2021 are listed below:
Ancillary File Deliveries
In addition to the principal data files delivered for public release each year, the project also produces a number of ancillary files for delivery to AHRQ. These include an extensive series of person- and family-level weights, “raw” data files reflecting MEPS data at intermediate stages of capture and editing, and files generated at the end of each round or as needed to support analysis of both substantive and methodological topics. A comprehensive list of the files delivered during 2021 appears in the Appendix.
Medical Provider Component (MPC) Files
During each year’s processing cycle, Westat also creates files for the MPC contractor and, in turn, receives data files back from the MPC. As in prior years, Westat provided sample files for the MPC in three waves, with the first two waves delivered while HC data collection was still in progress. In preparing the sample files to be delivered in 2021 for MPC collection of data about 2020 health events, Westat again applied the program developed in 2014 for de-duplicating the sample of providers. This process, developed in consultation with AHRQ, was designed to reduce the number of duplicate providers reported from the household data collection.
Early in 2021, following completion of MPC data collection and processing for 2020 events, Westat received the files containing data collected in the MPC, with linkages between matching events collected in the MPC and events collected in the HC. In processing at Westat, matched events from the MPC served as the primary source for imputing expenditure variables for the 2019 events. A similar file of prescribed medicines was also delivered to support matching and imputation of expenditures for prescribed medicines at AHRQ. Timely and well-coordinated data handoffs between Westat and the MPC contractor are critical to the timely delivery of the full-year expenditure files. With each additional year of interaction and cooperation, the handoffs between the MPC and HC have gone more smoothly.
Data collection period | RU-level sample size* |
---|---|
January-June 1996 | 10,799 |
Panel 1 Round 1 | 10,799 |
July-December 1996 | 9,485 |
Panel 1 Round 2 | 9,485 |
January-June 1997 | 15,689 |
Panel 1 Round 3 | 9,228 |
Panel 2 Round 1 | 6,461 |
July-December 1997 | 14,657 |
Panel 1 Round 4 | 9,019 |
Panel 2 Round 2 | 5,638 |
January-June 1998 | 19,269 |
Panel 1 Round 5 | 8,477 |
Panel 2 Round 3 | 5,382 |
Panel 3 Round 1 | 5,410 |
July-December 1998 | 9,871 |
Panel 2 Round 4 | 5,290 |
Panel 3 Round 2 | 4,581 |
January-June 1999 | 17,612 |
Panel 2 Round 5 | 5,127 |
Panel 3 Round 3 | 5,382 |
Panel 4 Round 1 | 7,103 |
July-December 1999 | 10,161 |
Panel 3 Round 4 | 4,243 |
Panel 4 Round 2 | 5,918 |
January-June 2000 | 15,447 |
Panel 3 Round 5 | 4,183 |
Panel 4 Round 3 | 5,731 |
Panel 5 Round 1 | 5,533 |
July-December 2000 | 10,222 |
Panel 4 Round 4 | 5,567 |
Panel 5 Round 2 | 4,655 |
January-June 2001 | 21,069 |
Panel 4 Round 5 | 5,547 |
Panel 5 Round 3 | 4,496 |
Panel 6 Round 1 | 11,026 |
July-December 2001 | 13,777 |
Panel 5 Round 4 | 4,426 |
Panel 6 Round 2 | 9,351 |
January-June 2002 | 21,915 |
Panel 5 Round 5 | 4,393 |
Panel 6 Round 3 | 9,183 |
Panel 7 Round 1 | 8,339 |
July-December 2002 | 15,968 |
Panel 6 Round 4 | 8,977 |
Panel 7 Round 2 | 6,991 |
January-June 2003 | 24,315 |
Panel 6 Round 5 | 8,830 |
Panel 7 Round 3 | 6,779 |
Panel 8 Round 1 | 8,706 |
July-December 2003 | 13,814 |
Panel 7 Round 4 | 6,655 |
Panel 8 Round 2 | 7,159 |
January-June 2004 | 22,552 |
Panel 7 Round 5 | 6,578 |
Panel 8 Round 3 | 7,035 |
Panel 9 Round 1 | 8,939 |
July-December 2004 | 14,068 |
Panel 8 Round 4 | 6,878 |
Panel 9 Round 2 | 7,190 |
January-June 2005 | 22,548 |
Panel 8 Round 5 | 6,795 |
Panel 9 Round 3 | 7,005 |
Panel 10 Round 1 | 8,748 |
July-December 2005 | 13,991 |
Panel 9 Round 4 | 6,843 |
Panel 10 Round 2 | 7,148 |
January-June 2006 | 23,278 |
Panel 9 Round 5 | 6,703 |
Panel 10 Round 3 | 6,921 |
Panel 11 Round 1 | 9,654 |
July-December 2006 | 14,280 |
Panel 10 Round 4 | 6,708 |
Panel 11 Round 2 | 7,572 |
January-June 2007 | 21,326 |
Panel 10 Round 5 | 6,596 |
Panel 11 Round 3 | 7,263 |
Panel 12 Round 1 | 7,467 |
July-December 2007 | 12,906 |
Panel 11 Round 4 | 7,005 |
Panel 12 Round 2 | 5,901 |
January-June 2008 | 22,414 |
Panel 11 Round 5 | 6,895 |
Panel 12 Round 3 | 5,580 |
Panel 13 Round 1 | 9,939 |
July-December 2008 | 13,384 |
Panel 12 Round 4 | 5,376 |
Panel 13 Round 2 | 8,008 |
January-June 2009 | 22,960 |
Panel 12 Round 5 | 5,261 |
Panel 13 Round 3 | 7,800 |
Panel 14 Round 1 | 9,899 |
July-December 2009 | 15,339 |
Panel 13 Round 4 | 7,670 |
Panel 14 Round 2 | 7,669 |
January-June 2010 | 23,770 |
Panel 13 Round 5 | 7,576 |
Panel 14 Round 3 | 7,226 |
Panel 15 Round 1 | 8,968 |
July-December 2010 | 13,785 |
Panel 14 Round 4 | 6,974 |
Panel 15 Round 2 | 6,811 |
January-June 2011 | 23,693 |
Panel 14 Round 5 | 6,845 |
Panel 15 Round 3 | 6,431 |
Panel 16 Round 1 | 10,417 |
July-December 2011 | 14,802 |
Panel 15 Round 4 | 6,254 |
Panel 16 Round 2 | 8,548 |
January-June 2012 | 24,247 |
Panel 15 Round 5 | 6,156 |
Panel 16 Round 3 | 8,160 |
Panel 17 Round 1 | 9,931 |
July-December 2012 | 16,161 |
Panel 16 Round 4 | 8,048 |
Panel 17 Round 2 | 8,113 |
January-June 2013 | 25,788 |
Panel 16 Round 5 | 7,969 |
Panel 17 Round 3 | 7,869 |
Panel 18 Round 1 | 9,950 |
July-December 2013 | 15,347 |
Panel 17 Round 4 | 7,656 |
Panel 18 Round 2 | 7,691 |
January-June 2014 | 24,857 |
Panel 17 Round 5 | 7,485 |
Panel 18 Round 3 | 7,402 |
Panel 19 Round 1 | 9,970 |
July-December 2014 | 14,665 |
Panel 18 Round 4 | 7,203 |
Panel 19 Round 2 | 7,462 |
January-June 2015 | 25,185 |
Panel 18 Round 5 | 7,163 |
Panel 19 Round 3 | 7,168 |
Panel 20 Round 1 | 10,854 |
July-December 2015 | 15,247 |
Panel 19 Round 4 | 6,946 |
Panel 20 Round 2 | 8,301 |
January-June 2016 | 24,694 |
Panel 19 Round 5 | 6,856 |
Panel 20 Round 3 | 7,987 |
Panel 21 Round 1 | 9,851 |
July-December 2016 | 15,390 |
Panel 20 Round 4 | 7,729 |
Panel 21 Round 2 | 7,661 |
January-June 2017 | 24,774 |
Panel 20 Round 5 | 7,611 |
Panel 21 Round 3 | 7,327 |
Panel 22 Round 1 | 9,835 |
July-December 2017 | 14,396 |
Panel 21 Round 4 | 7,025 |
Panel 22 Round 2 | 7,371 |
January-June 2018 | 23,768 |
Panel 21 Round 5 | 6,899 |
Panel 22 Round 3 | 7,023 |
Panel 23 Round 1 | 9,846 |
July-December 2018 | 14,123 |
Panel 22 Round 4 | 6,788 |
Panel 23 Round 2 | 7,335 |
January-June 2019 | 23,458 |
Panel 22 Round 5 | 6,653 |
Panel 23 Round 3 | 6,941 |
Panel 24 Round 1 | 9,864 |
July-December 2019 | 13,847 |
Panel 23 Round 4 | 6,679 |
Panel 24 Round 2 | 7,168 |
January-June 2020 | 23,122 |
Panel 23 Round 5 | 6,488 |
Panel 24 Round 3 | 6,753 |
Panel 25 Round 1 | 9,881 |
July-December 2020 | 18,480 |
Panel 23 Round 6 | 6,373 |
Panel 24 Round 4 | 6,278 |
Panel 25 Round 2 | 5,829 |
January-June 2021 | 25,126 |
Panel 23 Round 7 | 5,096 |
Panel 24 Round 5 | 5,426 |
Panel 25 Round 3 | 5,094 |
Panel 26 Round 1 | 9,510 |
July-December 2021 | 19,340 |
Panel 23 Round 8 | 4,492 |
Panel 24 Round 6 | 4,753 |
Panel 25 Round 4 | 4,222 |
Panel 26 Round 2 | 5,873 |
* RU-level sample size for this table derived from field management system counts and operational reports detailing fielded sample.
Panel/round | Original sample | Split cases (movers) | Student cases | Out-of-scope cases | Net sample | Completes | Average interviewer hours/ complete | Response rate (%) | |
---|---|---|---|---|---|---|---|---|---|
Panel 1 | Round 1 | 10,799 | 675 | 125 | 165 | 11,434 | 9,496 | 10.4 | 83.1 |
Round 2 | 9,485 | 310 | 74 | 101 | 9,768 | 9,239 | 8.7 | 94.6 | |
Round 3 | 9,228 | 250 | 28 | 78 | 9,428 | 9,031 | 8.6 | 95.8 | |
Round 4 | 9,019 | 261 | 33 | 89 | 9,224 | 8,487 | 8.5 | 92.0 | |
Round 5 | 8,477 | 80 | 5 | 66 | 8,496 | 8,369 | 6.5 | 98.5 | |
Panel 2 | Round 1 | 6,461 | 431 | 71 | 151 | 6,812 | 5,660 | 12.9 | 83.1 |
Round 2 | 5,638 | 204 | 27 | 54 | 5,815 | 5,395 | 9.1 | 92.8 | |
Round 3 | 5,382 | 166 | 15 | 52 | 5,511 | 5,296 | 8.5 | 96.1 | |
Round 4 | 5,290 | 105 | 27 | 65 | 5,357 | 5,129 | 8.3 | 95.7 | |
Round 5 | 5,127 | 38 | 2 | 56 | 5,111 | 5,049 | 6.7 | 98.8 | |
Panel 3 | Round 1 | 5,410 | 349 | 44 | 200 | 5,603 | 4,599 | 12.7 | 82.1 |
Round 2 | 4,581 | 106 | 25 | 39 | 4,673 | 4,388 | 8.3 | 93.9 | |
Round 3 | 4,382 | 102 | 4 | 42 | 4,446 | 4,249 | 7.3 | 95.5 | |
Round 4 | 4,243 | 86 | 17 | 33 | 4,313 | 4,184 | 6.7 | 97.0 | |
Round 5 | 4,183 | 23 | 1 | 26 | 4,181 | 4,114 | 5.6 | 98.4 | |
Panel 4 | Round 1 | 7,103 | 371 | 64 | 134 | 7,404 | 5,948 | 10.9 | 80.3 |
Round 2 | 5,918 | 197 | 47 | 40 | 6,122 | 5,737 | 7.2 | 93.7 | |
Round 3 | 5,731 | 145 | 10 | 39 | 5,847 | 5,574 | 6.9 | 95.3 | |
Round 4 | 5,567 | 133 | 35 | 39 | 5,696 | 5,540 | 6.8 | 97.3 | |
Round 5 | 5,547 | 52 | 4 | 47 | 5,556 | 5,500 | 6.0 | 99.0 | |
Panel 5 | Round 1 | 5,533 | 258 | 62 | 103 | 5,750 | 4,670 | 11.1 | 81.2 |
Round 2 | 4,655 | 119 | 27 | 27 | 4,774 | 4,510 | 7.7 | 94.5 | |
Round 3 | 4,496 | 108 | 17 | 24 | 4,597 | 4,437 | 7.2 | 96.5 | |
Round 4 | 4,426 | 117 | 20 | 41 | 4,522 | 4,396 | 7.0 | 97.2 | |
Round 5 | 4,393 | 47 | 12 | 32 | 4,420 | 4,357 | 5.5 | 98.6 | |
Panel 6 | Round 1 | 11,026 | 595 | 135 | 200 | 11,556 | 9,382 | 10.8 | 81.2 |
Round 2 | 9,351 | 316 | 49 | 50 | 9,666 | 9,222 | 7.2 | 95.4 | |
Round 3 | 9,183 | 215 | 23 | 41 | 9,380 | 9,001 | 6.5 | 96.0 | |
Round 4 | 8,977 | 174 | 32 | 66 | 9,117 | 8,843 | 6.6 | 97.0 | |
Round 5 | 8,830 | 94 | 14 | 46 | 8,892 | 8,781 | 5.6 | 98.8 | |
Panel 7 | Round 1 | 8,339 | 417 | 76 | 122 | 8,710 | 7,008 | 10.0 | 80.5 |
Round 2 | 6,991 | 190 | 40 | 24 | 7,197 | 6,802 | 7.2 | 94.5 | |
Round 3 | 6,779 | 169 | 21 | 32 | 6,937 | 6,673 | 6.5 | 96.2 | |
Round 4 | 6,655 | 133 | 17 | 34 | 6,771 | 6,593 | 7.0 | 97.4 | |
Round 5 | 6,578 | 79 | 11 | 39 | 6,629 | 6,529 | 5.7 | 98.5 | |
Panel 8 | Round 1 | 8,706 | 441 | 73 | 175 | 9,045 | 7,177 | 10.0 | 79.3 |
Round 2 | 7,159 | 218 | 52 | 36 | 7,393 | 7,049 | 7.2 | 95.4 | |
Round 3 | 7,035 | 150 | 13 | 33 | 7,165 | 6,892 | 6.5 | 96.2 | |
Round 4 | 6,878 | 149 | 27 | 53 | 7,001 | 6,799 | 7.3 | 97.1 | |
Round 5 | 6,795 | 71 | 8 | 41 | 6,833 | 6,726 | 6.0 | 98.4 | |
Panel 9 | Round 1 | 8,939 | 417 | 73 | 179 | 9,250 | 7,205 | 10.5 | 77.9 |
Round 2 | 7,190 | 237 | 40 | 40 | 7,427 | 7,027 | 7.7 | 94.6 | |
Round 3 | 7,005 | 189 | 24 | 31 | 7,187 | 6,861 | 7.1 | 95.5 | |
Round 4 | 6,843 | 142 | 23 | 44 | 6,964 | 6,716 | 7.4 | 96.5 | |
Round 5 | 6,703 | 60 | 8 | 43 | 6,728 | 6,627 | 6.1 | 98.5 | |
Panel 10 | Round 1 | 8,748 | 430 | 77 | 169 | 9,086 | 7,175 | 11.0 | 79.0 |
Round 2 | 7,148 | 219 | 36 | 22 | 7,381 | 6,940 | 7.8 | 94.0 | |
Round 3 | 6,921 | 156 | 10 | 31 | 7,056 | 6,727 | 6.8 | 95.3 | |
Round 4 | 6,708 | 155 | 13 | 34 | 6,842 | 6,590 | 7.3 | 96.3 | |
Round 5 | 6,596 | 55 | 9 | 38 | 6,622 | 6,461 | 6.2 | 97.6 | |
Panel 11 | Round 1 | 9,654 | 399 | 81 | 162 | 9,972 | 7,585 | 11.5 | 76.1 |
Round 2 | 7,572 | 244 | 42 | 24 | 7,834 | 7,276 | 7.8 | 92.9 | |
Round 3 | 7,263 | 170 | 15 | 25 | 7,423 | 7,007 | 6.9 | 94.4 | |
Round 4 | 7,005 | 139 | 14 | 36 | 7,122 | 6,898 | 7.2 | 96.9 | |
Round 5 | 6,895 | 51 | 7 | 44 | 6,905 | 6,781 | 5.5 | 98.2 | |
Panel 12 | Round 1 | 7,467 | 331 | 86 | 172 | 7,712 | 5,901 | 14.2 | 76.5 |
Round 2 | 5,901 | 157 | 27 | 27 | 6,058 | 5,584 | 9.1 | 92.2 | |
Round 3 | 5,580 | 105 | 13 | 12 | 5,686 | 5,383 | 8.1 | 94.7 | |
Round 4 | 5,376 | 102 | 12 | 16 | 5,474 | 5,267 | 8.8 | 96.2 | |
Round 5 | 5,261 | 50 | 8 | 21 | 5,298 | 5,182 | 6.4 | 97.8 | |
Panel 13 | Round 1 | 9,939 | 502 | 97 | 213 | 10,325 | 8,017 | 12.2 | 77.6 |
Round 2 | 8,008 | 220 | 47 | 23 | 8,252 | 7,809 | 9.0 | 94.6 | |
Round 3 | 7,802 | 204 | 14 | 38 | 7,982 | 7,684 | 7.2 | 96.2 | |
Round 4 | 7,670 | 162 | 17 | 40 | 7,809 | 7,576 | 7.5 | 97.0 | |
Round 5 | 7,576 | 70 | 15 | 38 | 7,623 | 7,461 | 6.1 | 97.9 | |
Panel 14 | Round 1 | 9,899 | 394 | 74 | 140 | 10,227 | 7,650 | 12.3 | 74.8 |
Round 2 | 7,669 | 212 | 29 | 27 | 7,883 | 7,239 | 8.3 | 91.8 | |
Round 3 | 7,226 | 144 | 23 | 34 | 7,359 | 6,980 | 7.3 | 94.9 | |
Round 4 | 6,974 | 112 | 23 | 30 | 7,079 | 6,853 | 7.7 | 96.8 | |
Round 5 | 6,845 | 55 | 9 | 30 | 6,879 | 6,761 | 6.2 | 98.3 | |
Panel 15 | Round 1 | 8,968 | 374 | 73 | 157 | 9,258 | 6,802 | 13.2 | 73.5 |
Round 2 | 6,811 | 171 | 19 | 21 | 6,980 | 6,435 | 8.9 | 92.2 | |
Round 3 | 6,431 | 134 | 23 | 22 | 6,566 | 6,261 | 7.2 | 95.4 | |
Round 4 | 6,254 | 116 | 15 | 26 | 6,359 | 6,165 | 7.8 | 97.0 | |
Round 5 | 6,156 | 50 | 5 | 19 | 6,192 | 6,078 | 6.0 | 98.2 | |
Panel 16 | Round 1 | 10,417 | 504 | 98 | 555 | 10,940 | 8,553 | 11.4 | 78.2 |
Round 2 | 8,353 | 248 | 40 | 32 | 8,821 | 8,351 | 7.6 | 94.7 | |
Round 3 | 8,160 | 223 | 19 | 27 | 8,375 | 8,236 | 6.4 | 96.1 | |
Round 4 | 8,048 | 151 | 16 | 13 | 8,390 | 8,162 | 6.6 | 97.3 | |
Round 5 | 7,969 | 66 | 13 | 25 | 8,198 | 7,998 | 5.5 | 97.6 | |
Panel 17 | Round 1 | 9,931 | 490 | 92 | 127 | 10,386 | 8,121 | 11.7 | 78.2 |
Round 2 | 8,113 | 230 | 35 | 19 | 8,359 | 7,874 | 7.9 | 94.2 | |
Round 3 | 7,869 | 180 | 15 | 15 | 8,049 | 7,663 | 6.3 | 95.2 | |
Round 4 | 7,656 | 199 | 19 | 30 | 7,844 | 7,494 | 7.4 | 95.5 | |
Round 5 | 7,485 | 87 | 10 | 23 | 7,559 | 7,445 | 6.1 | 98.5 | |
Panel 18 | Round 1 | 9,950 | 435 | 83 | 111 | 10,357 | 7,683 | 12.3 | 74.2 |
Round 2 | 7,691 | 264 | 32 | 16 | 7,971 | 7,402 | 9.2 | 92.9 | |
Round 3 | 7,402 | 235 | 21 | 22 | 7,635 | 7,213 | 7.6 | 94.5 | |
Round 4 | 7,203 | 189 | 14 | 22 | 7,384 | 7,172 | 7.5 | 97.1 | |
Round 5 | 7,163 | 94 | 12 | 15 | 7,254 | 7,138 | 6.2 | 98.4 | |
Panel 19 | Round 1 | 9,970 | 492 | 70 | 115 | 10,417 | 7,475 | 13.5 | 71.8 |
Round 2 | 7,460 | 222 | 23 | 24 | 7,681 | 7,188 | 8.4 | 93.6 | |
Round 3 | 7,168 | 187 | 12 | 17 | 7,350 | 6,962 | 7.0 | 94.7 | |
Round 4 | 6,946 | 146 | 20 | 23 | 7,089 | 6,858 | 7.4 | 96.7 | |
Round 5 | 6,856 | 75 | 7 | 24 | 6,914 | 6,794 | 5.9 | 98.3 | |
Panel 20 | Round 1 | 10,854 | 496 | 85 | 117 | 11,318 | 8,318 | 12.5 | 73.5 |
Round 2 | 8,301 | 243 | 39 | 22 | 8,561 | 7,998 | 8.3 | 93.4 | |
Round 3 | 7,987 | 173 | 17 | 26 | 8,151 | 7,753 | 6.8 | 95.1 | |
Round 4 | 7,729 | 161 | 19 | 31 | 7,878 | 7,622 | 7.2 | 96.8 | |
Round 5 | 7,611 | 99 | 13 | 23 | 7,700 | 7,421 | 6.0 | 96.4 | |
Panel 21 | Round 1 | 9,851 | 462 | 92 | 89 | 10,316 | 7,674 | 12.6 | 74.4 |
Round 2 | 7,661 | 207 | 32 | 17 | 7,883 | 7,327 | 8.5 | 93.0 | |
Round 3 | 7,327 | 166 | 14 | 19 | 7,488 | 7,043 | 7.2 | 94.1 | |
Round 4 | 7,025 | 119 | 14 | 20 | 7,138 | 6,907 | 7.0 | 96.8 | |
Round 5 | 6,914 | 42 | 8 | 34 | 6,930 | 6,778 | 5.9 | 97.8 | |
Panel 22 | Round 1 | 9,835 | 352 | 68 | 86 | 10,169 | 7,381 | 12.8 | 72.6 |
Round 2 | 7,371 | 166 | 19 | 11 | 7,545 | 7,039 | 8.5 | 93.3 | |
Round 3 | 7,071 | 100 | 12 | 19 | 7,164 | 6,808 | 6.7 | 95.0 | |
Round 4 | 6,815 | 91 | 13 | 18 | 6,901 | 6,672 | 6.8 | 96.7 | |
Round 5 | 6,670 | 35 | 7 | 12 | 6,700 | 6,584 | 5.3 | 98.3 | |
Panel 23 | Round 1 | 9,960 | 1,931 | 46 | 110 | 10,089 | 7,351 | 12.5 | 72.9 |
Round 2 | 7,387 | 106 | 14 | 15 | 7,492 | 6,960 | 8.2 | 92.9 | |
Round 3 | 6,987 | 102 | 11 | 18 | 7,082 | 6,703 | 6.1 | 94.6 | |
Round 4 | 6,704 | 74 | 10 | 12 | 6,776 | 6,522 | 6.6 | 96.2 | |
Round 5 | 6,503 | 34 | 4 | 5 | 6,536 | 6,383 | 5.3 | 97.7 | |
Round 6 | 6,498 | 90 | 10 | 18 | 6,480 | 5,120 | 4.8 | 79.0 | |
Round 7 | 5,176 | 36 | 5 | 6 | 5,170 | 4,513 | 5.2 | 87.3 | |
Round 8 | 4,558 | 27 | 3 | 10 | 4,548 | 3,984 | 5.8 | 87.6 | |
Panel 24 | Round 1 | 9,976 | 153 | 43 | 82 | 10,090 | 7,186 | 11.8 | 71.2 |
Round 2 | 7,211 | 98 | 19 | 5 | 7,323 | 6,777 | 7.9 | 92.5 | |
Round 3 | 6,812 | 76 | 9 | 7 | 6,890 | 6,289 | 6.0 | 91.3 | |
Round 4 | 6,335 | 44 | 4 | 13 | 6,370 | 5,446 | 5.1 | 85.5 | |
Round 5 | 5,510 | 31 | 4 | 15 | 5,495 | 4,770 | 5.3 | 86.8 | |
Round 6 | 4,816 | 22 | 8 | 8 | 4,808 | 3,959 | 5.7 | 82.3 | |
Panel 25 | Round 1 | 10,008 | 184 | 38 | 78 | 10,152 | 6,265 | 10.8 | 61.7 |
Round 2 | 5,907 | 49 | 14 | 12 | 5,958 | 4,677 | 5.5 | 78.5 | |
Round 3 | 5,191 | 38 | 5 | 2 | 5,189 | 4,230 | 6.1 | 81.5 | |
Round 4 | 4,314 | 40 | 10 | 7 | 4,307 | 3,685 | 7.3 | 85.6 | |
Round 5 | |||||||||
Panel 26 | Round 1 | 9,674 | 160 | 29 | 68 | 9,795 | 5,882 | 11.1 | 60.1 |
Round 2 | 6,047 | 83 | 11 | 2 | 6,045 | 4,799 | 9.0 | 79.4 | |
Round 3 | |||||||||
Round 4 | |||||||||
Round 5 |
* Figures in the table are weighted to reflect results of the interim nonresponse subsampling procedure implemented in the first round of Panel 16.
Round 1 | Round 2 | Round 3 | Round 4 | Round 5 | Round 6 | Round 7 | Round 8 | |
---|---|---|---|---|---|---|---|---|
2010 | ||||||||
Panel 15 | 73.5 | 92.2 | ||||||
Panel 14 | 94.9 | 96.8 | ||||||
Panel 13 | 97.9 | |||||||
2011 | ||||||||
Panel 16 | 78.2 | 94.8 | ||||||
Panel 15 | 95.4 | 97.0 | ||||||
Panel 14 | 98.3 | |||||||
2012 | ||||||||
Panel 17 | 78.2 | 94.2 | ||||||
Panel 16 | 96.1 | 97.3 | ||||||
Panel 15 | 98.2 | |||||||
2013 | ||||||||
Panel 18 | 74.2 | 92.9 | ||||||
Panel 17 | 95.2 | 95.5 | ||||||
Panel 16 | 97.6 | |||||||
2014 | ||||||||
Panel 19 | 71.8 | 93.6 | ||||||
Panel 18 | 94.5 | 97.1 | ||||||
Panel 17 | 98.5 | |||||||
2015 | ||||||||
Panel 20 | 73.5 | 93.4 | ||||||
Panel 19 | 94.7 | 96.7 | ||||||
Panel 18 | 98.4 | |||||||
2016 | ||||||||
Panel 21 | 74.4 | 93.0 | ||||||
Panel 20 | 95.1 | 96.8 | ||||||
Panel 19 | 98.3 | |||||||
2017 | ||||||||
Panel 22 | 72.6 | 93.3 | ||||||
Panel 21 | 94.1 | 96.8 | ||||||
Panel 20 | 96.4 | |||||||
2018 | ||||||||
Panel 23 | 72.9 | 92.9 | ||||||
Panel 22 | 95.0 | 96.7 | ||||||
Panel 21 | 97.8 | |||||||
2019 | ||||||||
Panel 24 | 71.2 | 92.5 | ||||||
Panel 23 | 94.6 | 96.2 | ||||||
Panel 22 | 98.3 | |||||||
2020 | ||||||||
Panel 25 | 61.7 | 78.5 | ||||||
Panel 24 | 91.3 | 85.5 | ||||||
Panel 23 | 97.7 | 79.0 | ||||||
2021 | ||||||||
Panel 26 | 60.1 | 79.4 | ||||||
Panel 25 | 81.5 | 85.6 | ||||||
Panel 24 | 86.8 | 82.3 | ||||||
Panel 23 | 87.3 | 87.6 |
| 2013 P18R1 | 2014 P19R1 | 2015 P20R1 | 2016 P21R1 | 2017 P22R1 | 2018 P23R1 | 2019 P24R1 | 2020 P25R1 | 2021 P26R1 |
---|---|---|---|---|---|---|---|---|---|
Total Sample | 10,468 | 10,532 | 11,435 | 10,405 | 10,255 | 10,199 | 10,172 | 10,230 | 9,863 |
Out of scope (%) | 1.1 | 1.1 | 1.0 | 0.9 | 0.8 | 1.1 | 0.8 | 0.8 | 0.7 |
Complete (%) | 74.2 | 71.8 | 73.5 | 74.4 | 72.6 | 72.1 | 70.6 | 61.2 | 59.6 |
Nonresponse (%) | 25.8 | 28.2 | 26.5 | 25.6 | 27.4 | 26.9 | 28.6 | 38.0 | 39.7 |
Refusal (%) | 20.1 | 22.4 | 21.0 | 20.2 | 21.8 | 22.1 | 24.0 | 28.7 | 31.2 |
Not located (%) | 4.3 | 4.2 | 4.3 | 3.7 | 3.9 | 3.1 | 3.1 | 3.2 | 4.3 |
Other nonresponse (%) | 1.4 | 1.6 | 1.2 | 1.7 | 1.7 | 1.7 | 1.5 | 6.1 | 4.2 |
| 2013 P18R1 | 2014 P19R1 | 2015 P20R1 | 2016 P21R1 | 2017 P22R1 | 2018 P23R1 | 2019 P24R1 | 2020 P25R1 | 2021 P26R1 |
---|---|---|---|---|---|---|---|---|---|
Original NHIS sample (N) | 9,951 | 9,970 | 10,854 | 9,851 | 9,835 | 9,839 | 9,864 | 9,866 | 9,509 |
Percent complete in NHIS | 78.1 | 81.9 | 80.6 | 77.6 | 81.0 | 80.4 | 84.2 | 89.3 | 85.3 |
Percent partial complete in NHIS | 21.9 | 18.1 | 19.4 | 22.4 | 19.0 | 19.6 | 15.8 | 10.7 | 14.7 |
MEPS Round 1 response rate: | |||||||||
Percent complete for NHIS completes | 76.9 | 74.5 | 75.9 | 77.3 | 75.4 | 75.4 | 73.5 | 63.5 | 63.1 |
Percent complete for NHIS partial completes | 64.5 | 58.9 | 63.1 | 64.8 | 62.0 | 63.6 | 60.3 | 46.8 | 44.1 |
Note: Figures shown are based on original NHIS sample and exclude reporting units added to the sample as “splits” and “students.”
Panel | Net sample (N) | Ever refused (%) | Converted (%) | Final refusal rate (%) | Final response rate (%) |
---|---|---|---|---|---|
Panel 15 | 9,258 | 29.4 | 26.6 | 21.0 | 73.5 |
Panel 16 | 10,940 | 26.3 | 30.9 | 17.6 | 78.2 |
Panel 17 | 10,386 | 25.3 | 30.2 | 17.2 | 78.2 |
Panel 18 | 10,357 | 25.5 | 25.0 | 18.1 | 74.2 |
Panel 19 | 10,418 | 30.1 | 23.3 | 22.4 | 71.8 |
Panel 20 | 11,318 | 30.1 | 29.2 | 21.0 | 73.5 |
Panel 21 | 10,316 | 29.1 | 29.0 | 20.2 | 74.4 |
Panel 22 | 10,169 | 30.1 | 27.6 | 21.8 | 72.6 |
Panel 23 | 10,089 | 31.3 | 25.6 | 22.4 | 72.9 |
Panel 24 | 10,090 | 32.6 | 23.4 | 24.2 | 71.2 |
Panel 25 | 10,152 | 34.8 | 12.3 | 28.9 | 61.7 |
Panel 26 | 9,795 | 40.4 | 19.3 | 31.4 | 60.0 |
Panel | Total sample (N) | Ever traced (%) | Not located (%) |
---|---|---|---|
Panel 15 | 9,415 | 16.7 | 4.1 |
Panel 16 | 11,019 | 18.2 | 3.0 |
Panel 17 | 10,513 | 18.7 | 3.6 |
Panel 18 | 10,468 | 16.0 | 4.3 |
Panel 19 | 10,532 | 19.5 | 4.1 |
Panel 20 | 11,435 | 14.0 | 4.3 |
Panel 21 | 10,405 | 12.8 | 3.7 |
Panel 22 | 10,228 | 13.0 | 3.9 |
Panel 23 | 10,199 | 12.7 | 3.0 |
Panel 24 | 10,172 | 12.6 | 3.0 |
Panel 25 | 10,230 | 11.7 | 3.2 |
Panel 26 | 9,863 | 11.3 | 4.3 |
Round | Panel 16 | Panel 17 | Panel 18 | Panel 19 | Panel 20 | Panel 21 | Panel 22 | Panel 23 | Panel 24 | Panel 25 | Panel 26 |
---|---|---|---|---|---|---|---|---|---|---|---|
Round 1 | 74.0 | 67.8 | 78.0 | 85.5 | 76.4 | 75.5 | 79.9 | 78.1 | 79.5 | 89.0 | 92.9 |
Round 2 | 88.1 | 90.2 | 102.9 | 92.3 | 86.3 | 85.3 | 88.8 | 88.2 | 87.0 | 89.7 | 93.3 |
Round 3 | 87.2 | 94.3 | 103.1 | 94.5 | 89.7 | 93.4 | 93.0 | 92.6 | 98.5 | 100.0 | |
Round 4 | 85.9 | 99.6 | 89.0 | 84.6 | 80.5 | 82.7 | 84.3 | 86.8 | 86.2 | 93.2 | |
Round 5 | 85.4 | 92.2 | 87.4 | 84.1 | 85.3 | 77.4 | 78.8 | 78.7 | 97.1 | ||
Round 6 | 88.4 | 89.7 | |||||||||
Round 7 | 96.6 | ||||||||||
Round 8 | 90.1 |
Contact type | Panel 20, Round 1 | Panel 21, Round 1 | Panel 22, Round 1 | Panel 23, Round 1 | Panel 24, Round 1 | Panel 25, Round 1 | Panel 26, Round 1 | ||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
All RUs | Complete | Partial | All RUs | Complete | Partial | All RUs | Complete | Partial | All RUs | Complete | Partial | All RUs | Complete | Partial | All RUs | Complete | Partial | All RUs | Complete | Partial | |
N | 10,854 | 8,751 | 2,103 | 9,851 | 7,645 | 2,206 | 9,835 | 7,963 | 1,872 | 9,839 | 7,913 | 1,926 | 9,864 | 8,306 | 1,558 | 9,866 | 8,814 | 1,052 | 9,509 | 8,113 | 1,396 |
% of all RUs | 100 | 81.0 | 19.0 | 100 | 77.6 | 22.4 | 100 | 81.0 | 19.0 | 100 | 80.4 | 19.6 | 100 | 84.2 | 15.8 | 100 | 89.3 | 10.7 | 100 | 85.3 | 14.7 |
In-person | 7.2 | 6.9 | 8.5 | 7.0 | 6.9 | 8.3 | 6.3 | 6.1 | 7.3 | 6.2 | 6.0 | 7.2 | 5.5 | 5.4 | 6.3 | 2.6 | 2.5 | 2.6 | 2.4 | 2.3 | 3.1 |
Telephone | 2.1 | 2.0 | 2.5 | 2.0 | 1.9 | 2.4 | 1.5 | 1.5 | 1.7 | 1.5 | 1.4 | 1.7 | 1.3 | 1.2 | 1.6 | 9.7 | 9.5 | 11.6 | 8.8 | 8.7 | 9.8 |
Total | 9.6 | 9.2 | 11.4 | 9.3 | 8.9 | 11.0 | 8.4 | 8.1 | 9.6 | 8.2 | 7.9 | 9.5 | 7.3 | 7.1 | 8.5 | 14.4 | 14.1 | 17.0 | 13.1 | 12.8 | 14.9 |
Panel/round | Authorization forms requested | Authorization forms signed | Signing rate (%) | |
---|---|---|---|---|
Panel 1 | Round 1 | 3,562 | 2,624 | 73.7 |
Round 2 | 19,874 | 14,145 | 71.2 | |
Round 3 | 17,722 | 12,062 | 68.1 | |
Round 4 | 17,133 | 10,542 | 61.5 | |
Round 5 | 12,544 | 6,763 | 53.9 | |
Panel 2 | Round 1 | 2,735 | 1,788 | 65.4 |
Round 2 | 13,461 | 9,433 | 70.1 | |
Round 3 | 11,901 | 7,537 | 63.3 | |
Round 4 | 11,164 | 6,485 | 58.1 | |
Round 5 | 8,104 | 4,244 | 52.4 | |
Panel 3 | Round 1 | 2,078 | 1,349 | 64.9 |
Round 2 | 10,335 | 6,463 | 62.5 | |
Round 3 | 8,716 | 4,797 | 55.0 | |
Round 4 | 8,761 | 4,246 | 48.5 | |
Round 5 | 6,913 | 2,911 | 42.1 | |
Panel 4 | Round 1 | 2,400 | 1,607 | 67.0 |
Round 2 | 12,711 | 8,434 | 66.4 | |
Round 3 | 11,078 | 6,642 | 60.0 | |
Round 4 | 11,047 | 6,888 | 62.4 | |
Round 5 | 8,684 | 5,096 | 58.7 | |
Panel 5 | Round 1 | 1,243 | 834 | 67.1 |
Round 2 | 14,008 | 9,618 | 68.7 | |
Round 3 | 12,869 | 8,301 | 64.5 | |
Round 4 | 13,464 | 9,170 | 68.1 | |
Round 5 | 10,888 | 7,025 | 64.5 | |
Panel 6 | Round 1 | 2,783 | 2,012 | 72.3 |
Round 2 | 29,861 | 22,872 | 76.6 | |
Round 3 | 26,068 | 18,219 | 69.9 | |
Round 4 | 27,146 | 20,082 | 74.0 | |
Round 5 | 21,022 | 14,581 | 69.4 | |
Panel 7 | Round 1 | 2,298 | 1,723 | 75.0 |
Round 2 | 22,302 | 17,557 | 78.7 | |
Round 3 | 19,312 | 13,896 | 72.0 | |
Round 4 | 16,934 | 13,725 | 81.1 | |
Round 5 | 14,577 | 11,099 | 76.1 | |
Panel 8 | Round 1 | 2,287 | 1,773 | 77.5 |
Round 2 | 22,533 | 17,802 | 79.0 | |
Round 3 | 19,530 | 14,064 | 72.0 | |
Round 4 | 19,718 | 14,599 | 74.0 | |
Round 5 | 15,856 | 11,106 | 70.0 | |
Panel 9 | Round 1 | 2,253 | 1,681 | 74.6 |
Round 2 | 22,668 | 17,522 | 77.3 | |
Round 3 | 19,601 | 13,672 | 69.8 | |
Round 4 | 20,147 | 14,527 | 72.1 | |
Round 5 | 15,963 | 10,720 | 67.2 | |
Panel 10 | Round 1 | 2,068 | 1,443 | 69.8 |
Round 2 | 22,582 | 17,090 | 75.7 | |
Round 3 | 18,967 | 13,396 | 70.6 | |
Round 4 | 19,087 | 13,296 | 69.7 | |
Round 5 | 15,787 | 10,476 | 66.4 | |
Panel 11 | Round 1 | 2,154 | 1,498 | 69.5 |
Round 2 | 23,957 | 17,742 | 74.1 | |
Round 3 | 20,756 | 13,400 | 64.6 | |
Round 4 | 21,260 | 14,808 | 69.7 | |
Round 5 | 16,793 | 11,482 | 68.4 | |
Panel 12 | Round 1 | 1,695 | 1,066 | 62.9 |
Round 2 | 17,787 | 12,524 | 70.4 | |
Round 3 | 15,291 | 10,006 | 65.4 | |
Round 4 | 15,692 | 10,717 | 68.3 | |
Round 5 | 12,780 | 8,367 | 65.5 | |
Panel 13 | Round 1 | 2,217 | 1,603 | 72.3 |
Round 2 | 24,357 | 18,566 | 76.2 | |
Round 3 | 21,058 | 14,826 | 70.4 | |
Round 4 | 21,673 | 15,632 | 72.1 | |
Round 5 | 17,158 | 11,779 | 68.7 | |
Panel 14 | Round 1 | 2,128 | 1,498 | 70.4 |
Round 2 | 23,138 | 17,739 | 76.7 | |
Round 3 | 19,024 | 13,673 | 71.9 | |
Round 4 | 18,532 | 12,824 | 69.2 | |
Round 5 | 15,444 | 10,201 | 66.1 | |
Panel 15 | Round 1 | 1,680 | 1,136 | 67.6 |
Round 2 | 18,506 | 13,628 | 73.6 | |
Round 3 | 16,686 | 11,652 | 69.8 | |
Round 4 | 16,260 | 11,139 | 68.5 | |
Round 5 | 13,443 | 8,420 | 62.6 | |
Panel 16 | Round 1 | 1,811 | 1,223 | 67.5 |
Round 2 | 23,718 | 17,566 | 74.1 | |
Round 3 | 21,780 | 14,828 | 68.1 | |
Round 4 | 21,537 | 16,329 | 75.8 | |
Round 5 | 16,688 | 12,028 | 72.1 | |
Panel 17 | Round 1 | 1,655 | 1,117 | 67.5 |
Round 2 | 21,749 | 17,694 | 81.4 | |
Round 3 | 19,292 | 15,125 | 78.4 | |
Round 4 | 20,086 | 15,691 | 78.1 | |
Round 5 | 15,064 | 11,873 | 78.8 | |
Panel 18 | Round 1 | 1,677 | 1,266 | 75.5 |
Round 2 | 22,714 | 18,043 | 79.4 | |
Round 3 | 20,728 | 15,827 | 76.4 | |
Round 4 | 17,092 | 13,704 | 80.2 | |
Round 5 | 15,448 | 11,796 | 76.4 | |
Panel 19 | Round 1 | 2,189 | 1,480 | 67.6 |
Round 2 | 22,671 | 17,190 | 75.8 | |
Round 3 | 20,582 | 14,534 | 70.6 | |
Round 4 | 17,102 | 13,254 | 77.5 | |
Round 5 | 15,330 | 11,425 | 74.5 | |
Panel 20 | Round 1 | 2,354 | 1,603 | 68.1 |
Round 2 | 25,334 | 18,479 | 72.9 | |
Round 3 | 22,851 | 15,862 | 69.4 | |
Round 4 | 18,234 | 14,026 | 76.9 | |
Round 5 | 16,274 | 12,100 | 74.4 | |
Panel 21 | Round 1 | 2,037 | 1,396 | 68.5 |
Round 2 | 22,984 | 17,295 | 75.2 | |
Round 3 | 20,802 | 14,898 | 71.6 | |
Round 4 | 16,487 | 13,110 | 79.5 | |
Round 5 | 20,443 | 16,247 | 79.5 | |
Panel 22 | Round 1 | 2,274 | 1,573 | 69.2 |
Round 2 | 22,913 | 17,530 | 76.5 | |
Round 3 | 26,436 | 19,496 | 73.7 | |
Round 4 | 23,249 | 18,097 | 77.8 | |
Round 5 | 17,171 | 12,168 | 70.9 | |
Panel 23 | Round 1 | 1,982 | 1,533 | 77.3 |
Round 2 | 29,576 | 21,850 | 73.9 | |
Round 3 | 23,365 | 14,475 | 62.4 | |
Round 4 | 19,220 | 13,483 | 70.2 | |
Round 5 | 17,569 | 10,903 | 62.1 | |
Round 6 | 12,701 | 8,002 | 63.0 | |
Round 7 | 13,254 | 8,108 | 61.2 | |
Round 8 | 11,589 | 7,624 | 65.8 | |
Panel 24 | Round 1 | 2,285 | 1,306 | 57.2 |
Round 2 | 24,755 | 15,865 | 64.1 | |
Round 3 | 22,657 | 11,522 | 50.9 | |
Round 4 | 14,612 | 7,716 | 52.8 | |
Round 5 | 15,992 | 8,941 | 55.9 | |
Round 6 | 11,366 | 6,658 | 58.6 | |
Panel 25 | Round 1 | 3,110 | 1,242 | 39.9 |
Round 2 | 15,259 | 7,292 | 47.8 | |
Round 3 | 15,932 | 8,100 | 50.8 | |
Round 4 | 11,252 | 7,204 | 64.0 | |
Panel 26 | Round 1 | 2,432 | 1,151 | 47.3 |
Round 2 | 17,765 | 10,564 | 59.5 |
Panel/round | Permission forms requested | Permission forms signed | Signing rate (%) | |
---|---|---|---|---|
Panel 1 | Round 3 | 19,913 | 14,468 | 72.7 |
Round 5 | 8,685 | 6,002 | 69.1 | |
Panel 2 | Round 3 | 12,241 | 8,694 | 71.0 |
Round 5 | 8,640 | 6,297 | 72.9 | |
Panel 3 | Round 3 | 9,016 | 5,929 | 65.8 |
Round 5 | 7,569 | 5,200 | 68.7 | |
Panel 4 | Round 3 | 11,856 | 8,280 | 69.8 |
Round 5 | 10,688 | 8,318 | 77.8 | |
Panel 5 | Round 3 | 9,248 | 6,852 | 74.1 |
Round 5 | 8,955 | 7,174 | 80.1 | |
Panel 6 | Round 3 | 19,305 | 15,313 | 79.3 |
Round 5 | 17,981 | 14,864 | 82.7 | |
Panel 7 | Round 3 | 14,456 | 11,611 | 80.3 |
Round 5 | 13,428 | 11,210 | 83.5 | |
Panel 8 | Round 3 | 14,391 | 11,533 | 80.1 |
Round 5 | 13,422 | 11,049 | 82.3 | |
Panel 9 | Round 3 | 14,334 | 11,189 | 78.1 |
Round 5 | 13,416 | 10,893 | 81.2 | |
Panel 10 | Round 3 | 13,928 | 10,706 | 76.9 |
Round 5 | 12,869 | 10,260 | 79.7 | |
Panel 11 | Round 3 | 14,937 | 11,328 | 75.8 |
Round 5 | 13,778 | 11,332 | 82.3 | |
Panel 12 | Round 3 | 10,840 | 8,242 | 76.0 |
Round 5 | 9,930 | 8,015 | 80.7 | |
Panel 13 | Round 3 | 15,379 | 12,165 | 79.1 |
Round 4 | 10,782 | 7,795 | 72.3 | |
Round 5 | 9,451 | 6,635 | 70.2 | |
Panel 14 | Round 2 | 11,841 | 9,151 | 77.3 |
Round 3 | 9,686 | 7,091 | 73.2 | |
Round 4 | 9,298 | 6,623 | 71.2 | |
Round 5 | 8,415 | 6,011 | 71.4 | |
Panel 15 | Round 2 | 9,698 | 7,092 | 73.1 |
Round 3 | 8,684 | 6,189 | 71.3 | |
Round 4 | 8,163 | 5,756 | 70.5 | |
Round 5 | 7,302 | 4,485 | 66.9 | |
Panel 16 | Round 2 | 12,093 | 8,892 | 73.5 |
Round 3 | 10,959 | 7,591 | 69.3 | |
Round 4 | 10,432 | 8,194 | 78.6 | |
Round 5 | 8,990 | 6,928 | 77.1 | |
Panel 17 | Round 2 | 14,181 | 12,567 | 88.6 |
Round 3 | 9,715 | 7,580 | 78.0 | |
Round 4 | 9,759 | 7,730 | 79.2 | |
Round 5 | 8,245 | 6,604 | 80.1 | |
Panel 18 | Round 2 | 10,977 | 8,755 | 79.8 |
Round 3 | 9,757 | 7,573 | 77.6 | |
Round 4 | 8,526 | 6,858 | 80.4 | |
Round 5 | 7,918 | 6,173 | 78.0 | |
Panel 19 | Round 2 | 10,749 | 8,261 | 76.9 |
Round 3 | 9,618 | 6,902 | 71.8 | |
Round 4 | 8,557 | 6,579 | 76.9 | |
Round 5 | 7,767 | 5,905 | 76.0 | |
Panel 20 | Round 2 | 12,074 | 8,796 | 72.9 |
Round 3 | 10,577 | 7,432 | 70.3 | |
Round 4 | 9,0994 | 6,945 | 76.3 | |
Round 5 | 8,312 | 6,339 | 76.3 | |
Panel 21 | Round 2 | 10,783 | 7,985 | 74.1 |
Round 3 | 9,540 | 6,847 | 71.8 | |
Round 4 | 8,172 | 6,387 | 78.2 | |
Round 5 | 6,684 | 5,336 | 79.8 | |
Panel 22 | Round 2 | 10,510 | 7,919 | 75.4 |
Round 3 | 8,053 | 5,953 | 73.9 | |
Round 4 | 7,284 | 5,670 | 77.8 | |
Round 5 | 5,726 | 71.1 | ||
Panel 23 | Round 2 | 8,834 | 6,514 | 73.8 |
Round 3 | 9,614 | 6,205 | 64.5 | |
Round 4 | 8,486 | 5,900 | 69.5 | |
Round 5 | 8,067 | 5,101 | 63.2 | |
Round 6 | 5,668 | 3,418 | 60.3 | |
Round 7 | 5,417 | 3,345 | 61.8 | |
Round 8 | 5,182 | 3,341 | 64.5 | |
Panel 24 | Round 2 | 10,265 | 6,676 | 65.0 |
Round 3 | 9,096 | 4,831 | 53.1 | |
Round 4 | 7,100 | 3,636 | 51.2 | |
Round 5 | 6,528 | 3,682 | 56.4 | |
Round 6 | 4,783 | 2,663 | 55.7 | |
Panel 25 | Round 2 | 6,783 | 3,180 | 46.9 |
Round 3 | 6,114 | 3,146 | 51.5 | |
Round 4 | 4,640 | 2,888 | 62.2 | |
Panel 26 | Round 2 | 6,961 | 4,105 | 59.0 |
Panel/round | SAQs requested | SAQs completed | SAQs refused | Other nonresponse | Response rate (%) | |
---|---|---|---|---|---|---|
Panel 1 | Round 2 | 16,577 | 9,910 | - | - | 59.8 |
Round 3 | 6,032 | 1,469 | 840 | 3,723 | 24.3 | |
Combined, 1996 | 16,577 | 11,379 | - | - | 68.6 | |
Panel 4* | Round 4 | 13,936 | 12,265 | 288 | 1,367 | 87.9 |
Round 5 | 1,683 | 947 | 314 | 422 | 56.3 | |
Combined, 2000 | 13,936 | 13,212 | - | - | 94.8 | |
Panel 5* | Round 2 | 11,239 | 9,833 | 191 | 1,213 | 86.9 |
Round 3 | 1,314 | 717 | 180 | 417 | 54.6 | |
Combined, 2000 | 11,239 | 10,550 | - | - | 93.9 | |
Round 4 | 7,812 | 6,790 | 198 | 824 | 86.9 | |
Round 5 | 1,022 | 483 | 182 | 357 | 47.3 | |
Combined, 2001 | 7,812 | 7,273 | 380 | 1,181 | 93.1 | |
Panel 6 | Round 2 | 16,577 | 14,233 | 412 | 1,932 | 85.9 |
Round 3 | 2,143 | 1,213 | 230 | 700 | 56.6 | |
Combined, 2001 | 16,577 | 15,446 | 642 | 2,632 | 93.2 | |
Round 4 | 15,687 | 13,898 | 362 | 1,427 | 88.6 | |
Round 5 | 1,852 | 967 | 377 | 508 | 52.2 | |
Combined, 2002 | 15,687 | 14,865 | 739 | 1,935 | 94.8 | |
Panel 7 | Round 2 | 12,093 | 10,478 | 196 | 1,419 | 86.6 |
Round 3 | 1,559 | 894 | 206 | 459 | 57.3 | |
Combined, 2002 | 12,093 | 11,372 | 402 | 1,878 | 94.0 | |
Round 4 | 11,703 | 10,125 | 285 | 1,292 | 86.5 | |
Round 5 | 1,493 | 786 | 273 | 434 | 52.7 | |
Combined, 2003 | 11,703 | 10,911 | 558 | 1,726 | 93.2 | |
Panel 8 | Round 2 | 12,533 | 10,765 | 203 | 1,565 | 85.9 |
Round 3 | 1,568 | 846 | 234 | 488 | 54.0 | |
Combined, 2003 | 12,533 | 11,611 | 437 | 2,053 | 92.6 | |
Round 4 | 11,996 | 10,534 | 357 | 1,105 | 87.8 | |
Round 5 | 1,400 | 675 | 344 | 381 | 48.2 | |
Combined, 2004 | 11,996 | 11,209 | 701 | 1,486 | 93.4 | |
Panel 9 | Round 2 | 12,541 | 10,631 | 381 | 1,529 | 84.8 |
Round 3 | 1,670 | 886 | 287 | 496 | 53.1 | |
Combined, 2004 | 12,541 | 11,517 | 668 | 2,025 | 91.9 | |
Round 4 | 11,913 | 10,357 | 379 | 1,177 | 86.9 | |
Round 5 | 1,478 | 751 | 324 | 403 | 50.8 | |
Combined, 2005 | 11,913 | 11,108 | 703 | 1,580 | 93.2 | |
Panel 10 | Round 2 | 12,360 | 10,503 | 391 | 1,466 | 85.0 |
Round 3 | 1,626 | 787 | 280 | 559 | 48.4 | |
Combined, 2005 | 12,360 | 11,290 | 671 | 2,025 | 91.3 |
Round 4 | 11,726 | 10,081 | 415 | 1,230 | 86.0 | |
Round 5 | 1,516 | 696 | 417 | 403 | 45.9 | |
Combined, 2006 | 11,726 | 10,777 | 832 | 1,633 | 91.9 | |
Panel 11 | Round 2 | 13,146 | 10,924 | 452 | 1,770 | 83.1 |
Round 3 | 1,908 | 948 | 349 | 611 | 49.7 | |
Combined, 2006 | 13,146 | 11,872 | 801 | 2,381 | 90.3 | |
Round 4 | 12,479 | 10,771 | 622 | 1,086 | 86.3 | |
Round 5 | 1,621 | 790 | 539 | 292 | 48.7 | |
Combined, 2007 | 12,479 | 11,561 | 1,161 | 1,378 | 92.6 | |
Panel 12 | Round 2 | 10,061 | 8,419 | 502 | 1,140 | 83.7 |
Round 3 | 1,460 | 711 | 402 | 347 | 48.7 | |
Combined, 2007 | 10,061 | 9,130 | 904 | 1,487 | 90.7 | |
Round 4 | 9,550 | 8,303 | 577 | 670 | 86.9 | |
Round 5 | 1,145 | 541 | 415 | 189 | 47.3 | |
Combined, 2008 | 9,550 | 8,844 | 992 | 859 | 92.6 | |
Panel 13 | Round 2 | 14,410 | 12,541 | 707 | 1,162 | 87.0 |
Round 3 | 1,630 | 829 | 439 | 362 | 50.9 | |
Combined, 2008 | 14,410 | 13,370 | 1,146 | 1,524 | 92.8 | |
Round 4 | 13,822 | 12,311 | 559 | 952 | 89.1 | |
Round 5 | 1,364 | 635 | 476 | 253 | 46.6 | |
Combined, 2009 | 13,822 | 12,946 | 1,705 | 1,205 | 93.7 |
Panel 14 | Round 2 | 13,335 | 11,528 | 616 | 1,191 | 86.5 |
Round 3 | 1,542 | 818 | 426 | 298 | 53.1 | |
Combined, 2009 | 13,335 | 12,346 | 1,042 | 1,489 | 92.6 |
Round 4 | 12,527 | 11,041 | 644 | 839 | 88.1 | |
Round 5 | 1,403 | 645 | 497 | 261 | 46.0 | |
Combined, 2010 | 12,527 | 11,686 | 1,141 | 1,100 | 93.3 | |
Panel 15 | Round 2 | 11,857 | 10,121 | 637 | 1,096 | 85.4 |
Round 3 | 1,491 | 725 | 425 | 341 | 48.6 | |
Combined, 2010 | 11,857 | 10,846 | 1,062 | 1,437 | 91.5 | |
Round 4 | 11,311 | 9,804 | 572 | 935 | 86.7 | |
Round 5 | 1,418 | 678 | 461 | 279 | 47.8 | |
Combined, 2011 | 11,311 | 10,482 | 1,033 | 1,214 | 92.6 | |
Panel 16 | Round 2 | 15,026 | 12,926 | 707 | 1,393 | 86.0 |
Round 3 | 1,863 | 949 | 465 | 449 | 50.9 | |
Combined, 2011 | 15,026 | 13,875 | 1,172 | 728 | 92.3 | |
Round 4 | 13,620 | 12,415 | 582 | 623 | 91.2 | |
Round 5 | 1,112 | 516 | 442 | 154 | 46.4 | |
Combined, 2012 | 13,620 | 12,931 | 1,024 | 777 | 94.9 | |
Panel 17 | Round 2 | 14,181 | 12,567 | 677 | 937 | 88.6 |
Round 3 | 1,395 | 690 | 417 | 288 | 49.5 | |
Combined, 2012 | 14,181 | 13,257 | 1,094 | 1,225 | 93.5 | |
Round 4 | 13,086 | 11,566 | 602 | 918 | 88.4 | |
Round 5 | 1,429 | 655 | 504 | 270 | 45.8 | |
Combined, 2013 | 13,086 | 12,221 | 1,106 | 1,188 | 93.4 | |
Panel 18 | Round 2 | 13,158 | 10,805 | 785 | 1,568 | 82.1 |
Round 3 | 2,066 | 1,022 | 547 | 497 | 48.5 | |
Combined, 2013 | 13,158 | 11,827 | 1,332 | 2,065 | 89.9 | |
Round 4 | 12,243 | 10,050 | 916 | 1,277 | 82.1 | |
Round 5 | 2,063 | 936 | 721 | 406 | 45.4 | |
Combined, 2014 | 12,243 | 10,986 | 1,637 | 1,683 | 89.7 | |
Panel 19 | Round 2 | 12,664 | 10,047 | 1,014 | 1,603 | 79.3 |
Round 3 | 2,306 | 1,050 | 694 | 615 | 44.5 | |
Combined, 2014 | 12,664 | 11,097 | 1,708 | 2,218 | 87.6 | |
Round 4 | 11,782 | 9,542 | 1,047 | 1,175 | 81.0 | |
Round 5 | 2,131 | 894 | 822 | 414 | 42.0 | |
Combined, 2015 | 11,782 | 10,436 | 1,869 | 1,589 | 88.6 | |
Panel 20 | Round 2 | 14,077 | 10,885 | 1,223 | 1,966 | 77.3 |
Round 3 | 2,899 | 1,329 | 921 | 649 | 45.8 | |
Combined, 2015 | 14,077 | 12,214 | 2,144 | 2,615 | 86.8 | |
Round 4 | 13,068 | 10,572 | 1,127 | 1,371 | 80.9 | |
Round 5 | 2,262 | 1,001 | 891 | 370 | 44.3 | |
Combined, 2016 | 13,068 | 11,573 | 2,018 | 1,741 | 88.6 | |
Panel 21 | Round 2 | 13,143 | 10,212 | 1,170 | 1,761 | 77.7 |
Round 3 | 2,585 | 1,123 | 893 | 569 | 43.4 | |
Combined, 2016 | 13,143 | 11,335 | 2,063 | 2,330 | 86.2 | |
Round 4 | 12,021 | 9,966 | 1,149 | 906 | 82.9 | |
Round 5 | 2,078 | 834 | 884 | 360 | 40.1 | |
Combined, 2017 | 12,021 | 10,800 | 2,033 | 1,266 | 89.8 | |
Panel 22 | Round 2 | 12,304 | 9,929 | 1,086 | 1,289 | 80.7 |
Round 3 | 2,287 | 840 | 749 | 698 | 36.7 | |
Combined, 2017 | 12,304 | 10,769 | 1,835 | 1,987 | 87.5 | |
Round 4 | 11,333 | 8,341 | 1,159 | 1,833 | 73.6 | |
Round 5 | 2,090 | 811 | 896 | 383 | 38.8 | |
Combined, 2018 | 11,333 | 9,152 | 2,055 | 2,216 | 80.8 | |
Panel 23 | Round 2 | 12,349 | 8,711 | 1,364 | 1,289 | 70.5 |
Round 3 | 2,364 | 819 | 907 | 638 | 34.6 | |
Combined, 2018 | 12,369 | 9,530 | 2,271 | 1,927 | 77.2 | |
Round 4 | 11,290 | 8,554 | 1,515 | 1,221 | 75.8 | |
Round 5 | 2,711 | 983 | 923 | 805 | 36.3 | |
Combined, 2019 | 11,290 | 9,537 | 2,438 | 2,026 | 84.5 | |
Round 6 | 8,537 | 4,732 | 682 | 3,123 | 55.4 | |
Round 7 | 3,229 | 1,123 | 707 | 1,399 | 34.8 | |
Combined, 2020 | 8,537 | 5,855 | 1,389 | 4,522 | 68.6 | |
Round 8 | 6,446 | 3,377 | 799 | 2,270 | 52.4 | |
Panel 24 | Round 2 | 12,027 | 8,726 | 1,641 | 1,660 | 72.6 |
Round 3 | 2,810 | 860 | 832 | 1,118 | 30.6 | |
Combined, 2019 | 12,027 | 9,586 | 2,473 | 2,778 | 79.7 | |
Round 4 | 9,257 | 4,247 | 786 | 4,224 | 45.9 | |
Round 5 | 4,224 | 1,476 | 838 | 1,910 | 34.9 | |
Combined, 2020 | 9,257 | 5,723 | 1,624 | 6,134 | 61.8 | |
Round 6 | 6,440 | 3,196 | 819 | 2,425 | 49.6 | |
Panel 25 | Round 2 | 8,109 | 3,555 | 529 | 4,025 | 43.8 |
Round 3 | 4,016 | 1,322 | 717 | 1,977 | 32.9 | |
Combined, 2020 | 8,109 | 4,877 | 1,246 | 6,002 | 60.1 | |
Round 4 | 6,089 | 3,309 | 850 | 1,930 | 54.3 | |
Panel 26 | Round 2 | 8,419 | 4,609 | 1,009 | 2,801 | 54.7 |
* Totals represent combined collection of the SAQ and the parent-administered questionnaire (PAQ).
Panel/round | DCSs requested | DCSs completed | Response rate (%) | |
---|---|---|---|---|
Panel 4 | Round 5 | 696 | 631 | 90.7 |
Panel 5 | Round 3 | 550 | 508 | 92.4 |
Round 5 | 570 | 500 | 87.7 | |
Panel 6 | Round 3 | 1,166 | 1,000 | 85.8 |
Round 5 | 1,202 | 1,166 | 97.0 | |
Panel 7 | Round 3 | 870 | 848 | 97.5 |
Round 5 | 869 | 820 | 94.4 | |
Panel 8 | Round 3 | 971 | 885 | 91.1 |
Round 5 | 977 | 894 | 91.5 | |
Panel 9 | Round 3 | 1,003 | 909 | 90.6 |
Round 5 | 904 | 806 | 89.2 | |
Panel 10 | Round 3 | 1,060 | 939 | 88.6 |
Round 5 | 1,078 | 965 | 89.5 | |
Panel 11 | Round 3 | 1,188 | 1,030 | 86.7 |
Round 5 | 1,182 | 1,053 | 89.1 | |
Panel 12 | Round 3 | 917 | 825 | 90.0 |
Round 5 | 883 | 815 | 92.3 | |
Panel 13 | Round 3 | 1,278 | 1,182 | 92.5 |
Round 5 | 1,278 | 1,154 | 90.3 | |
Panel 14 | Round 3 | 1,174 | 1,048 | 89.3 |
Round 5 | 1,177 | 1,066 | 90.6 | |
Panel 15 | Round 3 | 1,117 | 1,000 | 89.5 |
Round 5 | 1,097 | 990 | 90.3 | |
Panel 16 | Round 3 | 1,425 | 1,283 | 90.0 |
Round 5 | 1,358 | 1,256 | 92.5 | |
Panel 17 | Round 3 | 1,315 | 1,177 | 89.5 |
Round 5 | 1,308 | 1,174 | 89.8 | |
Panel 18 | Round 3 | 1,362 | 1,182 | 86.8 |
Round 5 | 1,342 | 1,187 | 88.5 | |
Panel 19 | Round 3 | 1,272 | 1,124 | 88.4 |
Round 5 | 1,316 | 1,144 | 87.2 | |
Panel 20 | Round 3 | 1,412 | 1,190 | 84.5 |
Round 5 | 1,386 | 1,174 | 84.9 | |
Panel 21 | Round 3 | 1,422 | 1,170 | 82.5 |
Round 5 | 1,481 | 1,177 | 81.0 | |
Panel 22 | Round 3 | 1,453 | 1,074 | 73.9 |
Round 5 | 1,348 | 1,018 | 75.5 | |
Panel 23 | Round 3 | 1,464 | 1,101 | 75.2 |
Round 5 | 1,350 | 933 | 69.1 | |
Round 7 | 1,018 | 648 | 63.7 | |
Panel 24 | Round 3 | 1,350 | 843 | 62.4 |
Round 5 | 1,082 | 599 | 55.4 | |
Panel 25 | Round 3 | 963 | 514 | 53.4 |
Tables represent combined DCS/proxy DCS collection.
Pharmacy | Total number | Total received | Percent received | Total complete | Completes as a percent of total |
---|---|---|---|---|---|
2019 - P22R5 all mail collection |||||
Total RUs | 921 | 173 | 18.8% | 125 | 13.6% |
Total Pairs | 1,387 | 199 | 14.3% | 183 | 13.2% |
2018 - P21R5 all mail collection |||||
Total RUs | 2,920 | 417 | 20.7% | 316 | 15.6% |
Total Pairs | 4,116 | 486 | 16.6% | 425 | 14.5% |
2017 - P20R5 all mail collection |||||
Total RUs | 1,953 | 342 | 17.5% | 254 | 13.0% |
Total Pairs | 2,723 | 372 | 13.7% | 326 | 12.0% |
2016 - P19R5 all mail collection |||||
Total RUs | 2,038 | 374 | 18.4% | 285 | 14.0% |
Total Pairs | 2,854 | 430 | 15.1% | 394 | 13.8% |
2015 - P18R5 all mail collection |||||
Total RUs | 1,404 | 260 | 18.5% | 186 | 13.2% |
Total Pairs | 2,042 | 289 | 14.2% | 255 | 12.5% |
2014 - P17R5 all mail collection |||||
Total RUs | 2,230 | 372 | 16.7% | 269 | 12.1% |
Total Pairs | 3,233 | 443 | 13.7% | 386 | 11.9% |
2013 - P16R5 all mail collection |||||
Total RUs | 2,014 | 417 | 20.7% | 316 | 15.6% |
Total Pairs | 2,911 | 486 | 16.6% | 425 | 14.5% |
2012 - P15R5 all mail collection |||||
Total RUs | 1,390 | 290 | 20.8% | 203 | 14.6% |
Total Pairs | 1,990 | 348 | 17.4% | 290 | 14.5% |
Reason for call | Spring 2000 (Panel 5 Round 1, Panel 4 Round 3, Panel 3 Round 5) | Fall 2000 (Panel 5 Round 2, Panel 4 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address change | 23 | 4.0 | 13 | 8.3 | 8 | 5.7 |
Appointment | 37 | 6.5 | 26 | 16.7 | 28 | 19.9 |
Request callback | 146 | 25.7 | 58 | 37.2 | 69 | 48.9 |
Refusal | 183 | 32.2 | 20 | 12.8 | 12 | 8.5 |
Willing to participate | 10 | 1.8 | 2 | 1.3 | 0 | 0.0 |
Other | 157 | 27.6 | 35 | 22.4 | 8 | 5.7 |
Report a respondent deceased | 5 | 0.9 | 1 | 0.6 | 0 | 0.0 |
Request a Spanish-speaking interview | 8 | 1.4 | 1 | 0.6 | 0 | 0.0 |
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 16 | 11.3 |
Total | 569 | 156 | 141 |
Reason for call | Spring 2001 (Panel 6 Round 1, Panel 5 Round 3, Panel 4 Round 5) | Fall 2001 (Panel 6 Round 2, Panel 5 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 27 | 3.7 | 17 | 12.7 | 56 | 15.7 |
Appointment | 119 | 16.2 | 56 | 41.8 | 134 | 37.5 |
Request callback | 259 | 35.3 | 36 | 26.9 | 92 | 25.8 |
No message | 8 | 1.1 | 3 | 2.2 | 0 | 0.0 |
Other | 29 | 4.0 | 7 | 5.2 | 31 | 8.7 |
Request SAQ help | 0 | 0.0 | 2 | 1.5 | 10 | 2.8 |
Special needs | 5 | 0.7 | 3 | 2.2 | 0 | 0.0 |
Refusal | 278 | 37.9 | 10 | 7.5 | 25 | 7.0 |
Willing to participate | 8 | 1.1 | 0 | 0.0 | 9 | 2.5 |
Total | 733 | 134 | 357 |
Reasons for call, Spring 2002 (Panel 7 Round 1, Panel 6 Round 3, Panel 5 Round 5) and Fall 2002 (Panel 7 Round 2, Panel 6 Round 4)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2 and 4 (N) | Rounds 2 and 4 (%) |
---|---|---|---|---|---|---|
Address/telephone change | 28 | 4.5 | 29 | 13.9 | 66 | 16.7 |
Appointment | 77 | 12.5 | 71 | 34.1 | 147 | 37.1 |
Request callback | 210 | 34.0 | 69 | 33.2 | 99 | 25.0 |
No message | 6 | 1.0 | 3 | 1.4 | 5 | 1.3 |
Other | 41 | 6.6 | 17 | 8.2 | 10 | 2.5 |
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 30 | 7.6 |
Special needs | 1 | 0.2 | 0 | 0.0 | 3 | 0.8 |
Refusal | 232 | 37.6 | 14 | 6.7 | 29 | 7.3 |
Willing to participate | 22 | 3.6 | 5 | 2.4 | 7 | 1.8 |
Total | 617 | | 208 | | 396 | |
Reasons for call, Spring 2003 (Panel 8 Round 1, Panel 7 Round 3, Panel 6 Round 5) and Fall 2003 (Panel 8 Round 2, Panel 7 Round 4)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2 and 4 (N) | Rounds 2 and 4 (%) |
---|---|---|---|---|---|---|
Address/telephone change | 20 | 4.2 | 33 | 13.7 | 42 | 17.9 |
Appointment | 83 | 17.5 | 87 | 36.1 | 79 | 33.8 |
Request callback | 165 | 34.9 | 100 | 41.5 | 97 | 41.5 |
No message | 16 | 3.4 | 7 | 2.9 | 6 | 2.6 |
Other | 9 | 1.9 | 8 | 3.3 | 3 | 1.3 |
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 1 | 0.4 |
Special needs | 5 | 1.1 | 0 | 0.0 | 0 | 0.0 |
Refusal | 158 | 33.4 | 6 | 2.5 | 6 | 2.6 |
Willing to participate | 17 | 3.6 | 0 | 0.0 | 0 | 0.0 |
Total | 473 | | 241 | | 234 | |
Reasons for call, Spring 2004 (Panel 9 Round 1, Panel 8 Round 3, Panel 7 Round 5) and Fall 2004 (Panel 9 Round 2, Panel 8 Round 4)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2 and 4 (N) | Rounds 2 and 4 (%) |
---|---|---|---|---|---|---|
Address/telephone change | 8 | 1.6 | 26 | 13.2 | 42 | 10.9 |
Appointment | 67 | 13.3 | 76 | 38.6 | 153 | 39.7 |
Request callback | 158 | 31.5 | 77 | 39.1 | 139 | 36.1 |
No message | 9 | 1.8 | 5 | 2.5 | 16 | 4.2 |
Other | 8 | 1.6 | 5 | 2.5 | 5 | 1.3 |
Proxy needed | 5 | 1.0 | 2 | 1.0 | 0 | 0.0 |
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 2 | 0.5 |
Special needs | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Refusal | 228 | 45.4 | 6 | 3.0 | 27 | 7.0 |
Willing to participate | 19 | 3.8 | 0 | 0.0 | 1 | 0.3 |
Total | 502 | | 197 | | 385 | |
Reasons for call, Spring 2005 (Panel 10 Round 1, Panel 9 Round 3, Panel 8 Round 5) and Fall 2005 (Panel 10 Round 2, Panel 9 Round 4)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2 and 4 (N) | Rounds 2 and 4 (%) |
---|---|---|---|---|---|---|
Address/telephone change | 16 | 3.3 | 23 | 8.7 | 27 | 6.8 |
Appointment | 77 | 15.7 | 117 | 44.3 | 177 | 44.4 |
Request callback | 154 | 31.4 | 88 | 33.3 | 126 | 31.6 |
No message | 14 | 2.9 | 11 | 4.2 | 28 | 7.0 |
Other | 13 | 2.7 | 1 | 0.4 | 8 | 2.0 |
Proxy needed | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 1 | 0.3 |
Special needs | 1 | 0.2 | 1 | 0.4 | 0 | 0.0 |
Refusal | 195 | 39.8 | 20 | 7.6 | 30 | 7.5 |
Willing to participate | 20 | 4.1 | 3 | 1.1 | 2 | 0.5 |
Total | 490 | | 264 | | 399 | |
Reasons for call, Spring 2006 (Panel 11 Round 1, Panel 10 Round 3, Panel 9 Round 5) and Fall 2006 (Panel 11 Round 2, Panel 10 Round 4)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2 and 4 (N) | Rounds 2 and 4 (%) |
---|---|---|---|---|---|---|
Address/telephone change | 7 | 1.3 | 24 | 7.5 | 11 | 4.1 |
Appointment | 61 | 11.3 | 124 | 39.0 | 103 | 38.1 |
Request callback | 146 | 27.1 | 96 | 30.2 | 101 | 37.4 |
No message | 72 | 13.4 | 46 | 14.5 | 21 | 7.8 |
Other | 16 | 3.0 | 12 | 3.8 | 8 | 3.0 |
Proxy needed | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 4 | 0.7 | 0 | 0.0 | 0 | 0.0 |
Refusal | 216 | 40.1 | 15 | 4.7 | 26 | 9.6 |
Willing to participate | 17 | 3.2 | 1 | 0.3 | 0 | 0.0 |
Total | 539 | | 318 | | 270 | |
Reasons for call, Spring 2007 (Panel 12 Round 1, Panel 11 Round 3, Panel 10 Round 5) and Fall 2007 (Panel 12 Round 2, Panel 11 Round 4)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2 and 4 (N) | Rounds 2 and 4 (%) |
---|---|---|---|---|---|---|
Address/telephone change | 8 | 2.1 | 21 | 7.3 | 23 | 7.6 |
Appointment | 56 | 14.6 | 129 | 44.8 | 129 | 42.6 |
Request callback | 72 | 18.8 | 75 | 26.0 | 88 | 29.0 |
No message | 56 | 14.6 | 37 | 12.8 | 33 | 10.9 |
Other | 20 | 5.2 | 15 | 5.2 | 6 | 2.0 |
Proxy needed | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 5 | 1.3 | 0 | 0.0 | 1 | 0.3 |
Refusal | 160 | 41.8 | 10 | 3.5 | 21 | 6.9 |
Willing to participate | 6 | 1.6 | 1 | 0.3 | 2 | 0.7 |
Total | 383 | | 288 | | 303 | |
Reasons for call, Spring 2008 (Panel 13 Round 1, Panel 12 Round 3, Panel 11 Round 5) and Fall 2008 (Panel 13 Round 2, Panel 12 Round 4)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2 and 4 (N) | Rounds 2 and 4 (%) |
---|---|---|---|---|---|---|
Address/telephone change | 20 | 3.4 | 12 | 4.7 | 21 | 5.7 |
Appointment | 92 | 15.5 | 117 | 45.9 | 148 | 39.9 |
Request callback | 164 | 27.6 | 81 | 31.8 | 154 | 41.5 |
No message | 82 | 13.8 | 20 | 7.8 | 22 | 5.9 |
Other | 13 | 2.2 | 12 | 4.7 | 8 | 2.2 |
Proxy needed | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 4 | 0.7 | 0 | 0.0 | 0 | 0.0 |
Refusal | 196 | 32.9 | 13 | 5.1 | 18 | 4.9 |
Willing to participate | 24 | 4.0 | 0 | 0.0 | 0 | 0.0 |
Total | 595 | | 255 | | 371 | |
Reasons for call, Spring 2009 (Panel 14 Round 1, Panel 13 Round 3, Panel 12 Round 5) and Fall 2009 (Panel 14 Round 2, Panel 13 Round 4)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2 and 4 (N) | Rounds 2 and 4 (%) |
---|---|---|---|---|---|---|
Address/telephone change | 10 | 2.2 | 13 | 4.3 | 19 | 5.1 |
Appointment | 49 | 10.8 | 87 | 29.0 | 153 | 41.1 |
Request callback | 156 | 34.4 | 157 | 52.3 | 153 | 41.1 |
No message | 48 | 10.6 | 23 | 7.7 | 20 | 5.4 |
Other | 3 | 0.7 | 8 | 2.7 | 3 | 0.8 |
Proxy needed | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 4 | 0.9 | 0 | 0.0 | 0 | 0.0 |
Refusal | 183 | 40.3 | 11 | 3.7 | 24 | 6.5 |
Willing to participate | 1 | 0.2 | 1 | 0.3 | 0 | 0.0 |
Total | 454 | | 300 | | 372 | |
Reasons for call, Spring 2010 (Panel 15 Round 1, Panel 14 Round 3, Panel 13 Round 5) and Fall 2010 (Panel 15 Round 2, Panel 14 Round 4)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2 and 4 (N) | Rounds 2 and 4 (%) |
---|---|---|---|---|---|---|
Address/telephone change | 2 | 0.8 | 42 | 8.2 | 25 | 5.3 |
Appointment | 44 | 18.0 | 214 | 41.6 | 309 | 66.0 |
Request callback | 87 | 35.7 | 196 | 38.1 | 46 | 9.8 |
No message | 17 | 7.0 | 33 | 6.4 | 17 | 3.6 |
Other | 7 | 2.9 | 8 | 1.6 | 14 | 3.0 |
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 12 | 2.6 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 1 | 0.2 |
Special needs | 1 | 0.4 | 1 | 0.2 | 1 | 0.2 |
Refusal | 86 | 35.2 | 20 | 3.9 | 43 | 9.2 |
Willing to participate | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Total | 244 | | 514 | | 468 | |
Reasons for call, Spring 2011 (Panel 16 Round 1, Panel 15 Round 3, Panel 14 Round 5) and Fall 2011 (Panel 16 Round 2, Panel 15 Round 4)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2 and 4 (N) | Rounds 2 and 4 (%) |
---|---|---|---|---|---|---|
Address/telephone change | 16 | 3.4 | 46 | 8.0 | 72 | 9.8 |
Appointment | 175 | 37.6 | 407 | 71.0 | 466 | 63.5 |
Request callback | 81 | 17.4 | 63 | 11.0 | 69 | 9.4 |
No message | 24 | 5.2 | 26 | 4.5 | 23 | 3.1 |
Other | 12 | 2.6 | 8 | 1.4 | 25 | 3.4 |
Request SAQ help | 1 | 0.2 | 2 | 0.3 | 32 | 4.4 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 46 | 6.3 |
Special needs | 0 | 0.0 | 0 | 0.0 | 1 | 0.1 |
Refusal | 157 | 33.7 | 21 | 3.7 | 0 | 0.0 |
Willing to participate | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Total | 466 | | 573 | | 734 | |
Reasons for call, Spring 2012 (Panel 17 Round 1, Panel 16 Round 3, Panel 15 Round 5) and Fall 2012 (Panel 17 Round 2, Panel 16 Round 4)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2 and 4 (N) | Rounds 2 and 4 (%) |
---|---|---|---|---|---|---|
Address/telephone change | 18 | 5.0 | 107 | 13.4 | 108 | 12.2 |
Appointment | 130 | 36.1 | 517 | 64.9 | 584 | 65.8 |
Request callback | 60 | 16.7 | 94 | 11.8 | 57 | 6.4 |
No message | 21 | 5.8 | 17 | 2.1 | 18 | 2.0 |
Other | 10 | 2.8 | 25 | 3.1 | 16 | 1.8 |
Proxy needed | 0 | 0.0 | 1 | 0.1 | 2 | 0.2 |
Request SAQ help | 2 | 0.6 | 6 | 0.8 | 42 | 4.7 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 1 | 0.3 | 0 | 0.0 | 0 | 0.0 |
Refusal | 117 | 32.5 | 30 | 3.8 | 60 | 6.8 |
Willing to participate | 1 | 0.3 | 0 | 0.0 | 0 | 0.0 |
Total | 360 | | 797 | | 887 | |
Reasons for call, Spring 2013 (Panel 18 Round 1, Panel 17 Round 3, Panel 16 Round 5) and Fall 2013 (Panel 18 Round 2, Panel 17 Round 4)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2 and 4 (N) | Rounds 2 and 4 (%) |
---|---|---|---|---|---|---|
Address/telephone change | 18 | 4.4 | 82 | 10.8 | 53 | 9.0 |
Appointment | 143 | 35.0 | 558 | 73.0 | 370 | 62.6 |
Request callback | 71 | 17.4 | 88 | 11.5 | 70 | 11.8 |
No message | 8 | 2.0 | 11 | 1.4 | 16 | 2.8 |
Other | 2 | 0.5 | 4 | 0.5 | 5 | 0.9 |
Proxy needed | 1 | 0.2 | 1 | 0.1 | 1 | 0.2 |
Request SAQ help | 1 | 0.2 | 0 | 0.0 | 31 | 5.3 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 2 | 0.5 | 0 | 0.0 | 2 | 0.3 |
Refusal | 162 | 39.5 | 19 | 2.5 | 43 | 7.3 |
Willing to participate | 1 | 0.2 | 1 | 0.1 | 0 | 0.0 |
Total | 409 | | 764 | | 591 | |
Reasons for call, Spring 2014 (Panel 19 Round 1, Panel 18 Round 3, Panel 17 Round 5) and Fall 2014 (Panel 19 Round 2, Panel 18 Round 4)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2 and 4 (N) | Rounds 2 and 4 (%) |
---|---|---|---|---|---|---|
Address/telephone change | 11 | 3.2 | 71 | 11.1 | 62 | 8.4 |
Appointment | 75 | 22.1 | 393 | 61.5 | 490 | 66.5 |
Request callback | 70 | 20.6 | 113 | 17.7 | 70 | 9.5 |
No message | 11 | 3.2 | 12 | 1.9 | 28 | 3.9 |
Other | 0 | 0.0 | 5 | 0.8 | 7 | 0.9 |
Proxy needed | 0 | 0.0 | 0 | 0.0 | 1 | 0.1 |
Request SAQ help | 0 | 0.0 | 1 | 0.2 | 4 | 0.5 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Refusal | 165 | 48.5 | 44 | 6.9 | 74 | 10.0 |
Willing to participate | 8 | 2.4 | 0 | 0.0 | 1 | 0.1 |
Total | 340 | | 639 | | 737 | |
Reasons for call, Spring 2015 (Panel 20 Round 1, Panel 19 Round 3, Panel 18 Round 5) and Fall 2015 (Panel 20 Round 2, Panel 19 Round 4)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2 and 4 (N) | Rounds 2 and 4 (%) |
---|---|---|---|---|---|---|
Address/telephone change | 10 | 2.3 | 61 | 8.8 | 55 | 9.6 |
Appointment | 95 | 21.8 | 438 | 63.4 | 346 | 60.7 |
Request callback | 85 | 19.5 | 112 | 16.2 | 52 | 9.1 |
No message | 14 | 3.2 | 17 | 2.5 | 4 | 0.7 |
Other | 2 | 0.5 | 3 | 0.4 | 3 | 0.5 |
Proxy needed | 1 | 0.2 | 7 | 1.0 | 8 | 1.4 |
Request SAQ help | 1 | 0.2 | 3 | 0.4 | 11 | 1.9 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Refusal | 206 | 47.2 | 47 | 6.8 | 91 | 16.0 |
Willing to participate | 22 | 5.0 | 3 | 0.4 | 0 | 0.0 |
Total | 436 | | 691 | | 570 | |
Reasons for call, Spring 2016 (Panel 21 Round 1, Panel 20 Round 3, Panel 19 Round 5) and Fall 2016 (Panel 21 Round 2, Panel 20 Round 4)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2 and 4 (N) | Rounds 2 and 4 (%) |
---|---|---|---|---|---|---|
Address/telephone change | 8 | 2.7 | 64 | 11.7 | 48 | 7.9 |
Appointment | 93 | 30.9 | 362 | 66.2 | 373 | 61.7 |
Request callback | 47 | 15.6 | 59 | 10.8 | 83 | 13.7 |
No message | 1 | 0.3 | 7 | 1.3 | 6 | 1.0 |
Other | 2 | 0.7 | 1 | 0.2 | 3 | 0.5 |
Proxy needed | 0 | 0.0 | 5 | 0.9 | 6 | 1.0 |
Request SAQ help | 0 | 0.0 | 3 | 0.5 | 11 | 1.8 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 1 | 0.3 | 0 | 0.0 | 0 | 0.0 |
Refusal | 139 | 46.2 | 46 | 8.4 | 75 | 12.4 |
Willing to participate | 10 | 3.3 | 0 | 0.0 | 0 | 0.0 |
Total | 301 | | 547 | | 605 | |
Reasons for call, Spring 2017 (Panel 22 Round 1, Panel 21 Round 3, Panel 20 Round 5) and Fall 2017 (Panel 22 Round 2, Panel 21 Round 4)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2 and 4 (N) | Rounds 2 and 4 (%) |
---|---|---|---|---|---|---|
Address/telephone change | 10 | 2.9 | 51 | 9.6 | 35 | 6.8 |
Appointment | 86 | 24.9 | 355 | 66.6 | 318 | 61.4 |
Request callback | 59 | 17.1 | 90 | 16.9 | 64 | 12.4 |
No message | 1 | 0.3 | 2 | 0.4 | 5 | 1.0 |
Other | 2 | 0.6 | 3 | 0.6 | 4 | 0.8 |
Proxy needed | 1 | 0.3 | 7 | 1.3 | 5 | 1.0 |
Request SAQ help | 1 | 0.3 | 0 | 0.0 | 15 | 2.9 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 0 | 0.0 | 1 | 0.2 | 1 | 0.2 |
Refusal | 172 | 49.7 | 23 | 4.3 | 70 | 13.5 |
Willing to participate | 14 | 4.0 | 1 | 0.2 | 1 | 0.2 |
Total | 346 | | 533 | | 518 | |
Reasons for call, Spring 2018 (Panel 23 Round 1, Panel 22 Round 3, Panel 21 Round 5) and Fall 2018 (Panel 23 Round 2, Panel 22 Round 4)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2 and 4 (N) | Rounds 2 and 4 (%) |
---|---|---|---|---|---|---|
Address/telephone change | 5 | 1.3 | 37 | 7.9 | 38 | 7.3 |
Appointment | 59 | 15.4 | 318 | 68.1 | 335 | 63.9 |
Request callback | 50 | 13.1 | 50 | 10.7 | 60 | 11.5 |
No message | 4 | 1.0 | 5 | 1.1 | 1 | 0.2 |
Other | 0 | 0.0 | 1 | 0.2 | 3 | 0.6 |
Proxy needed | 2 | 0.5 | 4 | 0.9 | 6 | 1.1 |
Request SAQ help | 0 | 0.0 | 1 | 0.2 | 15 | 2.9 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 1 | 0.3 | 0 | 0.0 | 0 | 0.0 |
Refusal | 211 | 55.1 | 46 | 9.9 | 61 | 11.6 |
Willing to participate | 51 | 13.3 | 5 | 1.1 | 5 | 1.0 |
Total | 383 | | 467 | | 524 | |
Reasons for call, Spring 2019 (Panel 24 Round 1, Panel 23 Round 3, Panel 22 Round 5) and Fall 2019 (Panel 24 Round 2, Panel 23 Round 4)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2 and 4 (N) | Rounds 2 and 4 (%) |
---|---|---|---|---|---|---|
Address/telephone change | 5 | 1.5 | 36 | 7.4 | 30 | 5.6 |
Appointment | 59 | 17.2 | 328 | 67.5 | 344 | 64.8 |
Request callback | 39 | 11.4 | 56 | 11.5 | 56 | 10.5 |
No message | 2 | 0.6 | 4 | 0.8 | 7 | 1.3 |
Other | 2 | 0.6 | 4 | 0.8 | 0 | 0.0 |
Proxy needed | 2 | 0.6 | 6 | 1.2 | 11 | 2.1 |
Request SAQ help | 0 | 0.0 | 2 | 0.4 | 5 | 0.9 |
SAQ refusal | 0 | 0.0 | 48 | 9.9 | 0 | 0.0 |
Special needs | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Refusal | 185 | 53.9 | 0 | 0.0 | 78 | 14.7 |
Willing to participate | 49 | 14.3 | 2 | 0.4 | 0 | 0.0 |
Total | 353 | | 486 | | 531 | |
Reasons for call, Spring 2020 (Panel 25 Round 1, Panel 24 Round 3, Panel 23 Round 5) and Fall 2020 (Panel 25 Round 2, Panel 24 Round 4, Panel 23 Round 6)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2, 4, and 6 (N) | Rounds 2, 4, and 6 (%) |
---|---|---|---|---|---|---|
Address/telephone change | 5 | 0.9 | 37 | 6.3 | 28 | 2.4 |
Appointment | 142 | 24.2 | 332 | 56.1 | 278 | 23.9 |
Request callback | 102 | 17.4 | 121 | 20.4 | 276 | 23.7 |
No message | 22 | 3.8 | 18 | 3.0 | 60 | 5.2 |
Other | 2 | 0.3 | 5 | 0.8 | 5 | 0.4 |
Proxy needed | 6 | 1.0 | 3 | 0.5 | 10 | 0.9 |
Request SAQ help | 0 | 0.0 | 1 | 0.2 | 35 | 3.0 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 1 | 0.1 |
Special needs | 0 | 0.0 | 0 | 0.0 | 1 | 0.1 |
Refusal | 209 | 35.7 | 62 | 10.5 | 203 | 17.5 |
Willing to participate | 98 | 16.7 | 13 | 2.2 | 266 | 22.9 |
Total | 586 | | 592 | | 1,163 | |
Reasons for call, Spring 2021 (Panel 26 Round 1, Panel 25 Round 3, Panel 24 Round 5, Panel 23 Round 7) and Fall 2021 (Panel 26 Round 2, Panel 25 Round 4, Panel 24 Round 6, Panel 23 Round 8)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3, 5, and 7 (N) | Rounds 3, 5, and 7 (%) | Rounds 2, 4, 6, and 8 (N) | Rounds 2, 4, 6, and 8 (%) |
---|---|---|---|---|---|---|
Address/telephone change | 2 | 0.6 | 19 | 3.4 | 59 | 7.0 |
Appointment | 27 | 8.1 | 76 | 13.7 | 233 | 27.5 |
Request callback | 101 | 30.1 | 240 | 43.2 | 287 | 33.8 |
No message | 34 | 10.1 | 21 | 3.8 | 41 | 4.8 |
Other | 8 | 2.4 | 48 | 8.6 | 8 | 0.9 |
Proxy needed | 0 | 0.0 | 7 | 1.3 | 13 | 1.5 |
Request SAQ help | 3 | 0.9 | 17 | 3.1 | 15 | 1.8 |
SAQ refusal | 0 | 0.0 | 1 | 0.2 | 0 | 0.0 |
Special needs | 0 | 0.0 | 2 | 0.4 | 1 | 0.1 |
Refusal | 87 | 26.0 | 87 | 15.7 | 176 | 20.8 |
Willing to participate | 73 | 21.8 | 37 | 6.7 | 15 | 1.8 |
Total | 335 | | 555 | | 848 | |
Date | Description |
---|---|
1/4/2021 | DOCM0692.01: Delivery of the 2021 NPI Provider Directory from the Panel 26 MEPS Laptop |
1/4/2021 | PCND016101: 2019 Person-Level Priority Conditions Cross-Tabulations |
1/4/2021 | PRPL0151.01: Output from 2019 PRPL Program #1 |
1/4/2021 | UEGN 2841.01: 2019 Specifications to apply expenditure allocation scheme to a provider-reported lump sum payment |
1/4/2021 | UEGN 2815.01: 2019 Specs for initializing MPSAMTs |
1/4/2021 | UEGN3591.02: Deliver to AHRQ for approval variable lists for the PUF non-MPC (DN, OM, and HH) Expenditure Event files (Completed 01/15/21) |
1/5/2021 | UEGN3595.01: The 2019 Utilization Standard Error Benchmarking Tables Using Person Use PUF Weights - PERWT19P |
1/6/2021 | GNRL4026.02 and GNRL4027.02: Delivery of End-Of-Round files (Person-Level and RU Level) -P25R1 Spring & Fall |
1/6/2021 | GNRL3042.01: List of CAPI Supplemental Sections and Round-Specific Forms |
1/6/2021 | UEGN 2818.01: 2019 Specifications for Last Step Edits |
1/8/2021 | ADMN0919.01: Delivery of 2019 FAMID Variables and CPS Family Identifier |
1/8/2021 | GNRL4026.03 and GNRL4026.04: Delivery of End-Of-Round Person-Level Files - P24R4 & P25R2 |
1/8/2021 | GNRL4027.03 and GNRL4027.04: Delivery of End-Of-Round RU-Level Files - P24R4 & P25R2 |
1/8/2021 | GNRL4028.02: Delivery of the EVNT, PMED, EPCP, EPRS and HOME Tables for Fall 2020 |
1/11/2021 | DOCM0689.02: Delivery of the 2020 MPC files for Sample selection - Wave 1 |
1/11/2021 | DOCM0690.02: Delivery of the 2020 PC Sample file - Wave 1 |
1/11/2021 | DOCM0691.02: Delivery of the 2020 Provider file for NPI coding - Wave 1 |
1/11/2021 | UEGN3596.01: Westat’s Comments on the RTI’s FY2019 MPC Test Files |
1/12/2021 | GNRL3043.01: NCHS Checklist and FY 2019 Use PUF Preliminary Delivery Document |
1/12/2021 | GNRL3044.01: NCHS Checklist and Preliminary Version of the 2019 JOBS File Delivery Document for Review |
1/12/2021 | PRPL0149.01: Findings from Test Runs of Revised 2019 Program 3a |
1/13/2021 | GNRL4024.01: FY 2019 (Panel 23 and Panel 24) Snapshots of HC Source Tables Including the CONDX, JOBSX, SAQ, and DCS Tables |
1/13/2021 | UEGN 2818.03: 2019 Specifications for Last Step Edits |
1/14/2021 | PRPL0149.07: Findings from Test Runs of Revised 2019 Program 3a |
1/15/2021 | DEMO1017.02: Delivery of the Output Listings for Final Case Review of the MOPID and DAPID Variables’ Construction for FY2019 |
1/15/2021 | PRPL0152.01: FY19 PRPL Specifications for the OOPELIG, Imputation, and Final file creation programs |
1/15/2021 | UEPD1216.05: 2019 INSURC19 variable for use in the Prescribed Medicines Imputation |
1/15/2021 | UEGN2818.08: 2019 Specifications for Last Step Edits |
1/18/2021 | UEGN2813.01: 2019 Specs for Mom-baby Linking; and UEGN 2814.01 2019 Post-edit Rollup Specs |
1/18/2021 | UEGN2819.01 2019 Specifications for household discount adjustment |
1/18/2021 | UEGN2821.01: 2019 Specs for preparing SBDs for editing (PREPSBD) |
1/18/2021 | UEGN2822.01: 2019 Specs for attaching SBDs to MPC events (SBDATTCH) |
1/19/2021 | PRPL0149.09: Findings from Test Runs of Revised 2019 Program 3a |
1/20/2021 | GNRL3045.01: Preliminary Version of the 2019 JOBS File Codebook and Delivery Document for AHRQ and NCHS Review |
1/20/2021 | GNRL3046.01: Preliminary Versions of the Codebook and Delivery Document of the FY 2019 Use PUF for Use in AHRQ and NCHS Review |
1/20/2021 | PRPL0149.13: Findings from Test Runs of Revised 2019 Program 3a |
1/21/2021 | PRPL0149.24: Findings from Test Runs of Revised 2019 Program 3a |
1/22/2021 | PRPL0149.27: Findings from Test Runs of Revised 2019 Program 3a |
1/25/2021 | CODE0922.02: Redelivery of NAICS and SOCS Files with Variable DUPERSID Added |
1/26/2021 | GNRL3045.02: Final Version of the 2019 JOBS File Codebook and Delivery Document for AHRQ and NCHS Review |
1/26/2021 | GNRL3046.02: Final Versions of the Codebook and Delivery Document of the FY 2019 Use PUF for Use in AHRQ and NCHS Review |
1/28/2021 | GNRL3046.03: Final Versions of the Updated Codebook and Delivery Document of the FY 2019 Use PUF for Use in AHRQ and NCHS Review |
1/29/2021 | UEGN3598.01: The FY2020 Design Change Memo for the UEGN Group |
2/1/2021 | ADMN0920.01: FY20 Design changes for ADMN/DEMO |
2/1/2021 | EMPL2236.01: Summary of the MEPS Household Component CAPI for FY2020 (P23 R5-7, P24 R3-5, P25 R1-3) and Potential Effect on 2020 Data Delivery Content – EMPLOYMENT |
2/2/2021 | PRPL0153.01: Output and Frequencies from 2019 PRPL Program #3a for P2419 |
2/2/2021 | WGTS5024.01: Delivery of MSA variables for FY13 |
2/3/2021 | ADMN0921.01: FY20 Basic Edit Specs |
2/3/2021 | DEMO1017.03: Delivery of the MOPID and DAPID Variables for FY2019 |
2/4/2021 | PRPL0152.04: FY19 PRPL Specifications for the OOPELIG, Imputation, and Final file creation programs |
2/4/2021 | PRPL0154.01: Output and Frequencies from 2019 PRPL Program #3a for P2319 |
2/5/2021 | GNRL4031.01: Delivery of the NHIS Nonresponse Pilot Data |
2/8/2021 | UEGN3598.02: The FY2020 Design Change Memo for the UEGN Group |
2/8/2021 | UEGN3594.02: Deliver to AHRQ for approval variable list for the PUF MPC (OP, ER, OB and IP) Expenditure Event files (Completed 02/22/21) |
2/9/2021 | UEGN 2858.01: 2019 MPC provider-reported low charge events |
2/10/2021 | UEGN 2858.09: 2019 MPC provider-reported low charge events |
2/10/2021 | UEGN 2858.12: 2019 MPC provider-reported low charge events |
2/11/2021 | PRPL0153.10: Output and Frequencies from 2019 PRPL Program #3a for P2419 |
2/11/2021 | PRPL0153.11: Output and Frequencies from 2019 PRPL Program #3a for P2419 |
2/11/2021 | WGTS2010.01: Panel 24 Full Year 2019: Derivation of Eligibility and Response Indicators for the CPS-like Families |
2/12/2021 | GNRL3047.01: HC-212: Delivery of the Full Year 2019 Use PUF for Web Release |
2/12/2021 | PRPL0153.17: Output and Frequencies from 2019 PRPL Program #3a for P2419 |
2/15/2021 | DOCM0689.03: Revoking AF authorizations from the wave 1 of 2020 sample delivery |
2/16/2021 | GNRL3048.01: HC-211: 2019 Jobs Public Use File Delivery for Web Release |
2/16/2021 | HLTH1055.01: Full-Year 2020 HLTH Basic Edit Specifications |
2/17/2021 | CODE0928.01: PRODDESC Inconsistencies in the MDDB Master File |
2/18/2021 | ACCS0193.01: Summary of the MEPS Household Component CAPI for FY2021 and Potential Effect on 2021 Data Delivery Content – ACCESS TO CARE |
2/18/2021 | PRPL0155.01: Official Re-Run of Program #3a – Both Panels |
2/25/2021 | PRPL0157.01: Output and Frequencies from 2019 PRPL Program # 3b |
2/26/2021 | EMPL2237.01: Employment Person-Level Variable & Related Process Specifications for the Full Year 2020 Population Characteristics/Consolidated PUFs |
2/26/2021 | GNRL3049.01: FY 2019 Person-Level Consolidated PUF Variable List Changes for AHRQ Review |
2/26/2021 | UEGN 2859.01: 2019 MPC provider-reported payments from four or more sources |
2/26/2021 | WGTS5025.01: Delivery of preliminary version of the P25R1 final person weight |
3/1/2021 | PRPL0157.04: Output and Frequencies from 2019 PRPL Program # 3b |
3/2/2021 | UEGN2860.01: 2019 Benchmark Tables: Initial Delivery |
3/2/2021 | UEGN3599.01: The 2019 DN/HHP/OM/HHA Events Final Imputation Files |
3/3/2021 | PRPL0155.07: Official Re-Run of Program #3a – Both Panels |
3/3/2021 | PRPL0158.01: Output and Frequencies from 2019 PRPL Program #4 |
3/4/2021 | Memo UEPD1218.01 |
3/5/2021 | CODE0929.01: 2019 File of GEO Coded Addresses for the MEPS Master Files |
3/8/2021 | EMPL2238.01: Preload Insurance Status where Partial Coverage is Reported in Prior Round |
3/9/2021 | PRPL0159.01: FY2019 COVRUNOS = 91 & PHLDRCHNG = -5 Editing Decisions |
3/10/2021 | COND0989.01: FY 2019 Preliminary Conditions File Preview |
3/10/2021 | UEGN2860.02: 2019 Benchmark Tables: Second Delivery |
3/10/2021 | UEGN3599.02: The 2019 MVN Final Imputation File |
3/12/2021 | PRPL0158.04: Output and Frequencies from 2019 PRPL Program #4 |
3/15/2021 | DSDY0064.01: Delivery of the Specifications for the FY 2020 DSDY variables |
3/15/2021 | EMPL2237.08: Employment Person-Level Variable & Related Process Specifications for the Full Year 2020 Population Characteristics/Consolidated PUFs |
3/16/2021 | EMPL2237.11: Employment Person-Level Variable & Related Process Specifications for the Full Year 2020 Population Characteristics/Consolidated PUFs |
3/17/2021 | ACCS0194.01: 2020 ACCS and COVID Constructed Variable Specifications |
3/17/2021 | EMPL2237.14: Employment Person-Level Variable & Related Process Specifications for the Full Year 2020 Population Characteristics/Consolidated PUFs |
3/18/2021 | EMPL2237.18: memo comments |
3/19/2021 | EMPL2237.21: Employment Person-Level Variable & Related Process Specifications for the Full Year 2020 Population Characteristics/Consolidated PUFs |
3/25/2021 | WGTS2012.01: FY2019 combined Panels expenditure person weight review output |
3/29/2021 | DSDY0065.01: FY 2020 Disability Days Basic Edit Specifications |
3/30/2021 | PCND0162.01: 2020 PCND Constructed Variable Specifications |
4/4/2021 | ADMN0922.01: FY20 Constructed Variable Specs |
4/5/2021 | EMPL2239.01: Full Year 2020 Employment Source Variable Editing Specifications |
4/5/2021 | WGTS5026.01: Delivery of the FY 2019 Expenditure File Original Person Weight |
4/6/2021 | UEGN2860.03: 2019 Benchmark Tables: Third Delivery |
4/6/2021 | UEGN3599.03: The 2019 Final Imputation Files: ER, HS, MVE, OP and SBD |
4/7/2021 | CODE0931.01: FY2019 Census Files -- Uncoded and Coded |
4/8/2021 | DOCM0689.04: Delivery of the 2020 MPC files for Sample selection - Wave 2 |
4/8/2021 | DOCM0690.03: Delivery of the 2020 PC Sample file - Wave 2 |
4/8/2021 | DOCM0691.03: Delivery of the 2020 Provider file for NPI coding - Wave 2 |
4/8/2021 | EMPL2240.01: Delivery of 2019 Covered Person Records for Employment Variable Imputation |
4/8/2021 | PRPL0160.01: Delivery of the FY 2019 OOPELIG2 Dataset for Approval |
4/9/2021 | DSDY0065.03: FY 2020 Disability Days Basic Edit Specifications |
4/12/2021 | DSDY0065.04: FY 2020 Disability Days Basic Edit Specifications |
4/12/2021 | HLTH1057.06: Full-Year 2020 SDOH Basic Edit Specifications |
4/13/2021 | CODE0931.02: FY2019 Census Files -- Uncoded and Coded – Redelivery |
4/13/2021 | EMPL2241.01: Comments and “Other Specify” Values Regarding COVID-19 |
4/13/2021 | GNRL3050.01: NCHS Checklist and Preliminary Version of the 2019 Conditions File Delivery Document and Recode Materials for Review |
4/13/2021 | GNRL3052.01: NCHS Checklists and Preliminary Versions of Documents for the FY 2019 Non-MPC Event (DV, OM, and HH) PUFs |
4/13/2021 | GNRL4037.01: Delivery of the Single-Round Data Exchange (SRD) for Panel 22 Round 4 |
4/13/2021 | GNRL4038.01: Delivery of the Single-Round Data Exchange (SRD) for Panel 22 Round 5 |
4/13/2021 | HLTH1058.01: Full-Year 2020 HLTH Constructed Variable Specifications |
4/14/2021 | GNRL4036.01: AHRQ Confidentiality reports 2021 |
4/15/2021 | GNRL4039.01: Delivery of the Single-Round Data Exchange (SRD) for Panel 23 Round 1 |
4/15/2021 | GNRL4040.01: Delivery of the Single-Round Data Exchange (SRD) for Panel 23 Round 2 |
4/15/2021 | PCND0162.04: 2020 PCND Constructed Variable Specifications |
4/16/2021 | GNRL4035.01: Delivery of the File Containing Variables Recoded or Dropped from the USE PUF Due to DRB Review – P23/P24 |
4/16/2021 | GNRL4041.01: Delivery of the Single-Round Data Exchange (SRD) for Panel 23 Round 3 |
4/16/2021 | GNRL4042.01: Delivery of the Single-Round Data Exchange (SRD) for Panel 23 Round 4 |
4/19/2021 | UEGN3595.02: The 2019 Utilization Standard Error Benchmarking Tables Using the Person-level Poverty-Adjusted Weight- PERWT19F |
4/19/2021 | GNRL4043.01: Delivery of the Single-Round Data Exchange (SRD) for Panel 23 Round 5 |
4/19/2021 | GNRL4044.01: Delivery of the Single-Round Data Exchange (SRD) for Panel 24 Round 1 |
4/20/2021 | CODE0934.01: The updated Specifications for the FY 2020 GEO Coded Address File |
4/20/2021 | GNRL4045.01: Delivery of the Single-Round Data Exchange (SRD) for Panel 24 Round 2 |
4/20/2021 | GNRL4046.01: Delivery of the Single-Round Data Exchange (SRD) for Panel 24 Round 3 |
4/20/2021 | UEPD1220.01: Delivery of 2020 PMED Basic Edits Spec |
4/20/2021 | UEGN3600.01: The FY2020 UEGN Basic Edit Specifications - P23/P24/P25 |
4/21/2021 | GNRL3051.01: FY 2019 Preliminary Conditions File, Codebook, and Delivery Document |
4/21/2021 | GNRL3053.01: Preliminary Versions of the 2019 Non-MPC Event (DV, OM, and HH) PUF Codebooks and Final Documents for Use in AHRQ and NCHS Review |
4/22/2021 | GNRL4047.01: Delivery of the Single-Round Data Exchange (SRD) for Panel 23 Round 6 |
4/22/2021 | GNRL4048.01: Delivery of the Single-Round Data Exchange (SRD) for Panel 24 Round 4 |
4/22/2021 | PCND0162.06: 2020 PCND Constructed Variable Specifications |
4/22/2021 | PRPL0161.05: Delivery of the FY 2019 PRPL Hot Deck Imputation Results for Approval |
4/23/2021 | GNRL3054.01: 2019 Preliminary Non-MPC Event (DV, OM, and HH) PUF Data Sets |
4/23/2021 | GNRL4050.01: Delivery of the Single-Round Data Exchange (SRD) for Panel 25 Round 1F |
4/23/2021 | UEPD1220.04: Delivery of 2020 PMED Basic Edits Spec |
4/26/2021 | GNRL4049.01: Delivery of the Single-Round Data Exchange (SRD) for Panel 25 Round 1 |
4/26/2021 | GNRL4051.01: Delivery of the Single-Round Data Exchange (SRD) for Panel 25 Round 2 |
4/27/2021 | GNRL3053.02: Final Versions of the 2019 Non-MPC Event (DV, OM, and HH) PUF Codebooks and Final Documents for Use in AHRQ and NCHS Review |
4/29/2021 | COND0987.17: Applying FY18 Masking Rules to FY16 and FY17 Preliminary Conditions Data – Redelivery |
4/30/2021 | WGTS2014.01: Full Year 2019 Panel 23 SAQ Expenditure person weight review output |
4/30/2021 | WGTS2015.01: Full Year 2019 Panel 24 SAQ Expenditure person weight review output |
5/3/2021 | HLTH1058.04: Full-Year 2020 HLTH Constructed Variable Specifications |
5/3/2021 | PCND0163.01: 2020 PCND Basic Edit Specifications |
5/4/2021 | COND0990.01: Delivery: 2020 Conditions Basic Edit Specifications |
5/5/2021 | HLTH1059.01: Full-Year 2020 SDOH Constructed Variable Specifications |
5/6/2021 | ACCS0195.01: 2020 ACCS and COVID Basic Edit Specifications |
5/7/2021 | COND0991.01: 2019 Conditions PUF Specifications |
5/11/2021 | GNRL3055.01: NCHS Checklists and Preliminary Versions of Documents for the FY 2019 MPC Event (IP, ER, OP, OB) PUFs |
5/11/2021 | UEGN2863.01: 2019 Predictive Mean Matching Imputation Method Applied to the Expenditure Imputation of the non-MPC Event Types |
5/12/2021 | COND0990.04: Delivery: 2020 Conditions Basic Edit Specifications |
5/12/2021 | HLTH1060.01: Full-Year 2020 SDOH Web-Based Response Rates |
5/12/2021 | HLTH1060.03: Full-Year 2020 SDOH Web-Based Response Rates |
5/12/2021 | UEGN2864.01: 2019 Predictive Mean Matching Imputation Method Applied to the Expenditure Imputation of the MPC Event Types |
5/13/2021 | HLTH1060.09: Full-Year 2020 SDOH Web-Based Response Rates |
5/13/2021 | UEPD1221.02: Delivery of the 2019 PMED PUF (RX19V01 and RX19V02) |
5/13/2021 | UEPD1221.03: Delivery of 2019 PMED PUF (TC19XTABS.lst, TC19XTABS.xml) |
5/14/2021 | GNRL3056.01: HC-213b, HC-213c, and HC-213h: 2019 Expenditure Event PUFs for Non-MPC Event Types (DV, OM, and HH) and All Related Files for Web Release |
5/17/2021 | WGTS2017.01: FY2019 Consolidated PUF Family Weights review output |
5/18/2021 | WGTS2020.01: Full Year 2019 combined Panels SAQ expenditure person weight for the Consolidated PUF review output |
5/19/2021 | COND0992.01: FY 2019 Preliminary CLNK File |
5/19/2021 | GNRL3057.01: Preliminary Versions of the 2019 MPC Event (IP, ER, OP, OB) PUF Codebooks and Documents for Use in AHRQ and NCHS Review |
5/20/2021 | GNRL3058.01: Preliminary Versions of the 2019 MPC Event (IP, ER, OP, OB) PUF Data Sets |
5/20/2021 | WGTS2010.02: Panel 24 Full Year 2019: Derivation of Eligibility and Response Indicators for the CPS-like Families |
5/20/2021 | WGTS2018.01: FY2019 individual Panel expenditure person weights review output |
5/20/2021 | WGTS1990.01: Deriving Location Variables (Region and MSA) for Panels 23 and 24, Full Year 2019, based on GEO FIPS Codes, using OMB MSA definitions of both Year 2019 and the Current (2020) Year |
5/20/2021 | WGTS2016.01: Creation of CPS Control Total Files Containing the Raking Dimensions for the Full Year 2019 Self-Administered Questionnaire (SAQ) Expenditure Person Weight |
5/21/2021 | UEPD1221.04: Delivery of 2019 PMED PUF: question regarding PRODUCT NAMEs indicate stores |
5/27/2021 | COND0993.01: FY 2016/17 COND Re-Release Masking steps 1-5 |
5/28/2021 | UEPD1221.05: Delivery of the 2019 PMED PUF (RX19V05.PDF, RX19V06.PDF, RX19V05X.PDF, TOP10RX19_USE.PDF, TOP10TC19_USE.PDF, TOP10TC19_EXP.PDF, TOP25RX19_EXP.PDF) |
5/28/2021 | WGTS2021.01: FY2019 DCS Expenditure weight review output |
5/28/2021 | WGTS5027.01: Delivery of the Individual Panel Raked Person Weights for P23/P24 FY19 |
6/1/2021 | WGTS5028.01: Delivery of FY19 Veteran Self-Administered Questionnaire Weight, VSAQW19F, for Expenditure Files |
6/4/2021 | PRPL0162.01: Delivery of the FY 2019 OOPELIG3 Dataset, Benchmarking results, POSTIMPFIN results for final approval of OOPPREM variables, the Preliminary Encrypted Delivery Dataset, and the Preliminary Unencrypted Delivery Dataset |
6/4/2021 | UEPD1221.19: Delivery of 2019 PMED PUF (RX19V05X) SAS dataset and the format files (RX19V05X.sas7bcat, rx19v05xf.sas and rxexpf2.sas) |
6/4/2021 | WGTS5029.01: Delivery of the Individual Panel 23 and Panel 24 SAQ Expenditure Weight for FY2019 |
6/7/2021 | GNRL4024.02: Addendum to the FY 2019 (Panel 23 & Panel 24) Delivery Database Snapshots: Edited Segments since the Previous Delivery of 1/13/21 |
6/8/2021 | GNRL3056.02: HC-213b, HC-213c, and HC-213h: 2019 Expenditure Event PUFs for Non-MPC Event Types (DV, OM, and HH) and All Related Files for Web Release – Updated |
6/8/2021 | GNRL3059.01: NCHS Checklist and Preliminary Version of Delivery Document for the FY 2019 Prescribed Medicines (PMED) PUF |
6/8/2021 | UEPD1221.24: Redelivery of 2019 PMED PUF format files (RX19V05X.sas7bcat and rx19v05xf.sas) |
6/9/2021 | WGTS5030.01: Delivery of the Poverty-Adjusted Family-Level Weight, CPS-Like Family-Level Weight, Poverty-Adjusted DCS and SAQ Weights for FY2019 |
6/11/2021 | GNRL3060.01: HC-213d, HC-213e, HC-213f, and HC-213g: 2019 Expenditure Event PUFs for MPC Event Types (IP, ER, OP, and OB) and All Related Files for Web Release |
6/11/2021 | GNRL4056.01: Delivery of Panel 23 and Panel 24 EVNT, PMED, EPRS, and EPCP Tables |
6/11/2021 | PCND0164.01: 2019 Priority Conditions Benchmarking Table |
6/11/2021 | UEPD1221.26: Deliver the 2019 PMED PUF data (RX19V06.sas7bdat) and the format files (RX19V06.sas7bcat, rxexpv06f.sas and rxexpv06f2.sas) |
6/14/2021 | WGTS5031.01: Delivery of the FY 2019 Expenditure File Final Person Weight – PERWT19F |
6/15/2021 | GNRL3061.01: FY2020 Person-Level Use PUF Variable List Changes for AHRQ Review |
6/16/2021 | GNRL3062.01: Preliminary Versions of the 2019 Prescribed Medicines (PMED) Event PUF Codebook and Delivery Document for Use in AHRQ and NCHS Review |
6/16/2021 | GNRL3063.01: Preliminary Version of the 2019 PMED Event PUF Data Set |
6/17/2021 | UEGN3601.01: Delivery of the Dropped Variables Due to DRB Review � FY19 EXP PUFs for ER, OP, OB, IP, DV and RX |
6/21/2021 | CODE0935.01: PMED Matching Programs LOG and LST Files for FY20 Wave 1 |
6/22/2021 | GNRL3062.02: Final Versions of the 2019 Prescribed Medicines (PMED) Event PUF Codebook and Delivery Document for Use in AHRQ and NCHS Review |
6/25/2021 | GNRL3061.02: FY2020 Person-Level Use PUF Variable List Changes for AHRQ Review |
6/25/2021 | WGTS5032.01: Delivery of the Alternative variance structure file for FY2019 |
6/28/2021 | GNRL4053.01: Delivery of the Single-Round Data Exchange (SRD) for Panel 23 Round 7 |
6/28/2021 | GNRL4054.01: Delivery of the Single-Round Data Exchange (SRD) for Panel 24 Round 5 |
6/28/2021 | GNRL4058.01 and GNRL4058.02: Delivery of End-Of-Round Person-Level Files - P23R7 & P24R5 |
6/28/2021 | GNRL4059.01 and GNRL4059.02: Delivery of End-Of-Round RU-Level Files - P23R7 & P24R5 |
7/7/2021 | GNRL3064.01: HC-213a: Delivery of the 2019 Prescribed Medicines (PMED) PUF and all Related Files for Web Release |
7/8/2021 | EMPL2242.01: Adjusting Panel 23 Round 5 and Round 6 Jobs Using Round 5 Interview Date |
7/9/2021 | GNRL4060.01: Delivery of Panel P25 Round 3 EVNT, PMED, EPRS, and EPCP Tables |
7/9/2021 | HLTH1061.01: FY 2018 SAQ Cross-tabs including recoded sex-specific variables |
7/12/2021 | UEGN3603.01: The Telehealth Visit Type Other Specify Text Strings Recoding for FY2020 |
7/13/2021 | GNRL3065.01: NCHS Checklist and Preliminary Version of the Delivery Document for the FY 2019 Consolidated Data PUF |
7/13/2021 | GNRL3066.01: NCHS Checklist and Preliminary Version of Delivery Document for the FY 2019 Person-Round-Plan (PRPL) PUF |
7/13/2021 | UEGN3602.01: The 2019/2018 QC Finding Tables of the PUF Event Expenditures |
7/14/2021 | EMPL2242.04: Adjusting Panel 23 Round 5 and Round 6 Jobs Using Round 5 Interview Date |
7/16/2021 | DOCM0690.04: Delivery of the 2020 PC Sample file - Wave 3 |
7/16/2021 | DOCM0691.04: Delivery of the 2020 Provider file for NPI coding - Wave 3 |
7/16/2021 | GNRL3067.01: HC216: Preliminary Version of the 2019 Consolidated File |
7/19/2021 | EMPL2242.08: Adjusting Panel 23 Round 5 and Round 6 Jobs Using Round 5 Interview Date |
7/21/2021 | GNRL3068.01: Preliminary Version of the 2019 Appendix to the Event PUFs Delivery Document, and Codebooks for Review |
7/21/2021 | GNRL3069.01: FY 2019 Conditions PUF Preliminary Versions of Codebook and Delivery Document for Use in AHRQ Review |
7/21/2021 | GNRL3070.01: FY 2019 Person-Round-Plan PUF Preliminary Versions of Codebook and Delivery Document for Use in AHRQ and NCHS Review |
7/21/2021 | GNRL3071.01: Preliminary versions of the Codebook and Document for the FY 2019 Consolidated Data PUF for Use in AHRQ and NCHS Review |
7/21/2021 | GNRL3072.01: Preliminary Version of the 2019 Person-Round-Plan (PRPL) PUF Data Set |
7/21/2021 | GNRL3073.01: HC214: Preliminary Version of the 2019 Conditions Data Set |
7/21/2021 | GNRL3074.01: HC213I: Preliminary Versions of the 2019 Appendix to the Event PUFs Data Sets |
7/23/2021 | UEGN3604.01: The FY2020 Initial Variable Construction Specifications |
7/26/2021 | COND0994.01: 2020 Preliminary Conditions File Specifications |
7/26/2021 | EMPL2243.01: Employment Coverage Reported Subsequent to Direct Purchase Coverage – Identical Establishment and Policyholder |
7/27/2021 | GNRL3070.02: FY 2019 Person-Round-Plan PUF Final Versions of Codebook and Delivery Document |
7/27/2021 | GNRL3071.02: Final Versions of the Codebook and Delivery Document for the FY 2019 Consolidated Data PUF |
7/27/2021 | GNRL3075.01: Final Versions of the 2019 Conditions PUF Codebook and Delivery Document for AHRQ Review |
7/27/2021 | GNRL3076.01: Final Versions of the 2019 Appendix to the Event Files PUF Codebooks and Delivery Document for AHRQ Review |
7/28/2021 | DEMO1018.01: FY20 MOPID and DAPID Variable Construction Plan |
7/28/2021 | GNRL3071.03: Final Versions of the Codebook and Delivery Document for the FY 2019 Consolidated Data PUF |
7/28/2021 | GNRL4058.03 and GNRL4059.03: Delivery of End-Of-Round files (Person-Level and RU Level) – P25R3 |
7/28/2021 | GNRL4062.01: Delivery of the Single-Round Data Exchange (SRD) for Panel 25 Round 3 |
7/29/2021 | GNRL3070.03: FY 2019 Person-Round-Plan PUF Final Versions of Codebook and Delivery Document |
7/29/2021 | UEGN3605.01: The ICU indicator for Hospital events 2018-2019 |
7/30/2021 | HLTH1062.01: FY 2018 SAQ Constructed Variables Dataset |
8/2/2021 | COND0993.23: FY 2016/17 COND Re-Release Masking steps 1-5 |
8/2/2021 | FOOD0006.01: FY 2020 Food Security Basic Edit Specifications |
8/3/2021 | UEGN3606.01: The DN Text Strings Recoding for FY2020 |
8/6/2021 | DOCM0693.01: File of Provider Names for FY 2020 |
8/6/2021 | HLTH1063.01: Full-Year 2020 SDOH Hard-copy Response Rates |
8/9/2021 | CODE0936.01: MEPS Delivery of the ICD-10-CM/CCSR Crosswalk and COND Coding Uncodeable Text Strings for FY20 |
8/10/2021 | EMPL2243.07: Employment Coverage Reported Subsequent to Direct Purchase Coverage – Identical Establishment and Policyholder |
8/12/2021 | EMPL2244.01: Adjusted 2020 Panel 23 Round 5 and Round 6 Population Characteristics Variable Specifications for Select Variables – Set 1 |
8/13/2021 | CODE0937.01: pmed coding report - week #1 |
8/13/2021 | GNRL3077.01: HC-213I: Delivery of the Final Appendix to the 2019 Event Files and all Related Files for Web Release |
8/13/2021 | GNRL3078.01: HC-214: Delivery of the Final 2019 Conditions File and All Related Files for Web Release |
8/13/2021 | GNRL3079.01: HC-216: Full Year 2019 Consolidated Use, Expense, and Insurance PUF Delivery for Web Release |
8/13/2021 | GNRL3080.01: HC-215: Delivery of the 2019 Person Round Plan (PRPL) PUF and Related Files for Web Release |
8/16/2021 | EMPL2244.02: Adjusted 2020 Panel 23 Round 5 and Round 6 Population Characteristics Variable Specifications for Select Variables – Set 2 |
8/16/2021 | HLTH1062.08: FY 2018 SAQ |
8/20/2021 | CODE0937.02: pmed coding report - week #2 |
8/26/2021 | GNRL4063.01: Delivery of the Single-Round Data Exchange (SRD) for Panel 26 Round 1 |
8/26/2021 | GNRL4058.04 and GNRL4059.04: Delivery of End-Of-Round files (RU level and Person level) - P26R1 |
8/26/2021 | WGTS 2017.01: New Weighting Memo #2017.01: Derivation of the 2019 Full Year Expenditure Family Weight, MEPS and CPS-Like, for Panel 23 and Panel 24 Combined |
8/30/2021 | CODE0937.03: pmed coding report - week #3 |
8/30/2021 | EMPL2244.03: Adjusted 2020 Panel 23 Round 5 and Round 6 Population Characteristics Variable Specifications for Select Variables – Set 3 |
8/31/2021 | DOCM0694.01: MEPS – 2020 Conditions Authority File After the 2020 HC Condition Coding |
8/31/2021 | UEGN3607.01: Specifications for the 2020 Pre-Imputation UEGN Files |
9/1/2021 | HINS1335.01: Editing Panel 23 Rounds 6 and 7 – Correcting Non-Reviewed Coverage |
9/2/2021 | COND0994.04: 2020 Preliminary Conditions File Specifications |
9/2/2021 | HINS1335.05: Editing Panel 23 Rounds 6 and 7 – Correcting Non-Reviewed Coverage |
9/10/2021 | CODE0937.05: pmed coding report - week #5 |
9/10/2021 | COND0993.02: FY 2016/17 COND Re-Release Masking steps 1-5 |
9/10/2021 | HINS1336.01: Delivery of the P2520 EPCP Cross-tabs, with additional requested tables |
9/10/2021 | HLTH1062.02: Redelivery: FY 2018 SAQ Constructed Variables Dataset |
9/13/2021 | EMPL2244.04: Adjusted 2020 Panel 23 Round 5 and Round 6 Population Characteristic Variable Specifications for Select Variables – Set 4 |
9/14/2021 | DEMO1018.02: FY20 MOPID and DAPID related variables delivery plan |
9/14/2021 | EMPL2245.01: FY2020 JOBS File Specifications for Approval |
9/14/2021 | UEGN3607.02: Deliver updated specifications for the 2020 Pre-Imputation UEGN Files |
9/14/2021 | UEGN3568.01: Delivery of the 2020 Utilization Count Variables Construction Specification |
9/15/2021 | EMPL2244.05: Adjusted 2020 Panel 23 Round 5 and Round 6 Population Characteristic Variable Specifications for Select Variables – Set 5 |
9/15/2021 | UEPD1222.01: Delivery of 2020 PMED Pre-imp files spec |
9/15/2021 | WGTS2020.01: P23P24 FY2019 Person-level SAQ Expenditure Weights |
9/15/2021 | WGTS2021.01: Developing Sample Weights for the MEPS Diabetes Questionnaire Component (DCS) for the Panels 23 and 24 Full Year 2019 Expenditure File (PUF) |
9/16/2021 | COND0994.07: 2020 Preliminary Conditions File Specifications |
9/20/2021 | COND0994.10: 2020 Preliminary Conditions File Specifications |
9/20/2021 | UEGN 2871.01: 2020 HHA Duplicate Rollups |
9/21/2021 | PRPL0163.01: Full Year 2020 PRPL File Revisions to Coverage Record and HMO Variables, JOBS Linking, and Post-Linking Editing |
9/24/2021 | CODE0938.01: Delivery of the Coded FY2020 Industry and Occupation Files |
9/24/2021 | CODE0939.01: Delivery of the Specially Coded FY2009 Industry and Occupation Files |
9/27/2021 | EMPL2244.15: Adjusted 2020 Panel 23 Round 5 and Round 6 Population Characteristic Variable Specifications for Select Variables – Set 5 |
9/27/2021 | PRPL0163.05: Full Year 2020 PRPL File Revisions to Coverage Record and HMO Variables, JOBS Linking, and Post-Linking Editing |
9/27/2021 | PRPL0163.01_AHRQ_sn: Full Year 2020 PRPL File Revisions to Coverage Record and HMO Variables, JOBS Linking, and Post-Linking Editing |
10/1/2021 | CODE0940.01: MEPS 2020 Delivery of PMED Final Reports for Uncodeable, Compounds, Foreign Meds, No-MDDB, Drug Groupings |
10/1/2021 | UEGN 2867.01: 2020 Mom-Baby Linking Specs |
10/4/2021 | DOCM0695.01: Delivery of 2020 Static Tables for SOP After the 2020 HC SOP Coding |
10/4/2021 | EMPL2244.05: Adjusted 2020 Panel 23 Round 5 and Round 6 Population Characteristic Variable Specifications for Select Variables - Set 5 |
10/6/2021 | HINS 1337.01, HINS 1338.01: Delivery of the P2320 and P2420 EPCP Cross-tabs, with additional requested tables |
10/12/2021 | COND0993.03: FY 2016/17 COND Re-Release Masking steps 1-5 |
10/12/2021 | HLTH1064.01: Delivery of FY19 VSAQ Disability Variables |
10/14/2021 | HINS1339.01: HINS - Resource Heavy Tasks, 2021 data |
10/14/2021 | WGTS5033.01: Delivery of the ADMN/DEMO Variables Used for Weights Development for FY20 (P23, P24, and P25) |
10/15/2021 | CODE0941.01: Special Delivery of FY2019 Conditions Data for AHRQ’s Internal Analysis |
10/15/2021 | DOCM0697.01: Delivery of 2020 Static Tables for SRCS After the 2020 HC SRCS Coding |
10/18/2021 | DOCM0696.01: Delivery of the 2020 MPC Pre-Matching Household Component Production File |
10/25/2021 | COND0995.01: FY 2016/17 COND Re-Release Masking steps 6 & 7 |
10/29/2021 | CODE0942.01: MEPS 2020 Delivery of Authority File after PMED Coding and Files for Matching Programs |
10/29/2021 | CODE0943.01: Delivery of 2020 Static Table for WHOBILL - After the 2020 HC WHOBILL Coding |
11/2/2021 | EMPL2245.02: FY 2020 Wage Imputation Specification – Review and Approval Requested |
11/3/2021 | EMPL2246.01: Wage Outlier Editing Flag |
11/4/2021 | DOCM0699.01: Delivery of Person-Level Base and Family Pseudo Weight for FY20 |
11/4/2021 | WGTS5034.01: Delivery of Person-Level Base Weight, Individual Panel Base Weight, Family Membership Flag, and MSA variables for FY20 (P23, P24, and P25) |
11/5/2021 | CODE0943.02: Redelivery of 2020 Static Table for WHOBILL - After the 2020 HC WHOBILL Coding |
11/5/2021 | DOCM0698.01: MEPS - Data Destruction - NHIS 2017 Sample Files |
11/5/2021 | DOCM0697.02: Redelivery of 2020 Static Tables for SRCS After the 2020 HC SRCS Coding |
11/5/2021 | DOCM0695.02: Redelivery of 2020 Static Tables for SOP After the 2020 HC SOP Coding |
11/9/2021 | EMPL2247.01: Approval of Weighted NUMEMP Medians for Panel 23 Round 5-7, Panel 24 Round 3-5 and Panel 25 Round 1-3 of FY 2020 |
11/9/2021 | UEGN3610.01: The Decision for the HS Events Deletion |
11/10/2021 | UEGN3610.03: The Decision for the HS Events Deletion |
11/12/2021 | COND0996.01: MEPS FY20 CLNK and RXLK specs |
11/15/2021 | WGTS2030.01: MEPS Computation of the Person and Family Poststratification Control Totals for December 2020 from the March 2021 CPS (including the poverty level variable) |
11/16/2021 | HINS1340.01: HINS Panel 25 Rounds 1-3 At Any Time/At Interview Date/At 12/31/20 Variables |
11/16/2021 | WGTS2030.01: March 2021 CPS (ASEC) estimates and December 2020 control totals output, digital delivery |
11/17/2021 | GNRL3081.01: Full-Year 2019 CAPI Specifications in HTML Format for Web Release |
11/19/2021 | HINS1341.01: HINS Panel 23 Rounds 6-7 At Any Time/At Interview Date/At 12/31/20 Variables |
11/19/2021 | HINS1341.04: HINS Panel 23 Rounds 6-7 At Any Time/At Interview Date/At 12/31/20 Variables |
11/22/2021 | GNRL3035.02: HC-209: Full Year 2018 Consolidated Use, Expense, and Insurance PUF Delivery for Web Release – Updated |
11/22/2021 | PRPL0164.01: FY20 PRPL Specifications Coverage Record and HMO Variables and Variable Editing: Post JOBS Linking |
11/22/2021 | UEGN3611.01: Deliver to AHRQ for approval specifications for the non-MPC (DN, OM, and HH) Expenditure Event files |
11/23/2021 | FOOD0007.01: FY2020 Food Security PUF Constructed Variables and Labels |
11/23/2021 | HINS1342.01: HINS Panel 24 Rounds 3-5 At Any Time/At Interview Date/At 12/31/20 Variables |
11/23/2021 | UEPD1222.02: 2020 (Panel 23 & 24 & 25) Household Prescribed Medicine and Associated Files - Set 1 |
11/23/2021 | WGTS2029.01: MEPS Computation of the Person and Family Poststratification Control Totals for March 2021 from the March 2021 CPS (including the poverty level variable). |
11/29/2021 | ADMN0923.01: Weighted Cross-tabs delivery of ADMN and DEMO variables |
11/29/2021 | COND0996.05: MEPS FY20 CLNK and RXLK specs |
11/29/2021 | EMPL2245.07: FY 2020 Wage Imputation Specification – Review and Approval Requested |
11/29/2021 | UEGN#s 2873.01, 2874.01 and 2874.01A: 2020 Specs for Total Charge and Payment Imputation Class Variables |
12/1/2021 | EMPL2249.01: Recommendation for Resetting Select Panel 23 Adjusted EMPST31 Records |
12/1/2021 | GNRL3082.01: Full-Year 2019 CAPI Help Text in HTML Format for Web Release |
12/1/2021 | PRPL0164.04: FY20 PRPL Specification JOBS Link and Variable Editing |
12/1/2021 | UEGN#s 2876.01, 2876.01A, 2877.01 and 2877.01A: 2020 Specifications for Processing Flat-Fee Bundles and Creating an ER-HS Link on Unmatched HC Events |
12/1/2021 | WGTS2028.01: Panel 23 Full Year 2020 Use person weight digital review output |
12/2/2021 | HINS1343.01: Resource Heavy Tasks, 2021 data |
12/2/2021 | UEGN 2878.01: 2020 Listing of High Charge Events |
12/3/2021 | DSDY0066.01: FY 2020 Delivery of the DSDY “Missed Days” top code values for AHRQ approval |
12/3/2021 | HINS1344.01: Results of the QC Cross-Tabs for the HINS 2020/Gatekeeper FY variables |
12/6/2021 | DEMO1019.01: Delivery of the Output Listings for Case Review of the MOPID and DAPID Variables’ Construction for FY2020 |
12/7/2021 | PRPL0164.10: FY20 PRPL Specifications Coverage Record and HMO Variables and Variable Editing: Post JOBS Linking |
12/8/2021 | GNRL3061.03: FY2020 Person-Level Use PUF Variable List Changes for AHRQ Review |
12/8/2021 | PRPL0164.12: FY20 PRPL Specifications Coverage Record and HMO Variables and Variable Editing: Post JOBS Linking |
12/9/2021 | EMPL2251.01: Panel 23 EMPST31 - Cross-Year Comparisons and Setting Differences between Unadjusted data/spec and Adjusted data/spec |
12/9/2021 | UEGN3613.01: Delivery of the FY20 Pre-Imputation files |
12/10/2021 | DOCM0700.01: 2021 MPC sample file specs |
12/10/2021 | DOCM0701.01: 2021 PC sample file specs |
12/10/2021 | DOCM0702.01: 2021 provider file for NPI coding specs |
12/10/2021 | EMPL2250.01: Editing Extreme High Wage Outliers |
12/10/2021 | UEGN2880.01: 2020 Specifications for Initializing MPSAMTs |
12/11/2021 | EMPL2251.04: Panel 23 EMPST31 - Cross-Year Comparisons and Setting Differences between Unadjusted data/spec and Adjusted data/spec |
12/13/2021 | GNRL3083.01: Delivery of Data Reference Year PowerPoint Slide (2018–2020) |
12/13/2021 | UEGN3612.01: Delivery of the 2019 Post-Imputation Files for the MEPS Master Files |
12/13/2021 | UEGN 2868.01: 2020 Specifications for Expenditure Allocation of Provider-Reported Lump Sum Payment from 2 Sources |
12/14/2021 | EMPL2250.05: Editing Extreme High Wage Outliers |
12/14/2021 | EMPL2251.06: Panel 23 EMPST31 - Cross-Year Comparisons and Setting Differences between Unadjusted data/spec and Adjusted data/spec |
12/14/2021 | WGTS2026.01: Panel 25 Full Year 2020 Use person weight digital review output |
12/14/2021 | WGTS2012.02: Delivery of the FY2020 Variance strata from WGTS to DDG |
12/15/2021 | EMPL2251.09: Panel 23 EMPST31 - Cross-Year Comparisons and Setting Differences between Unadjusted data/spec and Adjusted data/spec |
12/15/2021 | UEGN2881.01: 2020 Specifications for MPC Rolling Event Edits |
12/15/2021 | WGTS5036.01: Delivery of the Variance Strata and PSU Variables for FY2020 |
12/16/2021 | DSDY0067.01: FY2020 Variable File Request |
12/16/2021 | UEPD1222.03: 2020 (Panel 23 & 24 & 25) PMED Supplemental File - Set 2: Person-Level File and Additional 3 Segment Variable Files |
12/16/2021 | WGTS2024.01: Panel 24 Full Year 2020 Use person weight digital review output |
12/17/2021 | UEGN2882.01: 2020 Specifications for HHA Rolling Event Edits |
12/20/2021 | UEGN2870.01: Specifications to Re-allocate Equally Distributed Imputed Expenditures |
12/20/2021 | WGTS2040.01: Full Year 2020 Nursing Home and Mortality adjustments for the Population Characteristics weights review output to AHRQ |
12/20/2021 | WGTS2026.02: Panel 25 Full Year 2020 Use person weight digital review output |
12/21/2021 | WGTS2044.01: Full Year 2020 Population Characteristics PUF, combined Panels person weights review output |
12/22/2021 | HINS1345.01: Delivery of the HINS Ever Insured in FY 2020 variables LASTAGE and INSCV920 to be added to the internal “MEPS Master Files” |
12/22/2021 | HINS1348.01: Delivery of the FY 2020 HINS Medicare Part D supplemental variables |
12/22/2021 | HLTH1066.01: 2020 BMI Cross-tabulations and Frequencies |
12/22/2021 | UEGN2872.01: 2020 Specifications for Preparing Prior-Year Donors |
12/22/2021 | UEGN2883.01: 2020 Specifications for Total Charge Imputation |
12/22/2021 | WGTS5037.01: Delivery of Person-Level Use PUF Weight, Single Panel Person Weight, and MSA20_13 Variables for FY20 |
12/23/2021 | GNRL3084.01: Delivery of Consolidated Documentation of Changes made to the MEPS Programming Specifications - Through 2020 - in Electronic Format |
12/23/2021 | HINS1346.01: Delivery of the 2020 HINS Month-by-Month, Tricare plan, Private, Medicare, and Medicaid HMO/Gatekeeper, and PMEDIN/DENTIN Variables |
12/23/2021 | HINS1347.01: Delivery of the 2020 HINS Building Block Variables and COVERM Tables for Panel 23 Rounds 6–7, Panel 24 Rounds 3–5, and Panel 25 Rounds 1–3 |
12/23/2021 | UEPD1222.04: 2020 (Panel 23 & 24 & 25) PMED Supplemental File - set 3: Person/Round-Level Files |
12/27/2021 | UEGN2869.01: 2020 Specifications for Post-PMM Expenditure Imputation |
12/27/2021 | UEGN2884.01: 2020 Specifications for Last Step Edits |
12/27/2021 | UEGN3615.01: Deliver to AHRQ for approval specifications for the MPC (OB, OP, ER, and IP) Expenditure Event files |
12/30/2021 | EMPL2246.02: Wage Outlier Editing Flag – revision and unspecified pattern |
12/31/2021 | EMPL2246.07: Wage Outlier Editing Flag – revision and unspecified pattern |