Deliverable Number: 20.102
Contract Number: HHSA-290-2016-00004I
June 15, 2021
Authors
Westat
Westat Reference Number: 2-7-634
Draft
Submitted to:
Agency for Healthcare Research and Quality
Center for Financing, Access, and Cost Trends
560 Fishers Lane
Rockville, MD 20850
Submitted by:
Westat
An Employee-Owned Research Corporation®
1600 Research Boulevard
Rockville, Maryland 20850-3129
(301) 251-1500
Introduction
1 Sample
1.1 Sample Composition
1.2 Sample Delivery and Processing
2 Instrument and Materials Design
2.1 Introduction
2.2 Changes to CAPI Instrument for 2020
2.3 Testing of the Questionnaire and Interviewer Management System
2.4 Changes to Materials and Procedures for 2020
3 Recruiting and Training
3.1 Field Interviewer Recruiting for 2020
3.2 2020 Interviewer Training
3.2.1 Experienced Interviewer Training
3.2.2 Continuing Education for All Interviewers
4 Data Collection
4.1 Data Collection Procedures
4.2 Data Collection Results: Interviewing
4.3 Data Collection Results: Authorization Form Signing Rates
4.4 Data Collection Results: Self-Administered Questionnaire (SAQ) and Diabetes Care Supplement (DCS) Collection Rates
4.5 Policy Booklet Data Collection: Methods and Results
4.6 Quality Control
4.7 Security Incidents
5 Home Office Support of Field Activities
5.1 Preparation for Field Activities
5.2 Support During Data Collection
6 Data Processing and Data Delivery
6.1 Processing to Support Data Delivery
6.1.1 Schedules for Data Delivery
6.1.2 Data Quality Control System
6.1.3 Transformation
6.1.4 TeleForm/Data Editing of Scanned Forms
6.1.5 Coding
6.2 Data Delivery
6.2.1 Variable Construction
6.2.2 File Deliveries
Appendix A Comprehensive Tables – Household Survey
Table 1-1 Initial MEPS sample size (RUs) and number of NHIS PSUs, all panels
Table 1-2 Data collection periods and starting RU-level sample sizes, spring 2016 through fall 2020
Table 1-3 Percentage of NHIS households with partially completed interviews in panels 3 to 25
Table 1-4 Distribution of panel 25 sampled RUs by sample domain
Table 2-1 Supplements to the CAPI core questionnaire (including hard-copy materials) for 2020
Table 3-1 Spring attrition rate among new and experienced interviewers, 2016-2020
Table 3-2 Fall attrition rate among new and experienced interviewers, 2016-2020
Table 3-3 Annual attrition rate among new and experienced interviewers, 2016-2020
Table 4-1 Authorization form missing rate for spring 2020
Table 4-1A Data collection schedule and number of weeks per round of data collection, 2020
Table 4-2 Case potential categories for classifying and prioritizing case work, spring 2020
Table 4-3 MEPS HC data collection results, panels 16 through 25
Table 4-4 Response rates by data collection year, 2010-2020
Table 4-5 Summary of MEPS Round 1 response and nonresponse, 2015-2020 panels
Table 4-6 Summary of MEPS Round 1 response, 2015-2020 panels, by NHIS completion status
Table 4-7 Summary of MEPS panel 25 Round 1 response rates, by sample domain by NHIS completion status
Table 4-8 Summary of MEPS Round 1 results for RUs who ever refused, panels 19-25
Table 4-9 Summary of MEPS Round 1 results for RUs who were ever traced, panels 19-25
Table 4-10 Interview timing comparison, panels 19 through 25 (mean minutes per interview, single-session interviews)
Table 4-11 Mean contact attempts by NHIS completion status, Round 1 of panels 23-25
Table 4-12 Signing rates for medical provider authorization forms for panels 18 through 25
Table 4-13 Signing rates for pharmacy authorization forms for panels 18 through 25
Table 4-14 Results of Self-Administered Questionnaire (SAQ) collection for panels 19 through 25
Table 4-15 Results of Diabetes Care Supplement (DCS) collection for panels 18 through 24
Table 5-1 Number and percent of respondents who called the respondent information line, 2016-2020
Table 5-2 Calls to the respondent information line, 2019 and 2020
Table 6-1 2020 cases with comments or data check issues
Table 6-2 Total number of cases with comments by category
Table A-1 Data collection periods and starting RU-level sample sizes, all panels
Table A-2 MEPS household survey data collection results, all panels
Table A-3 Response rates by data collection year
Table A-4 Summary of MEPS Round 1 response and non-response
Table A-5 Summary of Round 1 response by NHIS completion status
Table A-6 Summary of MEPS Round 1 results for all RUs who ever refused
Table A-7 Summary of MEPS Round 1 results for RUs who were ever traced, panels 15-24
Table A-8 Interview timing comparison (mean minutes per interview, single-session interviews)
Table A-9 Mean contact attempts by NHIS completion status, Round 1
Table A-10 Signing rates for medical provider authorization forms
Table A-11 Signing rates for pharmacy authorization forms
Table A-12 Results of Self-Administered Questionnaire (SAQ) collection
Table A-13 Results of Diabetes Care Supplement (DCS) collection
Table A-14 Results of patient profile collection
Table A-15 Calls to respondent information line
Table A-16 Files delivered during 2020
Figure 6-1 Blaise to DEx Transformation
The Household Component of the Medical Expenditure Panel Survey (MEPS-HC, Contract 290-2016-00004I, awarded July 1, 2016, and Contract 75Q80120D00024, awarded July 13, 2020) is the central component of the long-term research effort sponsored by the Agency for Healthcare Research and Quality (AHRQ) to provide timely and accurate data on access to, use of, and payments for health care services by the U.S. civilian non-institutionalized population. The project has been in operation since 1996, each year producing a series of annual estimates of health insurance coverage, health care utilization, and health care expenditures. This report documents the principal design, training, data collection, and data processing activities of the MEPS-HC for survey year 2020.
Data are collected for the MEPS-HC through a series of overlapping household panels. Each year a new panel is enrolled for a series of five in-person interviews conducted over a 2½-year period. Each year a panel completing its fifth interview ends its participation, with the exception this year of Panel 23, as described in the section below on changes due to COVID-19. This report describes work performed for all of the panels active during calendar year 2020. Design work conducted during the year consisted of updates and testing for the instruments fielded during the fall of 2020 and spring of 2021. Data collection operations in 2020 covered Panel 23 Rounds 5 and 6, Panel 24 Rounds 3 and 4, and Panel 25 Rounds 1 and 2. Data processing activity focused on delivery of full-year utilization and expenditure files for calendar year 2018.
The report touches lightly on procedures and operations that remained unchanged from prior years, focusing primarily on results of the 2020 operations and features of the project that were new, changed, or enhanced for 2020. Tables in the body of the text highlight 2020 results, with limited comparison to prior years. A set of tables showing data collection results over the history of the project is in Appendix A.
Chapter 1 of the report describes the 2020 sample and activities associated with preparing the sample for fielding. Chapters 2 through 5 discuss activities associated with the data collection for 2020: updates to the survey questionnaire and field procedures; field staff recruiting and training; data collection operations and results; and home office support of field activities. Chapter 6 describes data processing and data delivery activities.
Changes Due to COVID-19
All MEPS Household Component (MEPS-HC) face-to-face interviewing ceased on March 17, 2020, due to the impact of COVID-19 on American life. Data collection switched to the telephone mode. Before the shift to telephone, the majority of spring 2020 interviews were done in-person. In fall 2020, in-person data collection resumed briefly in some areas for specific cases, but most data collection remained on the phone for the duration of the field period. In the spring, Round 1 had 48.7 percent of interviews conducted by telephone, Round 3 had 35.4 percent, and Round 5 had 27.0 percent. Almost all of the fall interviews (98.2 percent of both Round 2 and Round 4, 98.7 percent of Round 6) were done by telephone. (In a typical year, around 5-8 percent of interviews are conducted by telephone, mostly student interviews and Round 5 interviews.)
MEPS-HC made several modifications to project systems, processes, and procedures to respond to the pandemic:
Enhancing the Quality of Telephone Interviewing. A website built for collection of health plan cost-sharing documents was repurposed to include show cards and other documents that interviewers would normally present in-person on paper to respondents. Interviewers requested that respondents refer to the online show cards in answering each item or read the show cards out loud, mirroring the in-person protocol. Interviewers received headsets and telephone interviewing protocols, including data quality protocols specific to each round of data collection.
Training for Telephone Interviewing. Interviewers received remote training and continuous guidance on how to shift to telephone data collection. Additional training stressed inclusion of telehealth visits and data quality protocols; newsletter items provided additional guidance on recording telehealth events; and, after several weeks on the telephone, a memo provided interviewers with feedback about data quality and use of show cards.
Maximizing Response Rates. The project developed and sent COVID-specific letters and postcards tailored for each panel and round to notify households that the study was ongoing and to expect telephone outreach. The project also added efforts to increase return of hard-copy materials, particularly authorization forms (AFs), including a formal protocol for reminder calls, re-mailing unreturned AFs, and a modified in-person protocol for the retrieval of completed AFs, which considerably improved the AF collection rate.
Extension of Panels 23 and 24. Anticipating potential negative impacts of telephone interviewing on response rates and the number of households that would be included in 2020 data and beyond, a decision was quickly made to invite Panel 23 respondents to participate in a fall Round 6 interview with a reference date back to January 1, 2020, and to prepare the computer-assisted personal interviewing (CAPI) instrument and other systems to extend Panel 23 through nine rounds and Panel 24 through at least seven rounds.
Extension of Panel 25 Round 1 into Fall Data Collection. Given the uncertain duration of the COVID-19 pandemic, the fall CAPI instrument and other systems were modified to allow for extension of the spring rounds (and ability to skip the relevant fall round), with the hope that in-person data collection could resume in the fall. Panel 25 Round 1 was extended into the fall field period. CAPI system changes included ensuring collection of authorization forms (AFs) that would normally have been collected for these cases in the fall in Round 2.
Fall CAPI Instrument Changes. The project added three COVID conditions to the conditions look-up list and made minor adjustments to the text in the provider probes section to remind respondents to include telemedicine events.
Preparation for Fall In-Person Data Collection. COVID-19 in-person mitigation protocols were developed and distributed to interviewers who were authorized to conduct in-person interviewing. Interviewers received training on use of personal protective equipment (PPE) and COVID-19 mitigation. In fall 2020, in-person data collection resumed briefly in some primary sampling units (PSUs) for Rounds 1, 2, and 4 and for hard-to-reach or hearing-impaired Round 6 cases. All other data collection remained on the phone for the rest of the field period. Using Westat’s COVID Dashboard for Household Surveys, MEPS monitored conditions for safe in-person interviewing and AF collection. In-person efforts both produced in-person interviews and enhanced the ability to make telephone interview appointments.
Each year a new, nationally representative sample for the Medical Expenditure Panel Survey Household Component (MEPS-HC) is drawn from among households responding to the previous year’s National Health Interview Survey (NHIS). Up until 2020, households in a new panel participated in a series of five interviews that collect data covering two full calendar years. For each calendar year the sample respondents from two panels—one completing its first year in the study (Round 3) and one completing its second year (Round 5)—have been combined for analysis purposes, resulting in a series of annual estimation files. In 2020, with the onset of the COVID-19 pandemic, there were concerns about declining response rates as well as challenges in recruiting respondents by telephone for the latest MEPS panel, Panel 25. As a result, respondents associated with Panel 23, the MEPS panel scheduled to retire in 2020 after five rounds of data collection, were invited to remain in the study and complete a third year of data collection.
The sample for Panel 25 was selected from among households responding to the NHIS in the preceding year, based on the NHIS sample design initially implemented in 2016 (as were Panels 22-24). Specifically, the MEPS household sample was randomly selected from among households that participated in the NHIS during the first three quarters of 2019 and that had been assigned to NHIS Panels 1 and 3, the NHIS panels designated for MEPS.
This chapter describes the 2020 MEPS sample drawn from 2019 NHIS responding households as well as steps taken to prepare the new sample for fielding.
Table 1-1 shows the starting sample sizes in terms of the number of reporting units (RUs) for all MEPS Panels through Panel 25 and the number of MEPS PSUs from which each Panel was drawn. Note that the change in the number of PSUs for Panel 12 reflects the redesign of the NHIS sample implemented in 2006 (thus, affecting MEPS in 2007), following the 2000 decennial census. The number of PSUs for Panel 25 is based on the number of PSUs associated with MEPS after the 2016 NHIS sample redesign, the fourth such MEPS Panel under this design. The reductions in the number of PSUs after Panel 22 stemmed from further modifications to the NHIS design. The MEPS sample units presented are RUs, each of which represents a set of related persons living together within the same NHIS responding household selected for MEPS participation. Related members of sampled NHIS households who move as a unit (as well as individuals who move separately) during the MEPS data collection period form new RUs for interviewing purposes. Each such new RU is followed over the course of the five MEPS data collection rounds and interviewed at its new address.
MEPS data collection is conducted in two main fielding periods each year. Typically, during the January-June period, Round 1 of the new panel and Rounds 3 and 5 of the two continuing panels are fielded, with the panel in Round 5 retiring at mid-year. Normally, during the July-December period, Round 2 of the new panel and Round 4 of the remaining continuing Panel are fielded. However, with a third Panel added for the first time in 2020, a Round 6 for Panel 23 was also fielded. Table 1-2 summarizes the combined workload for the January-June and July-December periods from spring 2016 through fall 2020.
Panel | Initial sample size (RUs)* | MEPS PSUs |
---|---|---|
1 | 10,799 | 195 |
2 | 6,461 | 195 |
3 | 5,410 | 195 |
4 | 7,103 | 100 |
5 | 5,533 | 100 |
6 | 11,026 | 195 |
7 | 8,339 | 195 |
8 | 8,706 | 195 |
9 | 8,939 | 195 |
10 | 8,748 | 195 |
11 | 9,654 | 195 |
12 | 7,467 | 183 |
13 | 9,939 | 183 |
14 | 9,899 | 183 |
15 | 8,968 | 183 |
16 | 10,417 | 183 |
17 | 9,931 | 183 |
18 | 9,950 | 183 |
19 | 9,970 | 183 |
20 | 10,854 | 183 |
21 | 9,851 | 183 |
22 | 9,835 | 168 |
23 | 9,960 | 143 |
24 | 9,976 | 139 |
25 | 10,008 | 139 |
* RUs: Reporting units.
Over the years shown in Table 1-2, the combined spring and fall workload has ranged from a low of 34,126 in 2019 to a high of 40,084 in 2016. Typically, the interviewing workload during the spring field period, when three panels are active, is substantially larger than during the fall. In 2020, however, the fall field period also had three active panels, although only 15,633 cases were fielded. The spring 2020 workload of 19,213 RUs was the lowest of the 5 years shown.
Each new MEPS Panel includes some oversampling of population groups of particular analytic interest. Since 2010 (Panel 15), the set of sample domains has included oversamples of Asians, Blacks, and Hispanics. All households set aside in the NHIS for MEPS that have at least one household member in any of these three categories (Asian, Black, or Hispanic) are included in the MEPS sample with certainty. “White and other race” households have been partitioned into two sample domains and subsampled at varying rates across the years. These domains reflect whether an NHIS responding household characterized as “White or other race” provided “complete” information at the household level for the NHIS or if only “partially complete” information was provided.
Data collection period | RU-level sample size |
---|---|
January-June 2016 | 24,694 |
Panel 19 Round 5 | 6,856 |
Panel 20 Round 3 | 7,987 |
Panel 21 Round 1 | 9,851 |
July-December 2016 | 15,390 |
Panel 20 Round 4 | 7,729 |
Panel 21 Round 2 | 7,661 |
January – June 2017 | 24,773 |
Panel 20 Round 5 | 7,611 |
Panel 21 Round 3 | 7,327 |
Panel 22 Round 1 | 9,835 |
July – December 2017 | 14,396 |
Panel 21 Round 4 | 7,025 |
Panel 22 Round 2 | 7,371 |
January – June 2018 | 23,768 |
Panel 21 Round 5 | 6,899 |
Panel 22 Round 3 | 7,023 |
Panel 23 Round 1 | 9,846 |
July – December 2018 | 14,123 |
Panel 22 Round 4 | 6,789 |
Panel 23 Round 2 | 7,334 |
January – June 2019 | 20,723 |
Panel 22 Round 5 | 6,624 |
Panel 23 Round 3 | 6,773 |
Panel 24 Round 1 | 7,326 |
July – December 2019 | 13,403 |
Panel 23 Round 4 | 6,569 |
Panel 24 Round 2 | 6,834 |
January – June 2020 | 19,213 |
Panel 23 Round 5 | 6,413 |
Panel 24 Round 3 | 6,382 |
Panel 25 Round 1 | 6,418 |
July – December 2020 | 15,633 |
Panel 23 Round 6 | 5,264 |
Panel 24 Round 4 | 5,574 |
Panel 25 Round 2 | 4,795 |
As background, the partitioning of the “White, Other” domain into these two domains began in 2011 (Panel 16). The partial completes were sampled at a lower rate than the full completes in order to lessen the impact on the field effort resulting from the difficulty of gaining the cooperation of these households. The last two columns in Table 1-3 show the subsampling rates for the two groups since Panel 16. The partial completes in the “White, Other” domain have been subsampled at rates ranging from a low of 40 percent (Panel 17) to a high of 53 percent (Panel 20).
Panel | Percentage with partially completed interviews | Subsampling rate for NHIS completes in “White, other” domain | Subsampling rate for partial completes in “White, other” domain |
---|---|---|---|
3 | 10 | ||
4 | 21 | ||
5 | 24 | ||
6 | 22 | ||
7 | 17 | ||
8 | 20 | ||
9 | 19 | ||
10 | 16 | ||
11 | 23 | ||
12 | 19 | ||
13 | 25 | ||
14 | 26 | ||
15 | 21 | ||
16 | 25 | 79 | 46 |
17 | 19 | 51 | 40 |
18 | 22 | 63 | 43 |
19 | 18 | 66 | 42 |
20 | 19 | 84 | 53 |
21 | 22 | 81 | 49 |
22 | 15 | 77 | 49 |
23 | 12 | 79 | 49 |
24 | 12 | 79 | 50 |
25 | 11 | 77 | 50 |
*The figures in the second column of the table are the proportion of partial completes in the total delivered sample, after subsampling. The figures in the third and fourth columns are subsampling rates applied to the two White/Other subdomains in Panels 16 through 25.
Sample domain | Number | Percent |
---|---|---|
Asian | 720 | 7.19 |
Black | 1,840 | 18.39 |
Hispanic | 1,400 | 13.99 |
White, other | 6,048 | 60.43 |
NHIS complete | 5,594 | 55.90 |
NHIS partial complete | 454 | 4.54 |
Total | 10,008 | 100.00 |
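For illustration, the domain assignment and subsampling logic described above can be sketched as follows. The Panel 25 rates for the “White, other” domain (77 percent for NHIS completes and 50 percent for partial completes) come from Table 1-3; the record layout, the priority order for assigning a household to a single minority domain, and the use of simple Bernoulli subsampling are assumptions for illustration only and do not reflect the production sampling programs.

```python
import random

# Panel 25 subsampling rates for the "White, other" domain (Table 1-3).
# Households in the Asian, Black, and Hispanic domains are kept with certainty.
SUBSAMPLING_RATES = {
    "White, other - NHIS complete": 0.77,
    "White, other - NHIS partial complete": 0.50,
}

def assign_domain(household):
    """Assign an NHIS household to a single MEPS sample domain.

    `household` is an illustrative dict, e.g.
    {"members": [{"hispanic": False, "race": "Black"}], "nhis_complete": True}.
    The priority order among the minority domains is assumed.
    """
    if any(m["hispanic"] for m in household["members"]):
        return "Hispanic"
    races = {m["race"] for m in household["members"]}
    if "Asian" in races:
        return "Asian"
    if "Black" in races:
        return "Black"
    status = "complete" if household["nhis_complete"] else "partial complete"
    return f"White, other - NHIS {status}"

def select_for_meps(household, rng=random):
    """Return the domain and whether the household is retained after subsampling."""
    domain = assign_domain(household)
    rate = SUBSAMPLING_RATES.get(domain, 1.0)   # minority domains: retained with certainty
    return domain, rng.random() < rate
```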
The 2020 MEPS sample was received from AHRQ and NCHS in two deliveries. The first delivery, containing households sampled from the first two quarters of the 2019 NHIS, was received on October 4, 2019. Households selected from the third quarter of the NHIS were delivered on December 11, 2019.
This initial delivery of the first two-thirds of the new sample is instrumental to the project’s schedule for launching interviewing each year in early January. The partial file gives insight into the demographic and geographic distribution of the households in the new Panel. This information, when combined with information on older Panels continuing in the new year, guides project decisions on the number and location of new interviewers to recruit.
Upon receipt of the first portion of the 2020 sample, project staff also reviewed the NHIS sample file formats to identify any new variables or values and to make any necessary changes to the project programs that use the sample file information. Following this initial review, staff proceeded with the standard processing through which the NHIS households are reconfigured to conform to MEPS reporting unit definitions and prepared the files needed for advance mailouts and interviewer assignments. The early sample delivery also allows time for checking and updating NHIS addresses to improve the quality of the initial mailouts and to identify households that have moved since the NHIS interview.
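As a simplified illustration of this reconfiguration step, the sketch below groups the members of each NHIS responding household into reporting units of related persons living together. The record layout and grouping key are assumptions for illustration; the actual MEPS RU formation rules handle many additional situations (movers, students, and newly formed RUs).

```python
from collections import defaultdict

def form_reporting_units(nhis_members):
    """Group NHIS household members into MEPS reporting units (RUs).

    `nhis_members` is a hypothetical list of dicts with a household identifier
    and a family identifier marking sets of related persons, e.g.
    {"hh_id": "H001", "family_id": 1, "person": "Alice"}.
    Each (household, family) combination becomes one starting RU.
    """
    rus = defaultdict(list)
    for member in nhis_members:
        rus[(member["hh_id"], member["family_id"])].append(member["person"])
    return dict(rus)

# Example: one household containing two unrelated families yields two RUs.
members = [
    {"hh_id": "H001", "family_id": 1, "person": "Alice"},
    {"hh_id": "H001", "family_id": 1, "person": "Ben"},
    {"hh_id": "H001", "family_id": 2, "person": "Carla"},  # unrelated housemate
]
print(form_reporting_units(members))
# {('H001', 1): ['Alice', 'Ben'], ('H001', 2): ['Carla']}
```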
Each year, the project makes a number of changes to the CAPI instrument used to collect MEPS data as well as to the field procedures followed by the interviewers who collect the data. MEPS implemented CAPI modernization as part of the technology upgrade launched in spring 2018. For 2020, there were a few significant revisions to the instrument in addition to minor ones, as detailed below.
Each data collection cycle, AHRQ works with Westat to define a set of modifications to the MEPS-HC instrument. Some modifications are new items or new sections, whereas others are updates or fixes to existing items.
One change was made to the NHIS questionnaire in 2019: the NHIS no longer collects some of the information that MEPS previously preloaded into the re-enumeration section of the Round 1 questionnaire and the face sheet. NHIS continues to enumerate everyone who is part of the family and selects one sampled adult and one sampled child, but it no longer collects full demographic information for all household members. Most of the demographic information for the sampled adult and sampled child is available, but it is missing for the other dwelling unit (DU) members. Information missing from CAPI includes date of birth, marital status, and potentially last name. This affected the interview process, as interviewers have less information to verify, more information to collect, and less context when approaching the household.
The changes for the 2020 data collection period, both spring and fall, are summarized by section below:
Start/Restart (ST). For Round 1 cases where the RU members had a refused or don’t know value for the name during the NHIS, an algorithm was created that incorporated the person’s age and sex if known. Instead of seeing empty entries in the name field for these Round 1 cases, interviewers now see something like “24 year old female refused.”
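The exact algorithm is not reproduced here, but its effect can be sketched as a small helper that assembles a display label from whatever demographics are known; the function name, inputs, and formatting below are illustrative assumptions.

```python
def placeholder_name(age=None, sex=None, nhis_status="refused"):
    """Build a roster display label for an RU member whose name was refused
    or unknown in the NHIS, e.g. '24 year old female refused'.
    Any component that is unknown is simply omitted.
    """
    parts = []
    if age is not None:
        parts.append(f"{age} year old")
    if sex:
        parts.append(sex.lower())
    parts.append(nhis_status.lower())
    return " ".join(parts)

print(placeholder_name(24, "Female"))                           # '24 year old female refused'
print(placeholder_name(sex="Male", nhis_status="don't know"))   # "male don't know"
```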
Priority Condition Enumeration (PE). To reduce the number of other (specify) responses at PE90, which asks the respondent to specify the kind of heart condition (coronary heart disease, angina, heart attack, or another kind of heart disease), a show card was added to this question.
Conditions Look-up. In spring 2020, a conditions look-up feature was added to CAPI in order to increase data quality for conditions as well as reduce burden for interviewers and respondents. Interviewers can now search a list of approximately 980 predetermined condition names. There is an option to select a condition name from the look-up or add one manually. The new condition look-up functions similarly to the provider search look-up in that it uses a trigram search method. This look-up also formalized the probing requirements for conditions. Due to the COVID-19 pandemic, in fall 2020, three new conditions were added to the condition look-up.
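As noted above, the look-up uses a trigram search method similar to the provider search. The sketch below shows one common way to implement trigram matching; the normalization, scoring, and ranking details of the production look-up are assumptions.

```python
def trigrams(text):
    """Return the set of 3-character substrings of a normalized string."""
    s = f"  {text.strip().lower()} "          # pad so short words still yield trigrams
    return {s[i:i + 3] for i in range(len(s) - 2)}

def trigram_similarity(a, b):
    """Jaccard similarity between the trigram sets of two strings."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def search_conditions(query, condition_list, limit=5):
    """Rank condition names by trigram similarity to the typed query."""
    scored = [(trigram_similarity(query, name), name) for name in condition_list]
    return [name for score, name in sorted(scored, reverse=True)[:limit] if score > 0]

conditions = ["hypertension", "hyperlipidemia", "type 2 diabetes", "COVID-19"]
print(search_conditions("hypertensn", conditions))  # 'hypertension' ranks first
```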
Provider Probes (PP). Due to the pandemic, there was a global shift in the provision of health care toward telemedicine, a rapidly growing part of the nation’s health care services that medical providers have had to adopt in their practices to accommodate the distancing the pandemic has imposed on everyday life. Health professionals use “information and communication technologies (such as computers, the Internet, and cell phones) for the exchange of valid information for diagnosis, treatment and prevention of disease and injuries”1 (World Health Organization). To capture this type of care, MEPS added language to questions PP40 and PP200 in the Provider Probes (PP) section to include care received by telemedicine.
Date Picker. A fill was added for Round 5 hospital and institutional (HS/IC) stays in the date picker interviewer instructions for discharge date. It now instructs the interviewer to consider a person to still be in the hospital if the person was still there on December 31 rather than “today” (the interview date). This change helps prevent interviewer confusion in Round 5 when respondents report an HS/IC event on the end date of the reference period (12/31) and a discharge date past the end of the reference period.
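The underlying rule can be sketched as follows, using illustrative field names: a Round 5 HS/IC stay is treated as ongoing if the person was still in the hospital on December 31, the end of the reference period, even if a later discharge date is reported.

```python
from datetime import date

REFERENCE_PERIOD_END = date(2020, 12, 31)   # end of the Round 5 reference period

def still_hospitalized_at_period_end(admit_date, discharge_date=None):
    """Return True if the person should be treated as still in the hospital
    as of December 31 (the reference period end), i.e. no discharge date was
    reported or the reported discharge falls after the reference period.
    """
    if admit_date > REFERENCE_PERIOD_END:
        return False                         # stay began outside the reference period
    return discharge_date is None or discharge_date > REFERENCE_PERIOD_END

print(still_hospitalized_at_period_end(date(2020, 12, 30), date(2021, 1, 4)))    # True
print(still_hospitalized_at_period_end(date(2020, 12, 20), date(2020, 12, 28)))  # False
```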
Event Roster (EV). When following-up on linked HS, ER, and IC events, the roster that displays eligible events to be linked was previously displaying events with a delete flag. The MEPS-HC CAPI instrument was updated to remove those deleted events from display.
Prescribed Medicine (PM). During analysis, AHRQ identified that field interviewers occasionally entered two or more drugs as one entry. While uncommon, these errors caused significant issues during delivery and analysis. In fall 2020, an edit check was implemented for medicines added to the roster. The edit check examined each medicine entered to determine whether it contained two or more strength or form indicators (e.g., two instances of “MG”, “ML”, “CAP”, or “TAB”). Entries that did were flagged to the interviewer as potential errors for review and correction.
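A minimal sketch of such an edit check follows. The report gives only example indicator strings (MG, ML, CAP, TAB), so the token list, the exact counting rule (flagging when any single indicator appears twice or more), and the function name are assumptions.

```python
import re

# Example strength/form indicators mentioned in the report; the production list
# is assumed to be longer.
INDICATORS = ["MG", "ML", "CAP", "TAB"]

def flag_possible_multiple_drugs(entry):
    """Flag a prescribed-medicine entry that may combine two or more drugs,
    e.g. 'LISINOPRIL 10 MG TAB METFORMIN 500 MG TAB'.
    The entry is flagged when any one indicator appears two or more times.
    """
    text = entry.upper()
    for token in INDICATORS:
        if len(re.findall(rf"\b{token}\b", text)) >= 2:
            return True
    return False

print(flag_possible_multiple_drugs("LISINOPRIL 10 MG TAB"))                       # False
print(flag_possible_multiple_drugs("LISINOPRIL 10 MG TAB METFORMIN 500 MG TAB"))  # True
```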
Food Security (FS). The Food Security (FS) section, added in the fall of 2020, consists of a set of 11 questions on food security similar to those administered in the NHIS. This section was first asked in fall 2016 and reinstated for fall 2020.
Health Insurance (HX) and Related Sections. In spring 2020, CAPI was updated to exclude private health insurance with only non-comprehensive types for high deductible and HSA items in HX and OE (Old Employment section) and include private insurance with coverage that had been coded as “Don’t Know” (DK) or “Refused” (RF). Additionally, all Small Business Health Options Program (SHOP) items were removed from the HX section.
Policy Booklet (PB). In spring 2020, a new section was added to request that households submit documents that provide detailed information about cost-sharing between individuals and insurance companies.
Supplements to the CAPI Instrument
Table 2-1 shows the supplements in the CAPI instrument for the rounds administered in calendar year 2020.
Supplement | Round 1 | Round 2 | Round 3 | Round 4 | Round 5 | Round 6 |
---|---|---|---|---|---|---|
Child Health | X | X | ||||
Quality Supplement | X | X | X | X | X | X |
Access to Care | X | X | X | |||
Income | X | X | ||||
Assets | X | |||||
Medical Provider Authorization Forms for HS, OP, and ER Events | X | X | X | X | X | X |
Medical Provider Authorization Forms for MV, TH, HH, and IC Events | X | X | X | X | X | |
Pharmacy Authorization Forms | X | X | X | X | X | |
Your Health and Health Opinions (SAQ/PSAQ) | X | Round 2 follow-up | X | Round 4 follow-up | X | |
Diabetes Care Supplement (DCS) | X | X | ||||
Policy Booklet (PB) | X | X |
Changes to supplemental items included:
1World Health Organization. (2010). Telemedicine: Opportunities and developments in Member States: report on the second global survey on eHealth 2009. Available at goe_telemedicine_2010.pdf (who.int).
Testing for the spring 2020 (Rounds 1/3/5) application was completed in November 2019. Testing of the fall 2020 (Rounds 2/4/6) instrument was completed in June 2020, taking an additional month and two additional builds to fully test all of the changes due to COVID-19. Spring 2020 (Rounds 1/3/5) was the third year of fielding the MEPS Technical Upgrade (Tech Up) CAPI instrument. Many of the new testing approaches developed during 2018 for pre-launch testing of the Tech Up CAPI instrument were adapted and continued in order to maintain a comprehensive testing plan that supported the ongoing instrument development schedule.
CAPI instrument development and testing included multiple programming/testing iterations that each lasted several weeks. As the specifications and programming progressed, a full suite of test scripts was updated. Defined by the design staff and analytic leads at Westat, the regression testing suite of scripted test cases targeted the unique characteristics of approximately 80 percent of the cases fielded. These testing scripts represented common paths through the instrument and covered all rounds of the CAPI instrument.
The testing ensured that CAPI followed the design as intended and assessed whether the layout of each screen, both for a given question and across questions, consistently met the requirements designed to minimize measurement error. Feature testing checked all new features against specifications, including wording, text fills, legal and illegal responses, boundary conditions, and skip patterns. Testers validated every possible variation allowed by the specifications.
Primarily conducted by corporate testers and MEPS project staff, free-form testing focused on design changes in the current instrument build and ensured that any reported instrument bugs had been fixed. Free-form testing was also utilized by trained programming staff to ensure the stability of the CAPI data model and to evaluate the stored data in new or unusual situations to see how well the CAPI application tracked and maintained the associated data changes. Testers routinely pushed array limits, used back-up, changed answers, and used break-off and restart cases to challenge performance boundaries.
Project and systems staff performed all testing in close coordination with the design team. For each of the spring and fall instruments, AHRQ received an alpha delivery and conducted its own testing. The following month, AHRQ received a beta delivery and conducted additional testing.
The test script suite continued to be executed through alpha and beta for the spring and fall testing cycles. Additional testing components, including enhanced integration testing and ad hoc/free-form testing, were conducted. The enhanced integration testing allowed project staff to check electronic Face Sheet information, test the Interviewer Assignment Sheet, and make entries into the electronic record of calls and refusal evaluation form. The ad hoc testing component used information derived from actual cases to verify that all management information on the laptop was brought forward correctly from previous rounds. Using actual case data also allowed staff to check uncommon paths through the MEPS instrument so that specific changes to the questionnaire could be thoroughly tested.
The manuals and the materials for the 2020 field effort were updated as needed to reflect changes to the questionnaire and management systems. Below is a description of the key changes to the materials and procedures.
Instructional Manuals
The field interviewer manual was updated to address changes in field procedures and updates to the Interviewer Management System (IMS).
Electronic Materials
The electronic face sheet in the IMS provides interviewers with information needed to contact their assigned households and familiarize themselves with the composition of the household and relevant details about their prior history with the survey in preparation for coming interviews. The IMS also contains an RU Information module for documenting operational information to help the next round’s interviewer effectively work each case, an RU Contact module for reporting address and telephone number changes identified prior to the CAPI interview, and the Interviewer Assignment Sheet, which supports follow-up for AFs and SAQs not completed at the time of the interview.
There were no changes to the face sheet for the spring 2020 data collection period. Due to the extension of Panel 23, the fall 2020 face sheet was updated to display information from Round 5 for Panel 23 Round 6 cases. The fall 2020 face sheet also included a section for policy booklet follow-up. If there was an outstanding policy booklet request from the previous round, the details of the policy were displayed for the interviewer to discuss with the respondent.
A section was added to the spring 2020 Interviewer Assignment Sheet (IAS) to display policy booklet requests from CAPI. The information in this section was used by the field interviewer to perform the policy booklet data collection tasks described in Section 4.5. The Ship to Receipt Module in the online Basic Field Operations System (BFOS) was also updated to show pending policy booklet requests.
Advance Contact and Other Case Materials
All respondent letters, monthly planners, and self-administered questionnaires were updated with the appropriate year references, and the Income Job Aid was updated with 2017 data. The MEPS logo was added to respondent cooperation materials that did not previously display the logo. Respondent letters, the community authorization letter, and the authorization form booklet were updated with the signature of the new acting director for NCHS.
To facilitate the phone interviewing that became necessary due to COVID-19, the MEPSDOC website built for cost-sharing document collection was modified to provide access to online show cards and other documents that interviewers would normally present to respondents in-person.
Further updates were made to the advance letters for fall 2020 data collection to address the impact of COVID-19 and possible telephone interviewing. Signatures were also updated to reflect the new NCHS director.
For spring 2020 data collection, MEPS abandoned the streamlined interviewer staffing model implemented in 2019 and reverted to the prior staffing approach that aimed to start spring data collection with approximately 400 interviewers.
Based on a projected spring 2020 sample size across all three panels of approximately 24,800 RUs and the estimated number of experienced MEPS interviewers likely to be available at the end of fall 2019 data collection (about 264, including a MEPS travel team of 10 to 12 members), Westat estimated it would need to recruit approximately 136 new interviewers under the standard staffing model. The goal was to start data collection with approximately 400 interviewers actively working during the spring 2020 data collection period.
Recruiting of new field interviewers for 2020 began in October 2019 and continued into January 2020. For the 2020 recruiting, MEPS used the Westat web-based recruitment management system through which applicants apply online. One hundred forty-four interviewers were hired; of those hires, 23 dropped out before training and 121 attended and completed the training program. With the addition of these new trainees, the project began 2020 data collection with a total of 390 interviewers. Of this total, 39 new interviewers and 54 experienced interviewers were lost to attrition during the spring interviewing rounds. An additional 16 new interviewers and 7 experienced interviewers were lost during the fall rounds. Total attrition for the year was 29.7 percent, the highest attrition level MEPS has experienced in the past 5 years. Looking forward to 2021, MEPS plans to recruit enough new interviewers to begin data collection with at least 400 interviewers.
The breakdown of interviewer attrition is shown in Tables 3-1, 3-2, and 3-3. Table 3-1 shows the overall attrition rate during the spring data collection period from 2016 through 2020. Note that the total spring 2020 attrition rate was 23.8 percent, the highest in the past 5 years, driven by high attrition rates among both the 2020 new hires (32.2%) and experienced interviewers (20.1%).
Data collection period | New interviewers lost | Experienced interviewers lost | Total interviewers lost | |||
---|---|---|---|---|---|---|
# | % | # | % | # | % | |
Spring 2016 | 20 | 27.0% | 28 | 7.7% | 48 | 10.9% |
Spring 2017 | 18 | 20.7% | 24 | 6.7% | 42 | 9.4% |
Spring 2018 | 26 | 34.7% | 33 | 9.6% | 59 | 14.0% |
Spring 2019 | 8 | 29.6% | 56 | 17.2% | 64 | 18.2% |
Spring 2020 | 39 | 32.2% | 54 | 20.1% | 93 | 23.8% |
Table 3-2 shows the overall attrition rate during the fall data collection period from 2016 through 2020. Note that the total fall 2020 attrition rate was 8.0 percent, a rate comparable to 4 of the prior 5 years. An unusual staffing phenomenon occurred at the start of fall data collection: 23 interviewers returned from temporary inactive status. These interviewers, who went on inactive status during spring data collection, were counted as part of the spring attrition numbers since they were not engaged in data collection. The attrition figures for fall data collection are based on the MEPS interviewer staffing level at the end of spring data collection; no adjustment was made for this increase in available staff at the start of fall data collection.
As noted previously, the higher attrition rate in fall 2017 resulted from MEPS losing some PSUs due to the new sampling frame.
Data collection period | New interviewers lost | Experienced interviewers lost | Total interviewers lost | |||
---|---|---|---|---|---|---|
# | % | # | % | # | % | |
Fall 2016 | 6 | 11.1% | 24 | 7.1% | 30 | 7.7% |
Fall 2017 | 10 | 14.5% | 44 | 13.1% | 54 | 13.4% |
Fall 2018 | 10 | 20.4% | 16 | 5.1% | 26 | 7.2% |
Fall 2019 | 4 | 21.0% | 20 | 7.4% | 24 | 8.3% |
Fall 2020 | 16 | 19.5% | 8 | 3.7%* | 24* | 8.0% |
Table 3-3 shows the annual attrition rate across new and experienced interviewers from 2016 through 2020. The annual attrition rate for 2020 was 30 percent, the highest rate in the past 5 years. The extremely high rate of attrition among new hires can be attributed in large part to pandemic conditions, when MEPS changed from in-person to telephone interviewing; the expectation among new hires was that they would be conducting in-person interviews. In addition, many MEPS interviewers went on temporary inactive status to care for family members with COVID-19 and to homeschool children.
Data collection period | New interviewers lost | Experienced interviewers lost | Total interviewers lost | |||
---|---|---|---|---|---|---|
# | % | # | % | # | % | |
2016 | 26 | 35.1% | 52 | 14.2% | 78 | 17.8% |
2017 | 28 | 32.2% | 68 | 18.9% | 96 | 21.5% |
2018 | 36 | 48.0% | 49 | 14.2% | 85 | 20.2% |
2019 | 12 | 44.4% | 76 | 23.4% | 88 | 25.0% |
2020 | 55 | 45.0% | 62 | 23.0% | 117 | 30.0% |
The overall structure for training new interviewers in 2020 followed the pattern established in prior years. It began with home study, continued with in-person training conducted in Birmingham, AL, in late January 2020, and ended with completion of a two-part post-classroom home study component.
Pre-Classroom Home Study. This package included a project laptop and an interactive self-paced workbook with exercises and online modules including videos and quizzes administered through Westat’s Learning Management System (LMS). The LMS generated regular reports allowing home office and field management staff to monitor the completion of each trainee’s home study. New hires received their home study package early enough to complete it before the in-person training, but not so early that their familiarity with important study concepts and project terminology would degrade before training began.
In-Person Training. The in-person training closely followed the agenda implemented in spring 2019 with modifications to enhance content on health care event enumeration and event typing and to introduce cost-sharing document collection. For the 7½ days of project-specific training, each trainee was assigned to one of seven training communities staffed by a primary and support trainer and two or more classroom runners. In addition to lectures on study procedures and questionnaire content, trainees completed mock interviews and dyad role-plays using the Round 1, Round 3, and Round 5 questionnaires. Mock interviews are instructor-led, full MEPS interviews whereas “mini-mocks” focus on one or more individual sections of the interview. Dyads pair two trainees together with one trainee taking on the role of the respondent and the other trainee assuming the role of the field interviewer. The mocks and dyads included training on the use of electronic case materials and completion of the electronic Interviewer Assignment Sheet (IAS). Multiple “mini-mock” interviews—interviews with data pre-entered to allow trainees to directly access the specific section to be addressed in a given session—allowed for in-depth sessions on the more complex sections of the CAPI questionnaire such as household re-enumeration and utilization and charge payment without necessitating the completion of a full mock interview or dyad practice. Trainees received instruction and practice in use of the IMS and ways of introducing the survey and answering respondent questions.
The in-person training component maintained the emphasis on interviewer behaviors and interviewing techniques that facilitate complete and accurate reporting. Trainers were instructed to reinforce good interviewing behaviors during mock interviews. Good interviewing behaviors include reading questions verbatim, training respondents to use records to aid recall, actively engaging respondents in the use of show cards, and using active listening and probing skills. Trainers called attention to instances in which interviewers demonstrated such behaviors. To enhance trainee awareness of behaviors that affect data quality, dyad scripts included instructions to take a “time-out” at certain items in the interview to highlight relevant data quality issues.
To ensure training participants had access to additional coaching and practice, three 1-hour structured practice labs were scheduled in the evenings of days 2, 3, and 5 of training. Additional evening help labs were held on the first few days of training to assist trainees with accessing their electronic timesheet to allow for the real-time reporting of time and expenses that is a Westat corporate requirement. One hundred twenty-one new hires successfully completed training.
Bilingual trainees who had been certified by Westat for their proficiency in Spanish were trained alongside other new interviewers. An additional half day of bilingual training was held following the conclusion of regular project training. This session focused on procedures and techniques that are of particular importance to interviewing Spanish-speaking households, including practicing refusal aversion/conversion techniques in Spanish. Bilingual trainees were paired so that they could conduct practice interviews in Spanish. Nineteen new interviewers successfully completed 2020 bilingual training.
Post-Classroom Home Study. The post-classroom home study was administered in two parts. New interviewers left in-person training with the first component of the home study. It contained instruction and exercises on locating techniques and working with proxy respondents, an exercise on secure messaging (BSM), and tips for improving data quality from experienced interviewers. New interviewers were required to complete this training before beginning their fieldwork.
The second component of the post-classroom home study was sent to new interviewers in March. It focused on less-common interviewing situations including case management of related RU members who are identified as being institutionalized and handling NHIS students. Several interactive modules on repeat co-pays and tools and techniques applied to the data quality continuum were administered through Westat’s LMS. A quiz with immediate feedback functionality was also administered through the LMS. Interviewers were instructed to complete this second home study component by mid-March. Daily reports generated by the LMS allowed home office training staff, field managers, and field supervisors to monitor interviewer progress.
Spring 2020 Round 1/3/5 Home Study. The Round 1/3/5 home study in January 2020 followed established formats but was expanded to accommodate cost-sharing document collection protocols. Interviewers completed a two-part home study. Part 1 of the home study focused on CAPI and IMS updates, field procedures, and the introduction of the CAPI condition look-up tool. Part 2 of the home study introduced several LMS-based video modules on the procedures associated with cost-sharing document collection. The 3.5-hour self-paced program contained an instructional memo, independent CAPI practice, e-learning modules, and a quiz.
COVID-19 Pandemic Response. In response to the COVID-19 pandemic, MEPS interviewers began to conduct interviews by telephone. Interviewers received training on the procedures associated with phone interviewing including methods for maintaining data quality. MEPS repurposed the cost-sharing document collection website with respondent materials for phone interviewing. These items included the MEPS show cards, the Records job aid, MEPS Record Keeper, and the informed consent document. Interviewers received training on the use and administration of the materials on the telephone interview website.
Westat also developed training material on the use of personal protective equipment (PPE) and COVID-19 procedural guidelines in anticipation of a return to in-person interviewing.
In-Person Refresher Training. Due to the COVID-19 pandemic, the refresher training scheduled for August 2020 was canceled.
Fall 2020 Round 2/4 Home Study. The Round 2/4 home study in July 2020 followed established formats. The 2-hour self-paced program contained an instructional memo, example materials, and a quiz. Topics included the extension of the rounds in response to the COVID-19 pandemic, additional training on telephone interviewing and the use of the telephone-interviewing website for respondents, COVID mitigation protocols, and follow-up cost-sharing document collection. New interviewers hired in the spring were required to complete a mock interview with their supervisor, field manager, or designated senior interviewer before beginning the fall rounds of data collection.
Weekly Newsletter. In 2020, MEPS continued its field interviewer newsletter in a weekly format. The weekly format provides additional training opportunities in a concise form and the ability to deliver content to the field as needed. Topics include CAPI questionnaire content, field procedures, and answers to field interviewer questions.
This chapter describes the MEPS-HC data collection operations and provides selected results for the five rounds of MEPS-HC interviewing conducted in 2020. Selected comparisons to results of prior years are also presented. Tables showing results for all years of the study are provided in Appendix A.
MEPS data collection management relies on a set of interrelated systems and procedures designed to accomplish three goals: efficiency, data quality, and cost containment. The systems include the Basic Field Operating System (BFOS), which facilitates case management through case assignment, case status and hours reporting, data quality reporting, and interviewer efficiency reporting. Related systems include the computer-assisted recorded interview (CARI) system and the MEPS supervisor dashboard, which was placed into production in 2018. The CARI system allows for review of recordings of selected interview items to assist in the assessment of interviewer performance and question administration. The MEPS supervisor dashboard provides views into daily and weekly management tasks related to the tracking of hours per complete, key alerts from casework in the field, the management of weekly production goals, and a number of metrics designed to facilitate weekly field calls with interviewers regarding hours worked, production, and interview quality. These tools, along with the implementation of models designed to identify cases with a higher propensity for completion and on-hold procedures designed to prevent the overwork of cases in the field, form a comprehensive framework for the management of MEPS data collection.
Due to the COVID pandemic, the procedures followed in the 2020 data collection differed greatly from those of prior years.
As in prior years, respondent contact materials provided respondents with the link to the MEPS website (www.meps.ahrq.gov), a toll-free number to Alex Scott, a study representative at Westat, and the link to the Westat website (www.westat.com). Calls received from the Alex Scott line were logged into the call-tracking system and the appropriate supervisor notified so that he/she could take the proper course of action.
The advance contact calls to Panel 25 Round 1 households were made by a subset of the experienced MEPS interviewers.
Typically, for Round 1 households, interviewers are instructed, with few exceptions, to make initial contact with the household in-person. For later rounds, interviewers are allowed to make initial contacts to set appointments by telephone, so long as the household had been cooperative in prior rounds. In response to COVID-19, all in-person interviewing ceased on March 17, 2020, and all contacts and interviews were conducted over the telephone. Prior to 2020, interviews conducted on the telephone represented only 8 percent of interviews. Guidelines and procedures for changing the mode of data collection from in-person to telephone were developed and distributed on a flow basis between March 17 and April 7, 2020, by panel/round and, within panel/round, by household size. Interviewers started telephone interviewing with Round 5 cases, since those households were most familiar with MEPS and the interview requirements, beginning with small households. Cases continued to be released in priority order, with Round 1 cases the last to be released for telephone interviewing.
Procedures for collecting the medical and pharmacy authorization forms for the Medical Provider Component (MPC) and self-administered questionnaires underwent significant changes due to the pandemic. Since a large portion of interviews were conducted on the telephone after March 17, the MEPS home office developed procedures for interviewers to mail authorization forms to respondents and have them returned to the home office via business reply envelope (BRE). After the telephone interview, the interviewer generated and mailed the forms from home shortly after the interview was completed, along with a BRE and the incentive check, and placed a follow-up phone call within several days. The change in procedure had an impact on the number of missing authorization forms in the spring, as Table 4-1 shows.
Round | RUs missing all AFs before 3/15/20 | RUs missing all AFs after 3/15/20 |
---|---|---|
1 | 34% | 64% |
3 | 19% | 58% |
5 | 16% | 45% |
As shown in the table, the percentage of RUs missing all AFs increased by roughly 30 to 40 percentage points after March 15. In fall 2020, additional protocols were implemented to address the steep decline in returned signed authorization forms, including a procedure for interviewers to place up to three reminder calls to ensure that AFs were completed and returned or ready for pickup.
When conditions were deemed safe, contactless AF pickup was instituted in the fall field period. Additionally, a re-mail effort was introduced in late fall 2020 to send new sets of AFs to RUs where AFs were expected but not received. This was paired with reminder calls for RUs with a larger number of AFs or hospital visits.
MEPS field managers, field directors, and the task leader for field operations continued to manage the field data collection in collaboration with the field supervisors, reinforcing the importance of balancing data quality with production and cost goals across regions. Field staff refer to this collaborative effort as the “No Region Left Behind” approach.
Throughout the year Westat continued to review data for all respondents reported to have been institutionalized in order to identify any individuals who might have been inappropriately classified and, as a result, treated as out of scope for MEPS data collection.
Data Collection Schedule. The sequence for beginning the spring rounds of data collection, most recently adjusted in 2014, was maintained for the spring rounds of 2020. Data collection began with Round 5, followed by Round 3, and then Round 1. For the Round 1 respondents, the later starting date allowed several additional weeks of elapsed time in which respondents could experience health care events to report in their Round 1 interview, with these additional events giving them a more realistic understanding of what to expect in the subsequent rounds of the study. In order to maintain the highest levels of quality of MEPS data, a decision was made to extend Panels 23 and 24 to nine rounds; therefore, there was no exit round in 2020. Additionally, Round 1 was extended into the fall in order to secure cooperation from as many households as possible. The RUs (n=117) that completed the Round 1 interview in the fall did not have a Round 2 interview and continued with the Round 3 interview in spring 2021.
The field period dates for the six rounds and the extended Round 1 data collection conducted in 2020 are shown in Table 4-1A.
Round | Dates | No. of weeks in round |
---|---|---|
1 | January 24 – July 14 | 24 |
2 | July 28 – December 7 | 19 |
3 | January 17 – June 15 | 21 |
4 | July 5 – December 7 | 19 |
5 | January 10 – May 15 | 18 |
6 | August 4 – November 30 | 18 |
Extended Round 1 | August 18 – December 7 | 15 |
Data Quality (DQ) Monitoring. The MEPS DQ field monitoring system and procedures allowed supervisors and field managers to identify interviewers whose work deviated from quality standards and who might need additional coaching on methods for getting respondents to more completely report their health care events. CARI review was further integrated into weekly monitoring activities, with supervisors listening to portions of roughly 1,000 interviews per field period. These reviews were used to reinforce positive interviewing behaviors and techniques, and listening to CARI gave field supervisors direct exposure to interviewing behaviors that needed to be addressed. In some cases, CARI recording results led to interviewers being instructed to stop working until they received retraining, including administering a practice interview to their field supervisor. This effort was supported by DQ alerts built into the supervisor dashboard to identify possible DQ issues related to record use and event entry. Supervisors investigated these issues and retrained interviewers when necessary.
Case On-hold for Work Plan Procedures. The project implemented a model designed to detect cases at risk of overwork or in need of review to determine the viability of a case compared with other pending cases. At-risk cases are automatically placed in an on-hold status for supervisor and field manager review. Only cases with a supervisor-drafted and field manager-approved work plan tailored to achieve a successful interview are removed from the on-hold status and assigned back to an interviewer for additional targeted completion attempts. At various points in the round, cases with an on-hold status are reassessed in the context of remaining pending cases to determine if any should be released to the field for further work. This practice is designed to produce completes with fewer attempts and more efficient use of resources for refusal conversion and locating activities. Poor quality attempts are avoided and field effort is reduced. The reintroduction of cases with a proper work plan is designed to allow for a high rate of response by tailoring work for cases before they are overworked or removed from the field as non-viable. This practice was suspended on March 15, 2020, and for the remainder of the year, in favor of electronic record of calls (EROC) monitoring better suited to telephone interviews. The low cost of telephone attempts allowed for additional attempts while maintaining or reducing overall effort.
Case Potential Listing. The project continued to use a model that predicts the likelihood of completing an interview for a given case ("propensity to complete") relative to other pending cases in a region. The model is designed to identify cases with a high likelihood of completion at that point in the field period relative to other pending cases. The model is dynamic and is updated weekly based on the specific conditions of pending cases at that time. The model was tested in 2019 to determine whether updates were necessary to better fit the data; the existing model remained well suited to current interview conditions and stayed in effect even for telephone interviews.
Information from this model is integrated into BFOS (the system used for case management), providing propensity to complete as part of a comprehensive view of a case for a given week. Supervisors were to instruct interviewers—in the absence of other field information that would dictate otherwise—to attempt these cases during the next production week. Table 4-2 illustrates the potential categories used to classify cases on a weekly basis to promote field efficiency.
Potential categories for pending MEPS cases |
---|
High potential (unworked) |
High potential (worked) |
Appointment |
Low potential |
Low potential refusal |
Remainder |
Locating |
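To make the weekly classification concrete, the following is a minimal illustrative sketch, not the production BFOS logic, of how a pending case might be assigned to one of the potential categories listed in Table 4-2 from a hypothetical weekly propensity score and a few status flags; the 0.5 cutoff and all field names are assumptions introduced only for illustration.

```python
# Illustrative sketch only: assigns a pending case to one of the Table 4-2
# potential categories from a hypothetical weekly propensity score and
# status flags. The 0.5 cutoff and field names are assumptions, not the
# production BFOS rules.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PendingCase:
    propensity: Optional[float]  # weekly model score (0-1); None if not scored
    ever_worked: bool            # any prior contact attempts this round
    has_appointment: bool
    ever_refused: bool
    needs_locating: bool

def potential_category(case: PendingCase, high_cutoff: float = 0.5) -> str:
    if case.needs_locating:
        return "Locating"
    if case.has_appointment:
        return "Appointment"
    if case.propensity is None:
        return "Remainder"
    if case.propensity >= high_cutoff:
        return "High potential (worked)" if case.ever_worked else "High potential (unworked)"
    return "Low potential refusal" if case.ever_refused else "Low potential"

# Example: an unworked case with a high weekly score is queued for the next
# production week.
print(potential_category(PendingCase(0.72, False, False, False, False)))
```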
Table 4-3 provides an overview of the data collection results for Panels 16 through 25, showing sample sizes, average interviewer hours per completed interview, and response rates. Table 4-4 presents the same final response rates, reorganized to facilitate by-round comparisons across panels and years. Both tables include the Round 6 data that are new for 2020.
Panel/round | Original sample | Split cases (movers) | Student cases | Out-of-scope cases | Net sample | Completes | Average interviewer hours/complete | Response rate (%) | Response rate goal | |
---|---|---|---|---|---|---|---|---|---|---|
Panel 16* | Round 1 | 10,417 | 504 | 98 | 555 | 10,940 | 8,553 | 11.4 | 78.2 | 80.0 |
Round 2 | 8,561 | 252 | 42 | 34 | 8,821 | 8,351 | 7.6 | 94.7 | 95.0 | |
Round 3 | 8,351 | 232 | 19 | 28 | 8,574 | 8,256 | 6.4 | 96.1 | 96.0 | |
Round 4 | 8,232 | 155 | 16 | 13 | 8,390 | 8,162 | 6.6 | 97.3 | 97.0 | |
Round 5 | 8,143 | 67 | 13 | 25 | 8,198 | 7,998 | 5.5 | 97.6 | 98.0 | |
Panel 17 | Round 1 | 9,931 | 490 | 92 | 127 | 10,386 | 8,121 | 11.7 | 78.2 | 80.0 |
Round 2 | 8,113 | 230 | 35 | 19 | 8,359 | 7,874 | 7.9 | 94.2 | 95.0 | |
Round 3 | 7,869 | 180 | 15 | 15 | 8,049 | 7,663 | 6.3 | 95.2 | 96.0 | |
Round 4 | 7,656 | 199 | 19 | 30 | 7,844 | 7,494 | 7.4 | 95.5 | 97.0 | |
Round 5 | 7,485 | 87 | 10 | 23 | 7,559 | 7,445 | 6.1 | 98.5 | 98.0 | |
Panel 18 | Round 1 | 9,950 | 435 | 83 | 111 | 10,357 | 7,683 | 12.3 | 74.2 | 80.0 |
Round 2 | 7,691 | 264 | 32 | 16 | 7,971 | 7,402 | 9.2 | 92.9 | 95.0 | |
Round 3 | 7,402 | 235 | 21 | 22 | 7,635 | 7,213 | 7.6 | 94.5 | 96.0 | |
Round 4 | 7,203 | 189 | 14 | 22 | 7,384 | 7,172 | 7.5 | 97.1 | 97.0 | |
Round 5 | 7,163 | 94 | 12 | 15 | 7,254 | 7,138 | 6.2 | 98.4 | 98.0 | |
Panel 19 | Round 1 | 9,970 | 492 | 70 | 115 | 10,417 | 7,475 | 13.5 | 71.8 | 80.0 |
Round 2 | 7,460 | 222 | 23 | 24 | 7,681 | 7,188 | 8.4 | 93.6 | 95.0 | |
Round 3 | 7,168 | 187 | 12 | 17 | 7,350 | 6,962 | 7.0 | 94.7 | 96.0 | |
Round 4 | 6,946 | 146 | 20 | 23 | 7,089 | 6,858 | 7.4 | 96.7 | 97.0 | |
Round 5 | 6,856 | 75 | 7 | 24 | 6,914 | 6,794 | 5.9 | 98.3 | 98.0 | |
Panel 20 | Round 1 | 10,854 | 496 | 85 | 117 | 11,318 | 8,318 | 12.5 | 73.5 | 80.0 |
Round 2 | 8,301 | 243 | 39 | 22 | 8,561 | 7,998 | 8.3 | 93.4 | 95.0 | |
Round 3 | 7,987 | 173 | 17 | 26 | 8,151 | 7,753 | 6.8 | 95.1 | 96.0 | |
Round 4 | 7,729 | 161 | 19 | 31 | 7,878 | 7,622 | 7.2 | 96.8 | 97.0 | |
Round 5 | 7,611 | 99 | 13 | 23 | 7,700 | 7,421 | 6.0 | 96.4 | 98.0 | |
Panel 21 | Round 1 | 9,851 | 462 | 92 | 89 | 10,316 | 7,674 | 12.6 | 74.4 | 80.0 |
Round 2 | 7,661 | 207 | 32 | 17 | 7,883 | 7,327 | 8.5 | 93.0 | 95.0 | |
Round 3 | 7,327 | 166 | 14 | 19 | 7,488 | 7,043 | 7.2 | 94.1 | 96.0 | |
Round 4 | 7,025 | 119 | 14 | 20 | 7,138 | 6,907 | 7.0 | 96.8 | 97.0 | |
Round 5 | 6,914 | 42 | 8 | 34 | 6,930 | 6,778 | 5.9 | 97.8 | 98.0 | |
Panel 22 | Round 1 | 9,835 | 352 | 68 | 86 | 10,169 | 7,381 | 12.8 | 72.6 | 80.0 |
Round 2 | 7,371 | 166 | 19 | 11 | 7,545 | 7,039 | 8.5 | 93.3 | 95.0 | |
Round 3 | 7,071 | 100 | 12 | 19 | 7,164 | 6,808 | 6.7 | 95.0 | 96.0 | |
Round 4 | 6,815 | 91 | 13 | 18 | 6,901 | 6,672 | 6.8 | 96.7 | 97.0 | |
Round 5 | 6,670 | 35 | 7 | 12 | 6,700 | 6,584 | 5.3 | 98.3 | 98.0 | |
Panel 23 | Round 1 | 9,960 | 193 | 46 | 110 | 10,089 | 7,351 | 12.5 | 72.9 | 80.0 |
Round 2 | 7,387 | 106 | 14 | 15 | 7,492 | 6,960 | 8.2 | 92.9 | 95.0 | |
Round 3 | 6,987 | 102 | 11 | 18 | 7,082 | 6,703 | 6.1 | 94.6 | 96.0 | |
Round 4 | 6,704 | 74 | 10 | 12 | 6,776 | 6,522 | 6.6 | 96.2 | 97.0 | |
Round 5 | 6,503 | 34 | 4 | 5 | 6,536 | 6,383 | 5.3 | 97.7 | 98.0 | |
Round 6 | 6,398 | 19 | 10 | 18 | 6,480 | 5,120 | 4.8 | 79.0 | 96.0 | |
Panel 24 | Round 1 | 9,976 | 153 | 43 | 82 | 10,090 | 7,186 | 11.8 | 71.2 | 80.0 |
Round 2 | 7,211 | 98 | 19 | 5 | 7,323 | 6,777 | 7.9 | 92.5 | 95.0 | |
Round 3 | 6,812 | 76 | 9 | 7 | 6,890 | 6,289 | 6.0 | 91.3 | 96.0 | |
Round 4 | 6,335 | 44 | 4 | 13 | 6,370 | 5,446 | 5.1 | 85.5 | 97.0 | |
Round 5 | ||||||||||
Round 6 | ||||||||||
Panel 25 | Round 1 | 10,008 | 184 | 38 | 78 | 10,152 | 6,265 | 10.8 | 61.7 | 80.0 |
Round 2 | 5,907 | 49 | 14 | 12 | 5,958 | 4,677 | 5.5 | 78.5 | 95.0 | |
Round 3 | ||||||||||
Round 4 | ||||||||||
Round 5 |
* Figures in the table are weighted to reflect results of the interim nonresponse subsampling procedure implemented in the first round of Panel 16.
Round 1 | Round 2 | Round 3 | Round 4 | Round 5 | Round 6 | |
---|---|---|---|---|---|---|
2010 | ||||||
Panel 15 | 73.5 | 92.2 | ||||
Panel 14 | 94.9 | 96.8 | ||||
Panel 13 | 97.9 | |||||
2011 | ||||||
Panel 16 | 78.2 | 94.8 | ||||
Panel 15 | 95.4 | 97.0 | ||||
Panel 14 | 98.3 | |||||
2012 | ||||||
Panel 17 | 78.2 | 94.2 | ||||
Panel 16 | 96.1 | 97.3 | ||||
Panel 15 | 98.2 | |||||
2013 | ||||||
Panel 18 | 74.2 | 92.9 | ||||
Panel 17 | 95.2 | 95.5 | ||||
Panel 16 | 97.6 | |||||
2014 | ||||||
Panel 19 | 71.8 | 93.6 | ||||
Panel 18 | 94.5 | 97.1 | ||||
Panel 17 | 98.5 | |||||
2015 | ||||||
Panel 20 | 73.5 | 93.4 | ||||
Panel 19 | 94.7 | 96.7 | ||||
Panel 18 | 98.4 | |||||
2016 | ||||||
Panel 21 | 74.4 | 93.0 | ||||
Panel 20 | 95.1 | 96.8 | ||||
Panel 19 | 98.3 | |||||
2017 | ||||||
Panel 22 | 72.6 | 93.3 | ||||
Panel 21 | 94.1 | 96.8 | ||||
Panel 20 | 96.4 | |||||
2018 | ||||||
Panel 23 | 72.9 | 92.9 | ||||
Panel 22 | 95.0 | 96.7 | ||||
Panel 21 | 97.8 | |||||
2019 | ||||||
Panel 24 | 71.2 | 92.5 | ||||
Panel 23 | 94.6 | 96.2 | ||||
Panel 22 | 98.3 | |||||
2020 | ||||||
Panel 25 | 61.7 | 78.5 | ||||
Panel 24 | 91.3 | 85.5 | ||||
Panel 23 | 97.7 | 79.0 |
Of the data collection rounds conducted in 2020, most showed a decline in response rates compared with the rates from 2018 and 2019. With the shift to telephone data collection in March due to the pandemic, the Round 1 response rate was seriously affected. As a result, Round 1 interviewing continued into the fall data collection period in an effort to raise the response rate. Even with that extension, the Round 1 response rate reached only 61.7 percent, a reduction of 9.5 percentage points from the prior year. Note that the households that completed Round 1 interviews in the fall did not complete a Round 2 interview. Other rounds were differentially affected. Rounds 3 and 5 were affected less than the fall Rounds 2 and 4 because most of their completions took place before the switch to telephone interviewing. However, all rounds experienced some decline. Because of this decline in response, a decision was made to extend Panels 23 and 24 to nine rounds to maintain the sample.
As would be expected given the change in data collection mode for most of 2020 (all but the roughly two and a half months before the pandemic), the average interviewer hours per complete were lower for each panel/round. The biggest impact was seen in Round 1, where the average was 10.8 hours per complete compared with 12.4 hours over the prior 4 years. A similar difference was seen in Round 2, where the average was 5.5 hours compared with 8.3 hours over the prior 4 years.
Components of Response and Nonresponse
Table 4-5 summarizes components of nonresponse for the Round 1 households by panel beginning in 2015. As the table shows, prior to 2020 the components of nonresponse other than refusals (the "not located" and "other" categories) remained relatively stable; in 2020, however, the "other" category increased by 4.6 percentage points. The larger year-to-year changes appear in the percentage of refusals, where increases and decreases align closely with corresponding decreases and increases in the completion rate.
| 2015 P20R1 | 2016 P21R1 | 2017 P22R1 | 2018 P23R1 | 2019 P24R1 | 2020 P25R1 |
---|---|---|---|---|---|---|
Total sample | 11,435 | 10,405 | 10,255 | 10,199 | 10,172 | 10,230 |
Out of scope (%) | 1.0 | 0.9 | 0.8 | 1.1 | 0.8 | 0.8 |
Complete (%) | 73.5 | 74.4 | 72.6 | 72.9 | 70.6 | 61.2 |
Nonresponse (%) | 26.5 | 25.6 | 27.4 | 27.1 | 28.6 | 38.0 |
Refusal (%) | 21.0 | 20.2 | 21.8 | 22.4 | 24.0 | 28.7 |
Not located (%) | 4.3 | 3.7 | 3.9 | 3.1 | 3.1 | 3.2 |
Other nonresponse (%) | 1.2 | 1.7 | 1.7 | 1.7 | 1.5 | 6.1 |
Tables 4-6 through 4-13 summarize results for additional aspects of the 2020 data collection. Because Round 1 is the most difficult of all the rounds, the presentation focuses primarily on Panel 25, Round 1.
| 2015 P20R1 | 2016 P21R1 | 2017 P22R1 | 2018 P23R1 | 2019 P24R1 | 2020 P25R1 |
---|---|---|---|---|---|---|
Original NHIS sample (N) | 10,854 | 9,851 | 9,835 | 9,839 | 9,864 | 9,866 |
Percent complete in NHIS | 80.6 | 77.6 | 81.0 | 80.4 | 84.2 | 89.3 |
Percent partial complete in NHIS | 19.4 | 22.4 | 19.0 | 19.6 | 15.8 | 10.7 |
Percent complete for NHIS completes | 75.9 | 77.3 | 75.4 | 75.4 | 73.5 | 63.5 |
Percent complete for NHIS partial completes | 63.1 | 64.8 | 62.0 | 63.6 | 60.3 | 46.8 |
Note: Figures shown are based on original NHIS sample and exclude reporting units added to the sample as “splits” and “students.”
NHIS Completion Status
Each year the MEPS sample includes a number of households classified in the NHIS as "partial completes," in which the interviewer was able to complete part, but not all, of the full NHIS interview. Given the NHIS redesign implemented in 2018, the partial completes in the 2020 MEPS sample included some cases that completed only the roster module of the NHIS. The MEPS experience has been that for many of these NHIS cases, the difficulty experienced by the NHIS interviewer carries over to the MEPS interview: the MEPS response rate for the NHIS partial completes is substantially lower than for the NHIS completes. As noted in Chapter 1, for the 2020 sample, AHRQ repeated the step taken since 2012 of sampling the NHIS partial completes in the "White/other" category at a lower rate than the NHIS completes.
The upper portion of Table 4-6 shows the proportion of partial completes in the sample over recent years. Across all domains, the proportion of the 2020 sample classified as partial complete was significantly lower than in any of the previous years shown in the table, including the previous 2 years. The lower portion of the table shows the persistent and substantial difference in response rate between these two components of the sample. Among the cases originally delivered from the NHIS (that is, excluding new reporting units discovered during the MEPS interviewing), the response rate for the NHIS partial completes has been around 12 percentage points lower than that for the NHIS completes. In 2020, that difference increased to 16.7 percentage points.
Sample Domain
Table 4-7 breaks out response information for the NHIS completes and partial completes by sample domain categories, including the veterans domain introduced in Panel 24. Unlike Table 4-6, Table 4-7 does include reporting units added to the sample during Round 1 data collection; it shows that the differential in response rates between the NHIS partial completes and full completes persists across all of the domains. The difference across the full 2020 sample was 15.8 percentage points, with NHIS partial completes responding at a lower rate in all domains. Within the individual domains, the difference between the response rates for the NHIS completes and the NHIS partial completes was greatest for the White/other domain, at 22.4 percentage points.
Domain/NHIS Status | Net sample (N) | Complete (%) | Refusal (%) | Not located (%) | Other nonresponse (%) |
---|---|---|---|---|---|
Asian | 742 | 59.0 | 29.5 | 5.0 | 6.5 |
NHIS complete | 618 | 60.7 | 28.3 | 4.7 | 6.3 |
NHIS partial complete | 124 | 50.8 | 35.5 | 6.5 | 7.3 |
Black | 1,426 | 64.4 | 24.2 | 4.6 | 6.8 |
NHIS complete | 1,207 | 66.6 | 23.0 | 4.1 | 6.4 |
NHIS partial complete | 219 | 52.1 | 31.1 | 7.8 | 9.1 |
Hispanic | 1,878 | 64.8 | 25.5 | 4.3 | 5.4 |
NHIS complete | 1,578 | 66.6 | 23.9 | 4.0 | 5.5 |
NHIS partial complete | 300 | 55.0 | 34.0 | 6.0 | 5.0 |
White/other | 6,106 | 60.5 | 31.0 | 2.4 | 6.2 |
NHIS complete | 5,644 | 62.2 | 29.6 | 2.3 | 6.0 |
NHIS partial complete | 462 | 39.8 | 47.8 | 3.5 | 8.9 |
All groups | 10,152 | 61.7 | 28.9 | 3.2 | 6.2 |
NHIS complete | 9,047 | 63.4 | 27.6 | 3.0 | 6.0 |
NHIS partial complete | 1,105 | 47.6 | 39.4 | 5.3 | 7.7 |
Note: Includes reporting units added to sample as “splits” and “students” from original NHIS households, which were given the same “complete” or “partial complete” designation as the original household.
Refusals and Refusal Conversion
Table 4-8 summarizes the results of refusal conversion efforts by panel. The rate of "ever refused" for RUs in Panel 25 increased by 2.2 percentage points to 34.8 percent, its highest level among the panels shown. The percentage of refusing RUs converted in Round 1 was also the lowest among the panels shown, with only 12.3 percent of cases converted.
Panel | Net sample (N) | Ever refused (%) | Converted (%) | Final refusal rate (%) | Final response rate (%) |
---|---|---|---|---|---|
Panel 19 | 10,418 | 30.1 | 23.3 | 22.4 | 71.8 |
Panel 20 | 11,318 | 30.1 | 29.2 | 21.0 | 73.5 |
Panel 21 | 10,316 | 29.1 | 29.0 | 20.2 | 74.4 |
Panel 22 | 10,169 | 30.1 | 27.6 | 21.8 | 72.6 |
Panel 23 | 10,089 | 31.3 | 25.6 | 22.4 | 72.9 |
Panel 24 | 10,090 | 32.6 | 23.4 | 24.2 | 71.2 |
Panel 25 | 10,152 | 34.8 | 12.3 | 28.9 | 61.7 |
Tracing and Locating
Table 4-9 shows the results of locating efforts for households that required tracing during the Round 1 field period, by panel. The percentage of households that required some tracing in 2020 (11.7 percent) dropped 0.9 percentage points from 2019; the final rate of households not located after tracing efforts was slightly higher than in 2018 and 2019. The 2020 "not located" rate was within the 3.0 to 4.3 percent range seen across the seven panels shown in the table.
Panel | Total sample (N) | Ever traced (%) | Not located (%) |
---|---|---|---|
Panel 19 | 10,532 | 19.5 | 4.1 |
Panel 20 | 11,435 | 14.0 | 4.3 |
Panel 21 | 10,405 | 12.8 | 3.7 |
Panel 22 | 10,228 | 13.0 | 3.9 |
Panel 23 | 10,199 | 12.7 | 3.0 |
Panel 24 | 10,172 | 12.6 | 3.0 |
Panel 25 | 10,230 | 11.7 | 3.2 |
Interview Length
Table 4-10 shows the mean length (in minutes) of interviews conducted without interruption in a single session for Panels 19-25. Timings for the rounds of data collection conducted in 2020 are comparable to the prior year, with the Panel 25 Round 1 interview having the longest mean Round 1 time (89.0 minutes) across all the panels displayed in the table.
Round | Panel 19 | Panel 20 | Panel 21 | Panel 22 | Panel 23 | Panel 24 | Panel 25 |
---|---|---|---|---|---|---|---|
Round 1 | 85.5 | 76.4 | 75.5 | 79.9 | 78.1 | 79.5 | 89.0 |
Round 2 | 92.3 | 86.3 | 85.3 | 88.8 | 88.2 | 87.0 | 89.7 |
Round 3 | 94.5 | 89.7 | 93.4 | 93.0 | 92.6 | 98.5 | |
Round 4 | 84.6 | 80.5 | 82.7 | 84.3 | 86.8 | 86.2 | |
Round 5 | 84.1 | 85.3 | 76.0 | 78.8 | 78.7 | ||
Round 6 | 88.4 |
Mean Contact Attempts Per Case
Table 4-11 shows mean contact attempts, by mode and NHIS completion status, for all cases in Round 1 of Panels 23-25. Overall, the number of contact attempts required per case in Panel 25 increased significantly from 2019, an overall increase of 7.1 attempts per case that is reflected among both the NHIS completes and the partial completes. This increase is chiefly attributed to the challenges of the COVID-19 pandemic and the shift to telephone interviewing. As in prior years, in Panel 25 the NHIS partial complete cases required substantially greater effort than the NHIS completes, roughly 2.9 additional contact attempts per household.
Contact type | Panel 23, Round 1 | Panel 24, Round 1 | Panel 25, Round 1 | ||||||
---|---|---|---|---|---|---|---|---|---|
All RUs | Complete | Partial | All RUs | Complete | Partial | All RUs | Complete | Partial | |
N | 9,839 | 7,913 | 1,926 | 9,864 | 8,306 | 1,558 | 9,866 | 8,814 | 1,052 |
% of all RUs | 100 | 80.4 | 19.6 | 100 | 84.2 | 15.8 | 100 | 89.3 | 10.7 |
In-person | 6.2 | 6.0 | 7.2 | 5.5 | 5.4 | 6.3 | 2.6 | 2.5 | 2.6 |
Telephone | 1.5 | 1.4 | 1.7 | 1.3 | 1.2 | 1.6 | 9.7 | 9.5 | 11.6 |
Total | 8.2 | 7.9 | 9.5 | 7.3 | 7.1 | 8.5 | 14.4 | 14.1 | 17.0 |
During the Respondent Forms section of the MEPS CAPI interview, interviewers are prompted to ask respondents to sign the authorization forms (AFs) needed to conduct the Medical Provider Component of MEPS. Authorization forms are requested for each unique person-provider pairing identified during the interviews as a source of care to a key member of the household. Medical provider AFs are requested for physicians seen in an office-based setting; for inpatient, outpatient, or emergency room care received in a hospital; for care received from a home health agency; and for certain stays in long-term care institutions. Pharmacy AFs are requested for each pharmacy from which a household member obtained prescription medicines.
Table 4-12 shows round-by-round signing rates for the medical provider AFs for Panels 18 through 25. Signing rates dropped in 2020 as a result of the move to telephone interviewing due to the COVID-19 pandemic.
Panel/round | Authorization forms requested | Authorization forms signed | Signing rate (%) | |
---|---|---|---|---|
Panel 18 | Round 1 | 1,677 | 1,266 | 75.5 |
Round 2 | 22,714 | 18,043 | 79.4 | |
Round 3 | 20,728 | 15,827 | 76.4 | |
Round 4 | 17,092 | 13,704 | 80.2 | |
Round 5 | 15,448 | 11,796 | 76.4 | |
Panel 19 | Round 1 | 2,189 | 1,480 | 67.6 |
Round 2 | 22,671 | 17,190 | 75.8 | |
Round 3 | 20,582 | 14,534 | 70.6 | |
Round 4 | 17,102 | 13,254 | 77.5 | |
Round 5 | 15,330 | 11,425 | 74.5 | |
Panel 20 | Round 1 | 2,354 | 1,603 | 68.1 |
Round 2 | 25,334 | 18,479 | 72.9 | |
Round 3 | 22,851 | 15,862 | 69.4 | |
Round 4 | 18,234 | 14,026 | 76.9 | |
Round 5 | 16,274 | 12,100 | 74.4 | |
Panel 21 | Round 1 | 2,037 | 1,396 | 68.5 |
Round 2 | 22,984 | 17,295 | 75.2 | |
Round 3 | 20,802 | 14,898 | 71.6 | |
Round 4 | 16,487 | 13,110 | 79.5 | |
Round 5 | 20,443 | 16,247 | 79.5 | |
Panel 22 | Round 1 | 2,274 | 1,573 | 69.2 |
Round 2 | 22,913 | 17,530 | 76.5 | |
Round 3 | 26,436 | 19,496 | 73.7 | |
Round 4 | 23,249 | 18,097 | 77.8 | |
Round 5 | 17,171 | 12,168 | 70.9 | |
Panel 23 | Round 1 | 1,982 | 1,533 | 77.3 |
Round 2 | 29,576 | 21,850 | 73.9 | |
Round 3 | 23,365 | 14,575 | 62.4 | |
Round 4 | 19,220 | 13,483 | 70.2 | |
Round 5 | 17,569 | 10,903 | 62.1 | |
Round 6 | 12,701 | 8,002 | 63.0 | |
Panel 24 | Round 1 | 2,285 | 1,306 | 57.2 |
Round 2 | 24,755 | 15,865 | 64.1 | |
Round 3 | 22,657 | 11,522 | 50.9 | |
Round 4 | 14,612 | 7,716 | 52.8 | |
Panel 25 | Round 1 | 3,110 | 1,242 | 39.9 |
Round 2 | 15,259 | 7,292 | 47.8 |
Calculation of the round-by-round collection rate for the medical provider authorization forms is based on all forms requested during a round. The rates calculated for Rounds 2-5 include forms fielded but not signed in an earlier round (nonresponse) as well as forms that were fielded in an earlier round and signed, but rendered obsolete because the person had another health event with the provider after the date on which the original form was signed.
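Stated as a formula consistent with the figures in Table 4-12, the round-level signing rate is simply the signed share of all forms requested in that round (for example, Panel 18, Round 1: 1,266 / 1,677 = 75.5 percent):

```latex
\text{signing rate}_{r} = \frac{\text{AFs signed in round } r}{\text{AFs requested in round } r} \times 100
```

where, for Rounds 2-5, the requested count includes newly generated forms plus the unsigned and re-fielded obsolete forms carried over from earlier rounds.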
Table 4-13 shows signing rates for pharmacy authorization forms for Panels 18 through 25. Pharmacy authorization forms are requested in Rounds 2 through 5, with follow-up for nonresponse in subsequent rounds similar to that for medical provider authorization forms. The signing rates for the pharmacy authorization forms have generally shown a pattern of decline since Panel 18; however, there are minor fluctuations by Panel. The drop in signing rates for 2020 can be attributed to the move to telephone interviewing as a result of the COVID-19 pandemic.
Authorization form (AF) signing rates dropped when MEPS had to move to telephone interviewing due to the COVID-19 pandemic. Not being able to review the AFs with respondents in person and obtain signatures at that time, and instead relying on respondents to mail their signed forms to MEPS, had a negative impact on the return rate for authorization forms. To address this shortfall during the fall 2020 data collection period, interviewers made up to three follow-up calls to work with the household to sign the AFs and to schedule a contactless in-person retrieval.
Additionally, MEPS decided to generate a new set of AFs if the requested AFs had not been received and processed within 21 or more days from the interview completion date. These forms were printed each week and mailed from the home office to respondents. A cover letter explained the need for signed AFs and provided instructions for signing them. Respondents could return the forms in a BRE that was included in the mailing or arrange for contactless pick-up by an interviewer.
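A minimal sketch of the weekly re-mail selection described above is shown below, assuming a simple day-count rule; the field names and data structure are illustrative assumptions, and the actual home office systems are more involved.

```python
# Illustrative weekly selection of authorization forms to regenerate and mail
# from the home office: forms not received and processed within 21 or more
# days of the interview completion date. Field names are assumptions.
from datetime import date

def select_afs_for_remail(pending_afs: list[dict], today: date,
                          min_days: int = 21) -> list[dict]:
    return [af for af in pending_afs
            if not af["received"]
            and (today - af["interview_completed"]).days >= min_days]

pending = [
    {"af_id": "AF-001", "received": False, "interview_completed": date(2020, 9, 1)},
    {"af_id": "AF-002", "received": True,  "interview_completed": date(2020, 9, 1)},
    {"af_id": "AF-003", "received": False, "interview_completed": date(2020, 10, 5)},
]
# Run as of mid-October: only AF-001 has been outstanding for 21+ days.
print(select_afs_for_remail(pending, date(2020, 10, 15)))
```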
Panel/round | Authorization forms requested | Authorization forms signed | Signing rate (%) | |
---|---|---|---|---|
Panel 18 | Round 2 | 10,977 | 8,755 | 79.8 |
Round 3 | 9,757 | 7,573 | 77.6 | |
Round 4 | 8,526 | 6,858 | 80.4 | |
Round 5 | 7,918 | 6,173 | 78.0 | |
Panel 19 | Round 2 | 10,749 | 8,261 | 76.9 |
Round 3 | 9,618 | 6,902 | 71.8 | |
Round 4 | 8,557 | 6,579 | 76.9 | |
Round 5 | 7,767 | 5,905 | 76.0 | |
Panel 20 | Round 2 | 12,074 | 8,796 | 72.9 |
Round 3 | 10,577 | 7,432 | 70.3 | |
Round 4 | 9,099 | 6,945 | 76.3 | |
Round 5 | 8,312 | 6,339 | 76.3 | |
Panel 21 | Round 2 | 10,783 | 7,985 | 74.1 |
Round 3 | 9,540 | 6,847 | 71.8 | |
Round 4 | 8,172 | 6,387 | 78.2 | |
Round 5 | 6,684 | 5,336 | 79.8 | |
Panel 22 | Round 2 | 10,510 | 7,919 | 75.4 |
Round 3 | 8,053 | 5,953 | 73.9 | |
Round 4 | 7,284 | 5,670 | 77.8 | |
Round 5 | 8,048 | 5,726 | 71.1 | |
Panel 23 | Round 2 | 8,834 | 6,514 | 73.8 |
Round 3 | 9,614 | 6,205 | 64.5 | |
Round 4 | 8,486 | 5,900 | 69.5 | |
Round 5 | 8,067 | 5,101 | 63.2 | |
Round 6 | 5,668 | 3,418 | 60.3 | |
Panel 24 | Round 2 | 10,265 | 6,676 | 65.0 |
Round 3 | 9,096 | 4,831 | 53.1 | |
Round 4 | 7,100 | 3,636 | 51.2 | |
Panel 25 | Round 2 | 6,783 | 3,180 | 46.9 |
Self-administered questionnaires (SAQs) are requested from key adult household members in Rounds 2 and 4. Forms that are not collected in Rounds 2 and 4 are requested again in Rounds 3 and 5. In fall 2020, SAQs were also requested from Panel 23 Round 6 respondents. Table 4-14 shows both the round-specific response rates and the combined rates after the follow-up round is completed. The response rate after follow-up remained in the mid- to high-80 percent range until 2018, when it dropped to the high-70s to low-80s. Overall procedures for the distribution and collection of hard-copy materials have not changed, with the exception of additional concentrated follow-up; additional evaluation is underway to understand and attempt to improve the hard-copy rates. Follow-up for the 2019 data year occurred in the first half of 2020, which was impacted by the COVID-19 pandemic and the move to telephone interviewing; even so, the combined rates for 2019 remained in line with those of previous years. The response rates for the initial SAQ requests in Panel 23 Round 6, Panel 24 Round 4, and Panel 25 Round 2 were significantly lower than the rates for initial requests in prior years, a result of the telephone interviewing necessitated by COVID-19.
Panel/round | SAQs requested | SAQs completed | SAQs refused | Other nonresponse | Response rate (%) | |
---|---|---|---|---|---|---|
Panel 19 | Round 2 | 12,664 | 10,047 | 1,014 | 1,603 | 79.3 |
Round 3 | 2,306 | 1,050 | 694 | 615 | 44.5 | |
Combined, 2014 | 12,664 | 11,097 | 1,708 | 2,218 | 87.6 | |
Round 4 | 11,782 | 9,542 | 1,047 | 1,175 | 81.0 | |
Round 5 | 2,131 | 894 | 822 | 414 | 42.0 | |
Combined, 2015 | 11,782 | 10,436 | 1,869 | 1,589 | 88.6 | |
Panel 20 | Round 2 | 14,077 | 10,885 | 1,223 | 1,966 | 77.3 |
Round 3 | 2,899 | 1,329 | 921 | 649 | 45.8 | |
Combined, 2015 | 14,077 | 12,214 | 2,144 | 2,615 | 86.8 | |
Round 4 | 13,068 | 10,572 | 1,127 | 1,371 | 80.9 | |
Round 5 | 2,262 | 1,001 | 891 | 370 | 44.3 | |
Combined, 2016 | 13,068 | 11,573 | 2,018 | 1,741 | 88.6 | |
Panel 21 | Round 2 | 13,143 | 10,212 | 1,170 | 1,761 | 77.7 |
Round 3 | 2,585 | 1,123 | 893 | 569 | 43.4 | |
Combined, 2016 | 13,143 | 11,335 | 2,063 | 2,330 | 86.2 | |
Round 4 | 12,021 | 9,966 | 1,149 | 906 | 82.9 | |
Round 5 | 2,078 | 834 | 884 | 360 | 40.1 | |
Combined, 2017 | 12,021 | 10,800 | 2,033 | 1,266 | 89.8 | |
Panel 22 | Round 2 | 12,304 | 9,929 | 1,086 | 1,289 | 80.7 |
Round 3 | 2,287 | 840 | 749 | 698 | 36.7 | |
Combined, 2017 | 12,304 | 10,769 | 1,835 | 1,987 | 87.5 | |
Round 4 | 11,333 | 8,341 | 1,159 | 1,833 | 73.6 | |
Round 5 | 2,090 | 811 | 896 | 383 | 38.8 | |
Combined, 2018 | 11,333 | 9,152 | 2,055 | 2,216 | 80.8 | |
Panel 23 | Round 2 | 12,349 | 8,711 | 1,364 | 1,289 | 70.5 |
Round 3 | 2,364 | 819 | 907 | 638 | 34.6 | |
Combined, 2018 | 12,349 | 9,530 | 2,271 | 1,927 | 77.2 | |
Round 4 | 11,290 | 8,554 | 1,515 | 1,221 | 75.8 | |
Round 5 | 2,711 | 983 | 923 | 805 | 36.3 | |
Combined, 2019 | 11,290 | 9,537 | 2,438 | 2,026 | 84.5 | |
Round 6 | 8,537 | 4,732 | 682 | 3,123 | 55.4 | |
Panel 24 | Round 2 | 12,027 | 8,726 | 1,641 | 1,660 | 72.6 |
Round 3 | 2,810 | 860 | 832 | 1,118 | 30.6 | |
Combined, 2019 | 12,027 | 9,586 | 2,473 | 2,778 | 79.7 | |
Round 4 | 9,257 | 4,247 | 786 | 4,224 | 45.9 | |
Panel 25 | Round 2 | 8,109 | 3,555 | 529 | 4,025 | 43.8 |
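As a check on how the combined rates in Table 4-14 are derived, the short sketch below reproduces the Panel 19 "Combined, 2014" figures from the round-level counts; it assumes the combined rate is the sum of initial and follow-up completes divided by the initial-round requests, which matches the published values.

```python
# Reproduce the Panel 19 "Combined, 2014" SAQ response rate from Table 4-14.
# Assumption: combined completes = Round 2 completes + Round 3 follow-up
# completes, with Round 2 requests as the denominator.
requested_r2 = 12_664
completed_r2 = 10_047
completed_r3_followup = 1_050

combined_completes = completed_r2 + completed_r3_followup  # 11,097
combined_rate = 100 * combined_completes / requested_r2    # 87.6 percent
print(f"{combined_completes:,} completes, {combined_rate:.1f}% combined rate")
```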
In Rounds 3 and 5, key adult household members who are reported as having been diagnosed with diabetes are asked to complete a short self-administered questionnaire, the Diabetes Care Supplement (DCS). Forms not completed for pickup at the time of the interviewer's visit are followed up by telephone in the latter stages of Rounds 3 and 5, but unlike the SAQ, there is no follow-up in the subsequent round for forms not collected in the round when first requested. Response rates for the DCS for Panels 18 through 24 are shown in Table 4-15. Completion rates for the DCS have shown a modest but relatively steady decline over time, with a sharper drop in 2018. A further drop was seen in 2020, attributable to follow-up efforts impeded by the COVID-19 pandemic and the move to telephone interviewing.
Panel/round | DCSs requested | DCSs completed | Response rate (%) | |
---|---|---|---|---|
Panel 18 | Round 3 | 1,362 | 1,182 | 86.8 |
Round 5 | 1,342 | 1,187 | 88.5 | |
Panel 19 | Round 3 | 1,272 | 1,124 | 88.4 |
Round 5 | 1,316 | 1,144 | 87.2 | |
Panel 20 | Round 3 | 1,412 | 1,190 | 84.5 |
Round 5 | 1,382 | 1,174 | 84.9 | |
Panel 21 | Round 3 | 1,422 | 1,170 | 82.5 |
Round 5 | 1,481 | 1,123 | 75.8 | |
Panel 22 | Round 3 | 1,453 | 1,074 | 73.9 |
Round 5 | 1,348 | 1,018 | 75.5 | |
Panel 23 | Round 3 | 1,464 | 1,101 | 75.2 |
Round 5 | 1,350 | 933 | 69.1 | |
Panel 24 | Round 3 | 1,350 | 843 | 62.4 |
During the spring 2020 field collection period, MEPS asked households to submit documents that provide detailed information about cost-sharing between individuals and insurance companies. The policy information requested gives additional insight into details of the health insurance plan, including the plan's deductibles, maximum out-of-pocket limits, costs for different services (such as primary care visits versus visits to specialists or for diagnostic tests), and the cost difference between generic and brand drugs. Depending on the type of insurance reported, there were two main documents of interest: an Evidence of Coverage (EOC) or a Summary of Benefits and Coverage (SBC). Cost-sharing documents were requested from Panel 25 Round 1 and Panel 24 Round 3 households that reported private insurance, Medicare Advantage, or Medicare Part D prescription drug plans during the MEPS interview. For each plan for which cost-sharing documents were requested and received at the home office, the respondent received $30. This payment was sent directly to the respondent from the home office.
The CAPI instrument displayed the eligible plans for document collection. Westat developed four unique hard-copy guides, which we refer to as “protocol folders.” Each type of protocol folder provided information and instructions on how to obtain cost-sharing documents specifically geared toward the type of eligible health insurance coverage reported during the interview. Having four protocol versions targeted to different insurance types made it easier for MEPS households to get the correct cost-sharing document for their insurance plan. For each eligible plan, CAPI identified the protocol folder to be provided and field interviewers were required to review the protocol folder with the respondent.
Although field interviewers were not expected to collect the cost-sharing documents at the end of each interview, they were trained to emphasize to the respondent or policyholder that the requested documentation related to their plan should have a coverage period that included the current interview date.
MEPS households were offered three options for submitting cost-sharing documents: (1) returning them to the MEPS interviewer, (2) uploading an electronic version to a specified website (www.MEPSDOCS.org), or (3) sending them by mail.
Interviewers were asked to make up to three follow-up calls to respondents to check on their progress obtaining insurance documents and address any questions they may have. A follow-up call script was provided for these calls and the outcome of each call was recorded in an EROC.
All cost-sharing documents received by MEPS were receipted in the MEPS Receipt Control system. Electronic documents uploaded by the respondent were stored on a server at Westat, and hard-copy documents received by MEPS were scanned by MEPS Receipt Control staff and then uploaded to the same server. This enabled the receipt system to access all of the documents the same way. The $30 payment was sent to the respondent after a document was uploaded either by the respondent or by MEPS Receipt Control staff for scanned hard-copy documents.
During fall 2020 data collection, a reminder letter was sent to respondents with an outstanding policy request from the spring rounds. A copy of the reminder letter for each outstanding policy was included in the RU folder and each outstanding policy was listed on the face sheet. CAPI did not prompt for the outstanding documents; rather, interviewers were asked to refer to the reminder letter and face sheet to review the request with their respondents and encourage them to send the documents to MEPS.
Interviewer performance was monitored through validation case review using GPS, CARI, and telephone interviews. The purpose of validation is to verify that the correct individual was contacted for the interview and that the interview was conducted according to MEPS-approved procedures.
Generally, all completed cases are validated by first examining the GPS data stored and encrypted on the laptop. If a case could not be properly validated because of missing data, or the GPS information could not verify that the interviewer was at the respondent address or another documented location at the time of the interview, the case was then reviewed in the CARI system. However, beginning in mid-March of 2020, the majority of cases were completed by telephone due to the COVID-19 pandemic. GPS data therefore could not be relied on for validation, and CARI review was the main mode of validation in 2020. If a case could not be validated in CARI due to poor quality or missing CARI data, the case was referred for telephone validation. All interviews completed in less than 30 minutes were referred for telephone validation.
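The validation hierarchy described above (GPS review first, then CARI, then telephone, with short interviews always referred to telephone validation) can be summarized as a simple decision rule. The sketch below is only an illustrative reading of that hierarchy; the function and flag names are assumptions, not the actual validation system.

```python
# Illustrative decision rule for selecting a validation method, following the
# hierarchy described in the text. Interviews under 30 minutes always go to
# telephone validation. Names are assumptions for illustration only.
def validation_method(interview_minutes: float,
                      conducted_by_phone: bool,
                      gps_confirms_location: bool,
                      cari_usable: bool) -> str:
    if interview_minutes < 30:
        return "telephone validation"
    if not conducted_by_phone and gps_confirms_location:
        return "validated via GPS review"
    if cari_usable:
        return "validated via CARI review"
    return "telephone validation"

# A typical 2020 case completed by telephone with usable CARI recordings:
print(validation_method(55, True, False, True))  # validated via CARI review
```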
In the spring 2020 rounds, 17,947 completed cases (97 percent) were validated. At least 41 percent of each interviewer's completes were validated, with an average of 95 percent of each interviewer's completes validated. In the fall 2020 rounds, because over 98 percent of cases were completed by telephone, validation relied on CARI and telephone validation. This resulted in fewer cases being validated in the fall. Nonetheless, 10,590 completed cases (67.6 percent) were validated, and at least 15 percent of completes were validated for each interviewer.
In addition to validating cases, MEPS field supervisors and managers conduct observations as part of a comprehensive mentoring process. Generally, MEPS uses technical solutions in place of in-person observations; however, there are specific needs met by specialized observation. As much as possible, observations are conducted in the early weeks of data collection so that problems can be detected and corrected as quickly as possible and interviewers given feedback on ways to improve specific interviewing skills. While CARI offers a high-quality portal for evaluating interviewers on question administration, observations, particularly of newly hired staff, allow for assessment of the full range of interviewer skills including respondent contact, trip planning, gaining cooperation, and interviewer-respondent interactions that cannot be captured through CARI and other report mechanisms. In addition, the observer serves as an on-site resource in situations where remedial training is necessary. Observation forms are processed and reviewed at the home office to determine the need for individual and field-wide follow-up on specific skills. In 2020, 20 observations were conducted prior to March 13, at which time observations could no longer be done due to the COVID-19 pandemic.
To comply with the requirement of reporting incidents involving loss or theft of hard-copy materials with respondent personally identifiable information (PII) or laptops, field staff continued to use an automated loss reporting system to report incidents. As before, reported incidents were subsequently tracked through the use of a documentation log that was provided to AHRQ annually. A security incident report was also filed for each confirmed incident with the Westat IRB.
A total of 26 incidents of lost or stolen laptops or hard-copy PII were reported in 2020. Five of the reported incidents involved MEPS laptops that were reported stolen or lost; all but one laptop was recovered. The password-protected laptops were shut down at the time of the loss, and because MEPS laptops are full-disk encrypted, respondent identity was not at risk.
Twenty-six reported incidents involved suspected or confirmed loss of hard-copy materials containing respondent PII or a breach of confidentiality. Eight of the twenty-six reported hard-copy losses were located intact and uncompromised; following extensive searches, no documents were recovered in the other eighteen reported losses. The PII hard-copy losses included authorization forms, Self-Administered Questionnaires (SAQs), Preventive Care Self-Administered Questionnaires (PSAQs), Diabetes Care Supplements (DCSs), and Policy Booklets. All households with PII loss were notified. The AHRQ Information Security Manager alerts the HHS Privacy Incident Response Team (PIRT) of all MEPS-reported PII incidents.
The home office supports the data collection effort in several important ways. One phase of activity supports the launch of each new round of data collection; another phase supports the field operation while data collection is in progress. These two phases of activity are described in this chapter.
Hard-copy materials were assembled prior to data collection for cases being fielded in Rounds 2 through 6. Clerical staff created an RU folder for each case being fielded and inserted any authorization forms and SAQs that were printed for the case. No hard-copy case materials are generated for Round 1 cases, so RU folders were not created for those cases prior to data collection.
Supervisors received a Supervisor Assignment Log listing all of the cases to be released in their region for each wave of cases. For the first wave of each round, supervisors used this log to assign cases to their interviewers. They entered the ID of the interviewer to be assigned each case and sent the log back to the home office. Home office staff then shipped the RU folders directly to the interviewers. A file with the assignments was also sent to programming staff to make the electronic assignments in the BFOS field management system.
For later waves, the prepared RU folders were sent to the field supervisors, who made the electronic assignments in their Supervisor Management System (SMS) and shipped the hard-copy materials to their interviewers.
Prior to the start of data collection for each period, interviewers connected remotely to the home office to download the CAPI software update for the upcoming rounds and received a home study training package to prepare them for interviewing. Field interviewers also received a replenishment of supplies at the start of the rounds.
Advance mailings to all respondent households were prepared and mailed by the home office staff. Addresses were first standardized and sent through the National Change of Address (NCOA) database to obtain the most current addresses for mailing. Any mail returned as undeliverable was recorded and the appropriate supervisor was notified. Requests to re-mail the Round 1 advance package to households who reported not receiving it were prepared and mailed by home office staff.
Respondent Contacts. Respondent contacts are an important component of home office support for the MEPS data collection effort. Printed materials mailed to respondents contain an email address and toll-free telephone number that respondents can use to contact the project with questions, with requests to make or cancel interview appointments, or to decline participation in the study. Home office staff received and initiated the response to all respondent contacts, forwarding information from respondent calls to the field supervisors, who initiated the appropriate follow-up and informed the home office of the results within 24 hours of notification. Table 5-1 shows the number and percent of RUs who made calls to the respondent hotline in the spring and fall rounds of 2016-2020. There was a significantly higher percentage of calls to the hotline in 2020.
Original sample size | Number of calls | Calls as a percent of sample size | |
---|---|---|---|
Round 1 | |||
2016 – Panel 21 Round 1 | 9,851 | 301 | 3.1% |
2017 – Panel 22 Round 1 | 9,835 | 346 | 3.5% |
2018 – Panel 23 Round 1 | 9,846 | 383 | 3.9% |
2019 – Panel 24 Round 1 | 9,864 | 343 | 3.5% |
2020 – Panel 25 Round 1 | 9,880 | 586 | 5.9% |
Rounds 3/5 | |||
2016 – Panel 19 Round 5/Panel 20 Round 3 | 14,844 | 547 | 3.7% |
2017 – Panel 20 Round 5/Panel 21 Round 3 | 14,939 | 533 | 3.6% |
2018 – Panel 21 Round 5/Panel 22 Round 3 | 13,922 | 467 | 3.4% |
2019 – Panel 22 Round 5/Panel 23 Round 3 | 13,594 | 486 | 3.6% |
2020 – Panel 23 Round 5/Panel 24 Round 3 | 13,241 | 592 | 4.5% |
Rounds 2/4 | |||
2016 – Panel 20 Round 4/Panel 21 Round 2 | 15,392 | 605 | 3.9% |
2017 – Panel 21 Round 4/Panel 22 Round 2 | 14,395 | 518 | 3.6% |
2018 – Panel 22 Round 4/Panel 23 Round 2 | 14,123 | 524 | 3.7% |
2019 – Panel 23 Round 4/Panel 24 Round 2 | 13,844 | 531 | 3.8% |
2020 – Panel 23 Round 6/Panel 24 Round 4/Panel 25 Round 2 | 18,480 | 1,163 | 6.3% |
Table 5-2 shows the number and types of calls received on the respondent hotline during 2019 and 2020. As in prior years, a substantial portion of the Round 1 calls were from refusals, with a much smaller proportion of refusals and a higher proportion of appointment requests in the later rounds.
Reason for call | Spring 2019 (Panel 24 Round 1, Panel 23 Round 3, Panel 22 Round 5) | Fall 2019 (Panel 24 Round 2, Panel 23 Round 4) | | | |
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 5 | 1.5 | 36 | 7.4 | 30 | 5.6 |
Appointment | 59 | 17.2 | 328 | 67.5 | 344 | 64.8 |
Request callback | 39 | 11.4 | 56 | 11.5 | 56 | 10.5 |
No message | 2 | 0.6 | 4 | 0.8 | 7 | 1.3 |
Other | 2 | 0.6 | 4 | 0.8 | 0 | 0.0 |
Proxy needed | 2 | 0.6 | 6 | 1.2 | 11 | 2.1 |
Request SAQ help | 0 | 0.0 | 2 | 0.4 | 5 | 0.9 |
SAQ refusal | 0 | 0.0 | 48 | 9.9 | 0 | 0.0 |
Special needs | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Refusal | 185 | 53.9 | 0 | 0.0 | 78 | 14.7 |
Willing to participate | 49 | 14.3 | 2 | 0.4 | 0 | 0.0 |
Total | 353 | 486 | 531 |
Reason for call | Spring 2020 (Panel 25 Round 1, Panel 24 Round 3, Panel 23 Round 5) | Fall 2020 (Panel 25 Round 2, Panel 24 Round 4, Panel 23 Round 6) | | | |
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2, 4, and 6 | ||||
N | % | N | % | N | % | |
Address/telephone change | 5 | 0.9 | 37 | 6.3 | 28 | 2.4 |
Appointment | 142 | 24.2 | 332 | 56.1 | 278 | 23.9 |
Request callback | 102 | 17.4 | 121 | 20.4 | 276 | 23.7 |
No message | 22 | 3.8 | 18 | 3.0 | 60 | 5.2 |
Other | 2 | 0.3 | 5 | 0.8 | 5 | 0.4 |
Proxy needed | 6 | 1.0 | 3 | 0.5 | 10 | 0.9 |
Request SAQ help | 0 | 0.0 | 1 | 0.2 | 35 | 3.0 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 1 | 0.1 |
Special needs | 0 | 0.0 | 0 | 0.0 | 1 | 0.1 |
Refusal | 209 | 35.7 | 62 | 10.5 | 203 | 17.5 |
Willing to participate | 98 | 16.7 | 13 | 2.2 | 266 | 22.9 |
Total | 586 | 592 | 1,163 |
Monitoring Production. Home office staff monitored production, cost, and DQ, and they provided reports and feedback to field managers and supervisors for review and follow-up. Each week they generated and distributed reports to AHRQ showing weekly and cumulative figures on field production, response rate, and costs.
Home Office Support. Refusal letters were generated and mailed by home office staff as requested by the field. Home office staff also responded to supply requests from the field, replenishing interviewer and supervisor stocks of materials as needed.
Receipt Control. As interviewers completed cases, they transmitted the data electronically and shipped the case folders containing any hard-copy documents to the home office receipt operation. Interviewers shipped all material containing PII via Federal Express, which facilitates tracking of late or lost shipments. When preparing a shipment to the home office receipt department, interviewers used the Ship to Receipt module to indicate exactly what materials were included in the package and recorded the FedEx tracking number. This information was sent directly to the receipt control system so it was known what materials were expected. For interviews completed by phone due to the COVID-19 pandemic and for which contactless pick-up of hard-copy documents could not be arranged, interviewers provided a BRE for the respondent to send their documents directly to the home office. Contents of the cases received at the home office were reviewed and recorded in the receipt system. Authorization forms were edited for completeness and scanned into an image database. When a problem was found in an authorization form, the problem was documented and feedback was sent to the field supervisor to review with the interviewer. All self-administered questionnaires, including SAQs/PSAQs, and DCSs, were receipted and sent out for TeleForm scanning.
Helpdesk Support. The MEPS CAPI Helpdesk again provided technical support for field interviewing activities during 2020. Helpdesk staff were available 7 days a week to help field staff resolve CAPI, Field Management System, transmission, and laptop problems. Incoming calls were documented for follow-up as needed to resolve individual issues and to identify issues reported by multiple interviewers. The CAPI Helpdesk serves as the coordinating point for tracking and shipping all field laptops, monitoring field laptop assignment, and coordinating laptop repair.
This chapter briefly describes the activities that supported Westat’s data delivery work during the year and identifies the principal files related to data year 2018 delivered in 2020.
Adhering to the schedule for delivery of the key MEPS public use files is of paramount importance to the project. Throughout 2020, data processing activities to support the major file deliveries for the year proceeded simultaneously along several different delivery paths, with activity focused separately on each of the Panels for the annual Full Year Files. As in past years, the project used a set of comprehensive data delivery schedules to guide management of the effort. The schedules integrate key dates for the data collection, data capture, coding, editing and imputation, weights construction, and documentation production tasks. These schedules provide a framework for assessing the potential impact of proposed changes at the start of each processing cycle and for coordinating the succession of processes that comprise the delivery effort.
The data quality control (DQC) system consists of both a consolidated database that preserves data as returned from the field and a DQC-specific database that shows the current values of data following any required updates. DQC technicians access the data through a secure portal and review and edit the data using the Blaise database model that is used in the field for data collection. All DQC work occurs at a "case" level. The DQC system automatically creates a unique "issue" for each instance of text entered as a comment, along with the comment category that the field interviewer selected for the text entry. As cases are loaded into DQC, each comment and its category are checked by a Natural Language Processing (NLP) algorithm that identifies the most likely category. During processing, data technicians have the opportunity to accept or update this category. Technicians then follow standardized procedures for data review and editing based on the comment category.
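As an illustration of the kind of automatic category suggestion described above, the sketch below uses a simple keyword heuristic to propose a comment category for technician review. The actual DQC system uses an NLP algorithm; the keywords, function names, and fallback rule here are assumptions, with category labels drawn from Table 6-2.

```python
# Toy keyword-based suggestion of a comment category for technician review.
# The production DQC system uses an NLP algorithm; the keyword lists and
# fallback rule here are illustrative assumptions. Labels follow Table 6-2.
KEYWORDS = {
    "Prescribed Medicines": ["prescription", "refill", "pharmacy", "medicine"],
    "Health Care Events": ["visit", "hospital", "emergency", "appointment"],
    "Health Insurance": ["insurance", "plan", "premium", "medicaid", "medicare"],
    "Employment": ["job", "employer", "work"],
}

def suggest_category(comment: str, interviewer_category: str) -> str:
    text = comment.lower()
    scores = {cat: sum(word in text for word in words)
              for cat, words in KEYWORDS.items()}
    best, hits = max(scores.items(), key=lambda kv: kv[1])
    # Fall back to the interviewer-selected category when nothing matches;
    # a technician accepts or updates the suggestion during processing.
    return best if hits > 0 else interviewer_category

print(suggest_category("RU member had an extra emergency room visit in June",
                       "Other"))  # Health Care Events
```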
The DQC system also runs a series of programmatic checks and assigns a new "issue" for each instance that triggers a consistency or edit check. These checks are designed to ensure that data changed during editing conform fully to the rules of the CAPI instrument before the data are released. During spring 2020, 12.3 percent of cases received from the field included a comment (Table 6-1). Cases with any issue (a field comment or a triggered consistency check) totaled 28.0 percent. For fall 2020, 13.3 percent of cases received from the field included a comment, while cases with any issue totaled 13.1 percent.
Field period | Cases processed | Cases with at least 1 comment | % cases with comments | Cases with at least 1 issue | % cases with issues | Not actionable (comments) | % NA comments |
---|---|---|---|---|---|---|---|
Spring 2020 | 18,531 | 2,280 | 12.3 | 5,191 | 28.0 | 1,663 | 52.7 |
Fall 2020 | 15,688 | 2,081 | 13.3 | 2,057 | 13.1 | 1,772 | 53.4 |
Field interviewers must select one of 10 categories for each comment text string; after selecting a category, CAPI provides category-specific guidance on information to include in the comment (e.g., RU member name, event date, etc.). They receive training to help identify the most meaningful category and avoid over-use of the category “Other.” Table 6-2 shows the number of comments made in each category as assigned by the NLP algorithm and confirmed by the data technicians.
Total number of comments by category | # | % |
---|---|---|
1. RU/RU Member | 334 | 5.1 |
2. RU Member Refusal | 106 | 1.6 |
3. Condition | 133 | 2.0 |
4. Health Care Events | 3,405 | 52.1 |
5. Glasses/Contact Lenses | 43 | 0.7 |
6. Other Medical Expenses | 88 | 1.3 |
7. Prescribed Medicines | 806 | 12.3 |
8. Employment | 448 | 6.9 |
9. Health Insurance | 675 | 10.3 |
10. Other | 496 | 7.6 |
Total | 6,534 |
Transformation is the process of extracting data from the Blaise data models optimized for data collection and writing them to the data exchange format (DEx) required by the data delivery teams. The transformation involves two logical activities: first, transforming the structure of the data from the data collection structure to the DEx structure, and second, transforming the format of the data from Blaise to Oracle. The resulting data, now stored in Oracle using the DEx structure, serve as input to the analytic editing, variable construction, public use files (PUFs), and other file deliveries. The goal is to disrupt the delivery activities as little as possible while providing data of the highest quality as efficiently as possible.
As shown in Figure 6-1, data transformation has four distinct layers. The metadata layer contains all the variable definitions (including names and their tables, segments, or blocks) and the transformation logic, sometimes known as plain-language transformation specifications. The analytic group leads at Westat are typically responsible for the metadata and the transformation logic.
Figure 6-1. Blaise to DEx transformation
Based on the metadata, two specifications are developed. The first describes the DEx structure using a formal schema, which is expressed as a set of SQL statements to create the empty Oracle DEx database. The second specification is the detailed transformation specification. Each variable is assigned to a set of similar variables called a transformation class. A unique transformation class is defined by the information needed to specify the transformation. For instance, some variables simply need to be copied to an appropriate location in the DEx. These are known as passthrough variables and belong to the Passthrough class. Code All That Apply variables are transformed based on the value selected by the interviewer, so the specification requires an additional DEx variable for each possible value. Code All That Apply is another transformation class. All of the classes are developed through discussions with AHRQ and are sent to AHRQ for approval.
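To illustrate the two transformation classes named above, the sketch below shows how a Passthrough variable might be copied to its DEx location unchanged while a Code All That Apply response expands into one DEx variable per possible value. The variable names and value lists are hypothetical; the real transformation specifications are considerably more detailed.

```python
# Hypothetical illustration of two transformation classes described in the
# text. A Passthrough variable is copied to its DEx location unchanged; a
# Code All That Apply variable expands into one DEx indicator per possible
# value. Names and value lists are illustrative, not MEPS specifications.
def transform_passthrough(record: dict, source: str, target: str) -> dict:
    return {target: record.get(source)}

def transform_code_all_that_apply(record: dict, source: str,
                                  target_prefix: str, values: list[str]) -> dict:
    selected = set(record.get(source, []))
    return {f"{target_prefix}_{v}": int(v in selected) for v in values}

blaise_record = {"RESPAGE": 47, "COVTYPES": ["PRIVATE", "MEDICARE"]}
dex_row = {}
dex_row.update(transform_passthrough(blaise_record, "RESPAGE", "AGE"))
dex_row.update(transform_code_all_that_apply(
    blaise_record, "COVTYPES", "COV",
    ["PRIVATE", "MEDICARE", "MEDICAID", "OTHER"]))
print(dex_row)  # {'AGE': 47, 'COV_PRIVATE': 1, 'COV_MEDICARE': 1, ...}
```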
The third layer is the transformation (or programming) layer. Using the specifications just described, the data are read from the Blaise database in the data collection structure, the transformation logic is applied, and a data file for each DEx table is written. The DEx tables are generally identical to the legacy Cheshire segments, such as BASE, HOME, or PERS. This set of intermediate data files is known as pre-DEx and has the same structure as the DEx database, but all files are in the Blaise format. Next, the format is transformed from the Blaise format to Oracle, writing to the Single-Round Database (SRD). The single-round structure is necessary because the data collection instrument does not contain all data for all rounds for a given case; rather, only the data required to field the case in that specific round are included. The SRD data are then merged into the existing data, yielding a cumulative Multi-Round Database (MRD).
The final layer relates the different databases to selected key deliverables. This layer is intentionally general. For example, while the MRD is the source for the PUF deliveries, there are many additional steps to edit the data, construct variables, and deliver a data file and codebook.
TeleForm, a commercial off-the-shelf (COTS) software system for intelligent data capture and image processing, was used in 2020 to capture data collected in the DCS and the SAQ. TeleForm software reads the form image files and extracts data according to the project specifications. Supporting software checks the data for conformity with project specifications and flags data values that violate the validation rules for review and resolution.
Coding refers to the process of converting data items collected in text format to pre-specified numeric codes. The plan for the 2020 coding effort (for items collected during the calendar years 2018 and early 2019) was described in Deliverables 20.506, .507, and .508. For the MEPS-HC, five types of information require coding; these are described in the sections that follow.
Condition and Prescribed Medicine Coding
In 2020, coding was performed on the conditions and prescribed medicine text strings reported by household respondents for calendar year 2019. An automated system enabled coders to easily search for and assign the appropriate ICD-10-CM code (for conditions) or Generic Product Identifier (GPI) code (for medicines). The system supports the verifier’s review of all codes and, as needed, correction of the coder’s initial decision. For the prescribed medicine coding, a pharmacist provided a further review of text strings questioned by the verifier, uncodable text strings, foreign medicines, and compound drugs. All coding actions are tracked in the system and error rates calculated weekly. Both the condition and prescribed medicine coding efforts were staffed by three coders.
During the 2020 coding cycle, coding managers continued to refine a number of new and revised procedures and processes implemented for the coding of 2018 data in 2019. These revisions were the result of many months of collaboration between AHRQ and Westat in evaluating all aspects of the coding processes for household-reported conditions, prescribed medicines, and sources of payment, including updating and maintaining the authority tables and developing tools and resource documents to facilitate these tasks. Additionally, Westat deployed a new web-based coding system for condition and prescribed medicine coding to replace the Access database previously used. The new system better supports downstream processing activities and aligns with other web-based systems used across other components of MEPS. All aspects of the coding work are supported by a number of scheduled quality control checks before, during, and after each coding cycle.
In 2020, medical conditions were coded to the greatest specificity indicated by the text string. The fully specified ICD-10-CM code is needed to accurately match to the Clinical Classifications Software (CCS). A total of 10,112 unique strings were manually coded, and a 3-year authority table was constructed with AHRQ-approved code assignments. The overall error rate for coders was 1 percent, below the contractual error rate goal of 2 percent.
Prescription medicine text strings for data year 2019 were coded to the set of GPI codes associated with the Master Drug Data Base (MDDB) maintained by Medi-Span, part of Wolters Kluwer. The codes characterize medicines by therapeutic class, form, and dosage. To support the assignment of codes to less specified and ambiguous text strings, AHRQ developed procedures for assigning partial GPI codes and higher level drug categories; these were implemented in 2017 and continued through the 2019 coding cycle. AHRQ also developed a set of exact and inexact matching programs to reduce the number of prescribed medicine strings sent for manual coding. Westat's implementation of these matching programs reduces the number of prescribed medicine text strings sent for manual coding by approximately 40 percent each year. The matching programs are reviewed and approved each year. A total of 9,016 strings from 2019 data were manually coded. In a process similar to that for condition text strings, the prescription medicine text strings undergo two rounds of unduplication to identify the unique strings to be coded; AHRQ's exact and inexact matching programs are then run to further reduce the number of strings to be coded. The overall coding error rate (across all coders) was 1 percent, 1 percentage point lower than the contractual goal of 2 percent. As with conditions, all prescription text strings/codes were reviewed by a verifier, with additional review of selected strings provided by a pharmacist.
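The exact-matching step that reduces the volume of strings sent for manual coding might look roughly like the sketch below, which matches normalized reported strings against a prior-year authority table; the normalization rule, field names, and sample GPI value are assumptions, and AHRQ's actual programs also perform inexact matching.

```python
# Rough sketch of an exact-match pass against a prior-year authority table of
# already-coded prescribed medicine strings. Strings that match inherit the
# approved GPI code; the rest go to manual coding. Normalization, field names,
# and the sample code are illustrative assumptions.
def normalize(text: str) -> str:
    return " ".join(text.lower().split())

def exact_match_pass(reported_strings: list[str],
                     authority: dict[str, str]) -> tuple[dict[str, str], list[str]]:
    auto_coded, needs_manual = {}, []
    for raw in reported_strings:
        key = normalize(raw)
        if key in authority:
            auto_coded[raw] = authority[key]  # inherit approved GPI code
        else:
            needs_manual.append(raw)          # send to coders
    return auto_coded, needs_manual

authority_table = {"lisinopril 10 mg": "GPI-HYPOTHETICAL-0001"}
coded, manual = exact_match_pass(["Lisinopril 10 MG", "new compound drug"],
                                 authority_table)
print(coded, manual)
```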
Source of Payment Coding
Source of payment (SOP) information is collected in both the household and the medical provider components. In the HC charge payment section of the CAPI instrument, the names of the sources of payment are collected in three places: when the bill was paid by a source identified in response to a direct question about payment (REIMNAM); when the bill was sent to a source other than the respondent and the respondent names that source (WHOBILL#); and in response to a question about a direct payment source for prescription medicines (SRCNAME). The responses are coded to one of the source of payment categories under which health care expenditures are reported in the MEPS public use files.
The SOP Coding Guidelines manual is updated each year before the start of the annual coding cycle, submitted for AHRQ approval, and distributed to the coders. Health insurance showcards and data from the CAPI health insurance planfill file are available to coders as resource materials. Since the Medical Provider Component (MPC) of MEPS uses the same set of source of payment codes as the Household Component, coding rules and decisions are coordinated with the MPC contractor to ensure consistency in the coding. Before the start of the coding cycle, Westat compares RTI’s authority tables with Westat’s to identify any inconsistencies. AHRQ adjudicates these to ensure the authority tables from the two contractors are aligned.
Each year, the source of payment text strings extracted from the reference year data are matched to a historical file of previously coded SOP text strings to create a file of matched strings with suggested or “matched” codes. These match-coded strings are reviewed by coders and verified or modified as needed. This review is required because insurance companies change their product lines and coverage offerings frequently, so the source of payment code for a given text string (e.g., the name of an insurance company or plan) can change from year to year. For example, from one year to the next an insurer or insurance product may participate in or drop out of state exchanges; may add or drop Part D, dental, or vision coverage; may add Medicare Advantage plans in addition to Medicaid HMOs; or may gain or lose state contracts as a Medicaid service provider. Strings that do not match a string in the history table are researched and assigned an appropriate SOP code by coding staff.
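As a rough illustration of this match-coding step (the history-table structure, plan names, and SOP code values shown are placeholders, not the actual MEPS files or codes), suggested codes from the prior year can be attached to normalized strings and routed either to coder review or to a research queue:

```python
# Hedged sketch of the match-coding step; the history table, plan names, and
# SOP code values below are placeholders, not actual MEPS data or codes.

def match_code(strings, history):
    """Attach last year's code as a suggested code for coder review; strings
    with no prior code go to a research queue for fresh coding."""
    for_review, research_queue = [], []
    for s in sorted({" ".join(x.lower().split()) for x in strings}):
        if s in history:
            for_review.append({"string": s, "suggested_code": history[s]})
        else:
            research_queue.append(s)
    return for_review, research_queue

history = {"acme health plan": "PRIVATE", "state medicaid hmo": "MEDICAID"}  # placeholders
suggested, to_research = match_code(["ACME Health Plan", "New Leaf Insurance Co."], history)
print(len(suggested), "match-coded for review;", len(to_research), "sent to research")
```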
SOP coding during 2020 was for the payment sources reported for 2019 events. For cases in which the bill was paid by a source identified in response to a direct question about payment (REIMNAM), a total of 2,663 previously coded source of payment text strings were reviewed and updated as needed; after unduplication of the strings reported for 2019, coders reviewed and coded 1,242 strings. For cases in which the bill was sent to a source other than the respondent and the respondent named that source (WHOBILL#), coders reviewed and coded 2,852 strings. For text strings reported as direct payers for prescription medicines (SRCNAME), coders reviewed and coded 609 new text strings.
Industry and Occupation Coding
Industry and occupation (I&O) coding is performed for MEPS by the Census Bureau using the Demographic Surveys Division’s (DSD’s) computer-assisted I&O codes, which can be cross-walked to the 2007 North American Industry Classification System (NAICS) and the 2010 Standard Occupational Classification (SOC) system. The codes characterize the jobs reported by household respondents and are released annually on the FY JOBS file. During 2020, 13,102 jobs were coded for the 2019 JOBS file.
During the 2020 coding cycle, AHRQ expanded the scope of work to include coding data year 2019 text strings to multiple versions of the NAICS and SOC; specifically, the new data runs included the 2007 NAICS with the 2000 SOC, the 2012 NAICS with the 2010 SOC, and the 2017 NAICS with the 2018 SOC. This was a one-time request.
GEO Coding
The Westat Geographic Information Systems (GIS) division geocodes household addresses, assigning latitude and longitude coordinates as well as other variables such as state and county Federal Information Processing Standards (FIPS) codes, Metropolitan Statistical Area (MSA) status, Designated Market Area, and Census Place. RU-level data are expanded to the person level and delivered to AHRQ as part of the set of “Master Files” sent yearly. These data are not included in a PUF, but some variables are used for the FY weights processing.
During the calendar year 2020 coding cycle, 16,824 unique address records for full year reporting units were processed.
The primary objective of MEPS is to produce a series of data files for public release each calendar year. The inter-round processing, editing, and variable construction tasks all serve to prepare these public use files. Each file addresses one or more aspects of the U.S. civilian non-institutional population’s access to, use of, and payments for health care.
The Oracle system has a separate database for each Panel/year combination. For a Panel’s “second year” database, the static and data delivery tables are also brought forward from the “first year” database for reference, to ensure longitudinal continuity in the PUFs.
Due to the pandemic, Panel 23 is being extended through Round 9. To accommodate this, there is a “third year” database for Panel 23. Thus, three databases were created in fall of 2020 to represent the 2020 data collected to date. The remainder of this section focuses on the 2019 databases.
After the data are in the Oracle delivery database, each analytical team begins by performing basic edit checks on the data. These edits ensure the data conform to the CAPI instrument’s flow as well as to AHRQ’s analytical needs, and they can be run in SAS, using SAS data sets extracted from the delivery database, or in SQL directly on the delivery database. Problems identified through the basic edits may require updates to the data, which may be accomplished in one of two ways:
Once all the edits have been completed for an analytical team, and QC frequencies and univariates have been approved, notification is sent to all other analytical teams so that work can be coordinated in those areas.
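For illustration only, the following sketch shows the general shape of a flow (skip-pattern) edit check of the kind described above, which could equally be run in SAS or SQL against data extracted from the delivery database; the record layout, variable names (SAWDOC, DOCVISITS), and values are invented and do not correspond to actual MEPS fields.

```python
# Invented example records; field names and values are illustrative only.
rows = [
    {"ruid": "0001", "SAWDOC": 1, "DOCVISITS": 3},
    {"ruid": "0002", "SAWDOC": 2, "DOCVISITS": 0},
    {"ruid": "0003", "SAWDOC": 2, "DOCVISITS": 4},   # breaks the skip pattern
]

# Flag records where a "no visits" answer (SAWDOC = 2) is paired with a
# nonzero visit count -- the kind of flow inconsistency a basic edit catches.
problems = [r for r in rows if r["SAWDOC"] == 2 and r["DOCVISITS"] != 0]
for r in problems:
    print(f"RU {r['ruid']}: DOCVISITS={r['DOCVISITS']} inconsistent with SAWDOC=2")
```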
Analytical groups at AHRQ work with Westat analysts to define the variables of interest for inclusion on the PUF and other key data deliveries. Variables are named according to standard naming conventions, and once the list is approved, descriptive specifications are written to define each variable and to provide detailed information for programming.
Specifications are written at two levels. The high-level specification is a descriptive specification intended to document the concept of the variable and provide high-level information regarding the variable construction requirements. The detailed-level specifications contain the details required to develop programming code for building the variables. Specifications are written and sent to AHRQ for approval. Once approval is received for the specification, program development can proceed for that variable.
Specifications guide programming development, and once programs have been written, code reviews compare newly developed code against specifications to identify problems in either code or specifications. This program development process includes a number of steps and checkpoints to ensure that all new programs meet all specification requirements:
This model is followed for the development of all new programs required for data delivery. For mature programs that are re-used in subsequent deliveries with only minor modifications, the process is streamlined appropriately while still ensuring accuracy and efficiency for all programs.
Public Use File Deliveries
The principal files delivered during calendar year 2020 are listed below.
Ancillary File Deliveries
In addition to the principal data files delivered for public release each year, the project also produces a number of ancillary files for delivery to AHRQ. These include an extensive series of person and family-level weights, “raw” data files reflecting MEPS data at intermediate stages of capture and editing, and files generated at the end of each round or as needed to support analysis of both substantive and methodological topics. A comprehensive list of the files delivered during 2020 appears in Appendix A.
Medical Provider Component (MPC) Files
During each year’s processing cycle, Westat also creates files for the MPC contractor and, in turn, receives data files back from the MPC. As in prior years, Westat provided sample files for the MPC in three waves, with the first two waves delivered while HC data collection was still in progress. In preparing the sample files to be delivered in 2021 for MPC collection of data about 2020 health events, Westat again applied the program developed in 2014 for de-duplicating the sample of providers. This process, developed in consultation with AHRQ, was designed to reduce the number of duplicate providers reported from the household data collection.
Early in 2020, following completion of MPC data collection and processing for 2018 events, Westat received the files containing data collected in the MPC, with linkages matching events collected in the MPC to events collected in the HC. In processing at Westat, matched events from the MPC served as the primary source for imputing expenditure variables for the 2018 events. A similar file of prescribed medicines was also delivered to support matching and imputation of prescribed medicine expenditures at AHRQ. Timely and well-coordinated data handoffs between Westat and the MPC contractor are critical to the timely delivery of the full-year expenditure files, and with each additional year of cooperation the handoffs between the MPC and HC have gone increasingly smoothly.
Data collection period | RU-level sample size |
---|---|
January-June 1996 | 10,799 |
Panel 1 Round 1 | 10,799 |
July-December 1996 | 9,485 |
Panel 1 Round 2 | 9,485 |
January-June 1997 | 15,689 |
Panel 1 Round 3 | 9,228 |
Panel 2 Round 1 | 6,461 |
July-December 1997 | 14,657 |
Panel 1 Round 4 | 9,019 |
Panel 2 Round 2 | 5,638 |
January-June 1998 | 19,269 |
Panel 1 Round 5 | 8,477 |
Panel 2 Round 3 | 5,382 |
Panel 3 Round 1 | 5,410 |
July-December 1998 | 9,871 |
Panel 2 Round 4 | 5,290 |
Panel 3 Round 2 | 4,581 |
January-June 1999 | 17,612 |
Panel 2 Round 5 | 5,127 |
Panel 3 Round 3 | 5,382 |
Panel 4 Round 1 | 7,103 |
July-December 1999 | 10,161 |
Panel 3 Round 4 | 4,243 |
Panel 4 Round 2 | 5,918 |
January-June 2000 | 15,447 |
Panel 3 Round 5 | 4,183 |
Panel 4 Round 3 | 5,731 |
Panel 5 Round 1 | 5,533 |
July-December 2000 | 10,222 |
Panel 4 Round 4 | 5,567 |
Panel 5 Round 2 | 4,655 |
January-June 2001 | 21,069 |
Panel 4 Round 5 | 5,547 |
Panel 5 Round 3 | 4,496 |
Panel 6 Round 1 | 11,026 |
July-December 2001 | 13,777 |
Panel 5 Round 4 | 4,426 |
Panel 6 Round 2 | 9,351 |
January-June 2002 | 21,915 |
Panel 5 Round 5 | 4,393 |
Panel 6 Round 3 | 9,183 |
Panel 7 Round 1 | 8,339 |
July-December 2002 | 15,968 |
Panel 6 Round 4 | 8,977 |
Panel 7 Round 2 | 6,991 |
January-June 2003 | 24,315 |
Panel 6 Round 5 | 8,830 |
Panel 7 Round 3 | 6,779 |
Panel 8 Round 1 | 8,706 |
July-December 2003 | 13,814 |
Panel 7 Round 4 | 6,655 |
Panel 8 Round 2 | 7,159 |
January-June 2004 | 22,552 |
Panel 7 Round 5 | 6,578 |
Panel 8 Round 3 | 7,035 |
Panel 9 Round 1 | 8,939 |
July-December 2004 | 14,068 |
Panel 8 Round 4 | 6,878 |
Panel 9 Round 2 | 7,190 |
January-June 2005 | 22,548 |
Panel 8 Round 5 | 6,795 |
Panel 9 Round 3 | 7,005 |
Panel 10 Round 1 | 8,748 |
July-December 2005 | 13,991 |
Panel 9 Round 4 | 6,843 |
Panel 10 Round 2 | 7,148 |
January-June 2006 | 23,278 |
Panel 9 Round 5 | 6,703 |
Panel 10 Round 3 | 6,921 |
Panel 11 Round 1 | 9,654 |
July-December 2006 | 14,280 |
Panel 10 Round 4 | 6,708 |
Panel 11 Round 2 | 7,572 |
January-June 2007 | 21,326 |
Panel 10 Round 5 | 6,596 |
Panel 11 Round 3 | 7,263 |
Panel 12 Round 1 | 7,467 |
July-December 2007 | 12,906 |
Panel 11 Round 4 | 7,005 |
Panel 12 Round 2 | 5,901 |
January-June 2008 | 22,414 |
Panel 11 Round 5 | 6,895 |
Panel 12 Round 3 | 5,580 |
Panel 13 Round 1 | 9,939 |
July-December 2008 | 13,384 |
Panel 12 Round 4 | 5,376 |
Panel 13 Round 2 | 8,008 |
January-June 2009 | 22,960 |
Panel 12 Round 5 | 5,261 |
Panel 13 Round 3 | 7,800 |
Panel 14 Round 1 | 9,899 |
July-December 2009 | 15,339 |
Panel 13 Round 4 | 7,670 |
Panel 14 Round 2 | 7,669 |
January-June 2010 | 23,770 |
Panel 13 Round 5 | 7,576 |
Panel 14 Round 3 | 7,226 |
Panel 15 Round 1 | 8,968 |
July-December 2010 | 13,785 |
Panel 14 Round 4 | 6,974 |
Panel 15 Round 2 | 6,811 |
January-June 2011 | 23,693 |
Panel 14 Round 5 | 6,845 |
Panel 15 Round 3 | 6,431 |
Panel 16 Round 1 | 10,417 |
July-December 2011 | 14,802 |
Panel 15 Round 4 | 6,254 |
Panel 16 Round 2 | 8,548 |
January-June 2012 | 24,247 |
Panel 15 Round 5 | 6,156 |
Panel 16 Round 3 | 8,160 |
Panel 17 Round 1 | 9,931 |
July-December 2012 | 16,161 |
Panel 16 Round 4 | 8,048 |
Panel 17 Round 2 | 8,113 |
January-June 2013 | 25,788 |
Panel 16 Round 5 | 7,969 |
Panel 17 Round 3 | 7,869 |
Panel 18 Round 1 | 9,950 |
July-December 2013 | 15,347 |
Panel 17 Round 4 | 7,656 |
Panel 18 Round 2 | 7,691 |
January-June 2014 | 24,857 |
Panel 17 Round 5 | 7,485 |
Panel 18 Round 3 | 7,402 |
Panel 19 Round 1 | 9,970 |
July-December 2014 | 14,665 |
Panel 18 Round 4 | 7,203 |
Panel 19 Round 2 | 7,462 |
January-June 2015 | 25,185 |
Panel 18 Round 5 | 7,163 |
Panel 19 Round 3 | 7,168 |
Panel 20 Round 1 | 10,854 |
July-December 2015 | 15,247 |
Panel 19 Round 4 | 6,946 |
Panel 20 Round 2 | 8,301 |
January-June 2016 | 24,694 |
Panel 19 Round 5 | 6,856 |
Panel 20 Round 3 | 7,987 |
Panel 21 Round 1 | 9,851 |
July-December 2016 | 15,390 |
Panel 20 Round 4 | 7,729 |
Panel 21 Round 2 | 7,661 |
January-June 2017 | 24,773 |
Panel 20 Round 5 | 7,611 |
Panel 21 Round 3 | 7,327 |
Panel 22 Round 1 | 9,835 |
July-December 2017 | 14,396 |
Panel 21 Round 4 | 7,025 |
Panel 22 Round 2 | 7,371 |
January-June 2018 | 23,768 |
Panel 21 Round 5 | 6,899 |
Panel 22 Round 3 | 7,023 |
Panel 23 Round 1 | 9,846 |
July-December 2018 | 14,123 |
Panel 22 Round 4 | 6,789 |
Panel 23 Round 2 | 7,334 |
January-June 2019 | 20,723 |
Panel 22 Round 5 | 6,624 |
Panel 23 Round 3 | 6,773 |
Panel 24 Round 1 | 7,326 |
July-December 2019 | 13,403 |
Panel 23 Round 4 | 6,569 |
Panel 24 Round 2 | 6,834 |
January-June 2020 | 19,213 |
Panel 23 Round 5 | 6,413 |
Panel 24 Round 3 | 6,382 |
Panel 25 Round 1 | 6,418 |
July-December 2020 | 15,633 |
Panel 23 Round 6 | 5,264 |
Panel 24 Round 4 | 5,574 |
Panel 25 Round 2 | 4,795 |
Panel/round | Original sample | Split cases (movers) | Student cases | Out-of-scope cases | Net sample | Completes | Average interviewer hours/ complete | Response rate (%) | |
---|---|---|---|---|---|---|---|---|---|
Panel 1 | Round 1 | 10,799 | 675 | 125 | 165 | 11,434 | 9,496 | 10.4 | 83.1 |
Round 2 | 9,485 | 310 | 74 | 101 | 9,768 | 9,239 | 8.7 | 94.6 | |
Round 3 | 9,228 | 250 | 28 | 78 | 9,428 | 9,031 | 8.6 | 95.8 | |
Round 4 | 9,019 | 261 | 33 | 89 | 9,224 | 8,487 | 8.5 | 92.0 | |
Round 5 | 8,477 | 80 | 5 | 66 | 8,496 | 8,369 | 6.5 | 98.5 | |
Panel 2 | Round 1 | 6,461 | 431 | 71 | 151 | 6,812 | 5,660 | 12.9 | 83.1 |
Round 2 | 5,638 | 204 | 27 | 54 | 5,815 | 5,395 | 9.1 | 92.8 | |
Round 3 | 5,382 | 166 | 15 | 52 | 5,511 | 5,296 | 8.5 | 96.1 | |
Round 4 | 5,290 | 105 | 27 | 65 | 5,357 | 5,129 | 8.3 | 95.7 | |
Round 5 | 5,127 | 38 | 2 | 56 | 5,111 | 5,049 | 6.7 | 98.8 | |
Panel 3 | Round 1 | 5,410 | 349 | 44 | 200 | 5,603 | 4,599 | 12.7 | 82.1 |
Round 2 | 4,581 | 106 | 25 | 39 | 4,673 | 4,388 | 8.3 | 93.9 | |
Round 3 | 4,382 | 102 | 4 | 42 | 4,446 | 4,249 | 7.3 | 95.5 | |
Round 4 | 4,243 | 86 | 17 | 33 | 4,313 | 4,184 | 6.7 | 97.0 | |
Round 5 | 4,183 | 23 | 1 | 26 | 4,181 | 4,114 | 5.6 | 98.4 | |
Panel 4 | Round 1 | 7,103 | 371 | 64 | 134 | 7,404 | 5,948 | 10.9 | 80.3 |
Round 2 | 5,918 | 197 | 47 | 40 | 6,122 | 5,737 | 7.2 | 93.7 | |
Round 3 | 5,731 | 145 | 10 | 39 | 5,847 | 5,574 | 6.9 | 95.3 | |
Round 4 | 5,567 | 133 | 35 | 39 | 5,696 | 5,540 | 6.8 | 97.3 | |
Round 5 | 5,547 | 52 | 4 | 47 | 5,556 | 5,500 | 6.0 | 99.0 | |
Panel 5 | Round 1 | 5,533 | 258 | 62 | 103 | 5,750 | 4,670 | 11.1 | 81.2 |
Round 2 | 4,655 | 119 | 27 | 27 | 4,774 | 4,510 | 7.7 | 94.5 | |
Round 3 | 4,496 | 108 | 17 | 24 | 4,597 | 4,437 | 7.2 | 96.5 | |
Round 4 | 4,426 | 117 | 20 | 41 | 4,522 | 4,396 | 7.0 | 97.2 | |
Round 5 | 4,393 | 47 | 12 | 32 | 4,420 | 4,357 | 5.5 | 98.6 | |
Panel 6 | Round 1 | 11,026 | 595 | 135 | 200 | 11,556 | 9,382 | 10.8 | 81.2 |
Round 2 | 9,351 | 316 | 49 | 50 | 9,666 | 9,222 | 7.2 | 95.4 | |
Round 3 | 9,183 | 215 | 23 | 41 | 9,380 | 9,001 | 6.5 | 96.0 | |
Round 4 | 8,977 | 174 | 32 | 66 | 9,117 | 8,843 | 6.6 | 97.0 | |
Round 5 | 8,830 | 94 | 14 | 46 | 8,892 | 8,781 | 5.6 | 98.8 | |
Panel 7 | Round 1 | 8,339 | 417 | 76 | 122 | 8,710 | 7,008 | 10.0 | 80.5 |
Round 2 | 6,991 | 190 | 40 | 24 | 7,197 | 6,802 | 7.2 | 94.5 | |
Round 3 | 6,779 | 169 | 21 | 32 | 6,937 | 6,673 | 6.5 | 96.2 | |
Round 4 | 6,655 | 133 | 17 | 34 | 6,771 | 6,593 | 7.0 | 97.4 | |
Round 5 | 6,578 | 79 | 11 | 39 | 6,629 | 6,529 | 5.7 | 98.5 | |
Panel 8 | Round 1 | 8,706 | 441 | 73 | 175 | 9,045 | 7,177 | 10.0 | 79.3 |
Round 2 | 7,159 | 218 | 52 | 36 | 7,393 | 7,049 | 7.2 | 95.4 | |
Round 3 | 7,035 | 150 | 13 | 33 | 7,165 | 6,892 | 6.5 | 96.2 | |
Round 4 | 6,878 | 149 | 27 | 53 | 7,001 | 6,799 | 7.3 | 97.1 | |
Round 5 | 6,795 | 71 | 8 | 41 | 6,833 | 6,726 | 6.0 | 98.4 | |
Panel 9 | Round 1 | 8,939 | 417 | 73 | 179 | 9,250 | 7,205 | 10.5 | 77.9 |
Round 2 | 7,190 | 237 | 40 | 40 | 7,427 | 7,027 | 7.7 | 94.6 | |
Round 3 | 7,005 | 189 | 24 | 31 | 7,187 | 6,861 | 7.1 | 95.5 | |
Round 4 | 6,843 | 142 | 23 | 44 | 6,964 | 6,716 | 7.4 | 96.5 | |
Round 5 | 6,703 | 60 | 8 | 43 | 6,728 | 6,627 | 6.1 | 98.5 | |
Panel 10 | Round 1 | 8,748 | 430 | 77 | 169 | 9,086 | 7,175 | 11.0 | 79.0 |
Round 2 | 7,148 | 219 | 36 | 22 | 7,381 | 6,940 | 7.8 | 94.0 | |
Round 3 | 6,921 | 156 | 10 | 31 | 7,056 | 6,727 | 6.8 | 95.3 | |
Round 4 | 6,708 | 155 | 13 | 34 | 6,842 | 6,590 | 7.3 | 96.3 | |
Round 5 | 6,596 | 55 | 9 | 38 | 6,622 | 6,461 | 6.2 | 97.6 | |
Panel 11 | Round 1 | 9,654 | 399 | 81 | 162 | 9,972 | 7,585 | 11.5 | 76.1 |
Round 2 | 7,572 | 244 | 42 | 24 | 7,834 | 7,276 | 7.8 | 92.9 | |
Round 3 | 7,263 | 170 | 15 | 25 | 7,423 | 7,007 | 6.9 | 94.4 | |
Round 4 | 7,005 | 139 | 14 | 36 | 7,122 | 6,898 | 7.2 | 96.9 | |
Round 5 | 6,895 | 51 | 7 | 44 | 6,905 | 6,781 | 5.5 | 98.2 | |
Panel 12 | Round 1 | 7,467 | 331 | 86 | 172 | 7,712 | 5,901 | 14.2 | 76.5 |
Round 2 | 5,901 | 157 | 27 | 27 | 6,058 | 5,584 | 9.1 | 92.2 | |
Round 3 | 5,580 | 105 | 13 | 12 | 5,686 | 5,383 | 8.1 | 94.7 | |
Round 4 | 5,376 | 102 | 12 | 16 | 5,474 | 5,267 | 8.8 | 96.2 | |
Round 5 | 5,261 | 50 | 8 | 21 | 5,298 | 5,182 | 6.4 | 97.8 | |
Panel 13 | Round 1 | 9,939 | 502 | 97 | 213 | 10,325 | 8,017 | 12.2 | 77.6 |
Round 2 | 8,008 | 220 | 47 | 23 | 8,252 | 7,809 | 9.0 | 94.6 | |
Round 3 | 7,802 | 204 | 14 | 38 | 7,982 | 7,684 | 7.2 | 96.2 | |
Round 4 | 7,670 | 162 | 17 | 40 | 7,809 | 7,576 | 7.5 | 97.0 | |
Round 5 | 7,576 | 70 | 15 | 38 | 7,623 | 7,461 | 6.1 | 97.9 | |
Panel 14 | Round 1 | 9,899 | 394 | 74 | 140 | 10,227 | 7,650 | 12.3 | 74.8 |
Round 2 | 7,669 | 212 | 29 | 27 | 7,883 | 7,239 | 8.3 | 91.8 | |
Round 3 | 7,226 | 144 | 23 | 34 | 7,359 | 6,980 | 7.3 | 94.9 | |
Round 4 | 6,974 | 112 | 23 | 30 | 7,079 | 6,853 | 7.7 | 96.8 | |
Round 5 | 6,845 | 55 | 9 | 30 | 6,879 | 6,761 | 6.2 | 98.3 | |
Panel 15 | Round 1 | 8,968 | 374 | 73 | 157 | 9,258 | 6,802 | 13.2 | 73.5 |
Round 2 | 6,811 | 171 | 19 | 21 | 6,980 | 6,435 | 8.9 | 92.2 | |
Round 3 | 6,431 | 134 | 23 | 22 | 6,566 | 6,261 | 7.2 | 95.4 | |
Round 4 | 6,254 | 116 | 15 | 26 | 6,359 | 6,165 | 7.8 | 97.0 | |
Round 5 | 6,156 | 50 | 5 | 19 | 6,192 | 6,078 | 6.0 | 98.2 | |
Panel 16 | Round 1 | 10,417 | 504 | 98 | 555 | 10,940 | 8,553 | 11.4 | 78.2 |
Round 2 | 8,353 | 248 | 40 | 32 | 8,821 | 8,351 | 7.6 | 94.7 | |
Round 3 | 8,160 | 223 | 19 | 27 | 8,375 | 8,236 | 6.4 | 96.1 | |
Round 4 | 8,048 | 151 | 16 | 13 | 8,390 | 8,162 | 6.6 | 97.3 | |
Round 5 | 7,969 | 66 | 13 | 25 | 8,198 | 7,998 | 5.5 | 97.6 | |
Panel 17 | Round 1 | 9,931 | 490 | 92 | 127 | 10,386 | 8,121 | 11.7 | 78.2 |
Round 2 | 8,113 | 230 | 35 | 19 | 8,359 | 7,874 | 7.9 | 94.2 | |
Round 3 | 7,869 | 180 | 15 | 15 | 8,049 | 7,663 | 6.3 | 95.2 | |
Round 4 | 7,656 | 199 | 19 | 30 | 7,844 | 7,494 | 7.4 | 95.5 | |
Round 5 | 7,485 | 87 | 10 | 23 | 7,559 | 7,445 | 6.1 | 98.5 | |
Panel 18 | Round 1 | 9,950 | 435 | 83 | 111 | 10,357 | 7,683 | 12.3 | 74.2 |
Round 2 | 7,691 | 264 | 32 | 16 | 7,971 | 7,402 | 9.2 | 92.9 | |
Round 3 | 7,402 | 235 | 21 | 22 | 7,635 | 7,213 | 7.6 | 94.5 | |
Round 4 | 7,203 | 189 | 14 | 22 | 7,384 | 7,172 | 7.5 | 97.1 | |
Round 5 | 7,163 | 94 | 12 | 15 | 7,254 | 7,138 | 6.2 | 98.4 | |
Panel 19 | Round 1 | 9,970 | 492 | 70 | 115 | 10,417 | 7,475 | 13.5 | 71.8 |
Round 2 | 7,460 | 222 | 23 | 24 | 7,681 | 7,188 | 8.4 | 93.6 | |
Round 3 | 7,168 | 187 | 12 | 17 | 7,350 | 6,962 | 7.0 | 94.7 | |
Round 4 | 6,946 | 146 | 20 | 23 | 7,089 | 6,858 | 7.4 | 96.7 | |
Round 5 | 6,856 | 75 | 7 | 24 | 6,914 | 6,794 | 5.9 | 98.3 | |
Panel 20 | Round 1 | 10,854 | 496 | 85 | 117 | 11,318 | 8,318 | 12.5 | 73.5 |
Round 2 | 8,301 | 243 | 39 | 22 | 8,561 | 7,998 | 8.3 | 93.4 | |
Round 3 | 7,987 | 173 | 17 | 26 | 8,151 | 7,753 | 6.8 | 95.1 | |
Round 4 | 7,729 | 161 | 19 | 31 | 7,878 | 7,622 | 7.2 | 96.8 | |
Round 5 | 7,611 | 99 | 13 | 23 | 7,700 | 7,421 | 6.0 | 96.4 | |
Panel 21 | Round 1 | 9,851 | 462 | 92 | 89 | 10,316 | 7,674 | 12.6 | 74.4 |
Round 2 | 7,661 | 207 | 32 | 17 | 7,883 | 7,327 | 8.5 | 93.0 | |
Round 3 | 7,327 | 166 | 14 | 19 | 7,488 | 7,043 | 7.2 | 94.1 | |
Round 4 | 7,025 | 119 | 14 | 20 | 7,138 | 6,907 | 7.0 | 96.8 | |
Round 5 | 6,914 | 42 | 8 | 34 | 6,930 | 6,778 | 5.9 | 97.8 |
Panel 22 | Round 1 | 9,835 | 352 | 68 | 86 | 10,169 | 7,381 | 12.8 | 72.7 |
Round 2 | 7,371 | 166 | 19 | 11 | 7,545 | 7,039 | 8.5 | 93.3 | |
Round 3 | 7,071 | 100 | 12 | 19 | 7,164 | 6,808 | 6.7 | 95.0 | |
Round 4 | 6,815 | 91 | 13 | 18 | 6,901 | 6,672 | 6.8 | 97.0 | |
Round 5 | 6,670 | 35 | 7 | 12 | 6,700 | 6,584 | 5.3 | 98.3 | |
Panel 23 | Round 1 | 9,960 | 193 | 46 | 110 | 10,089 | 7,351 | 12.5 | 72.9
Round 2 | 7,387 | 106 | 14 | 15 | 7,492 | 6,960 | 8.2 | 92.9 | |
Round 3 | 6,987 | 102 | 11 | 18 | 7,082 | 6,703 | 6.1 | 94.6 | |
Round 4 | 6,704 | 74 | 10 | 12 | 6,776 | 6,522 | 6.6 | 96.2 | |
Round 5 | 6,503 | 34 | 4 | 5 | 6,536 | 6,383 | 5.3 | 97.7 | |
Round 6 | 6,398 | 19 | 10 | 18 | 6,480 | 5,120 | 4.8 | 79.0 | |
Panel 24 | Round 1 | 9,976 | 153 | 43 | 82 | 10,090 | 7,186 | 11.8 | 71.2 |
Round 2 | 7,211 | 98 | 19 | 5 | 7,323 | 6,777 | 7.9 | 92.5 | |
Round 3 | 6,812 | 76 | 9 | 7 | 6,890 | 6,289 | 6.0 | 91.3 | |
Round 4 | 6,335 | 44 | 4 | 13 | 6,370 | 5,446 | 5.1 | 85.5 | |
Round 5 | |||||||||
Panel 25 | Round 1 | 10,008 | 184 | 38 | 78 | 10,152 | 6,265 | 10.8 | 61.7 |
Round 2 | 5,907 | 49 | 14 | 12 | 5,958 | 4,677 | 5.5 | 78.5 | |
Round 3 | |||||||||
Round 4 | |||||||||
Round 5 |
* Figures in the table are weighted to reflect results of the interim nonresponse subsampling procedure implemented in the first round of Panel 16.
Round 1 | Round 2 | Round 3 | Round 4 | Round 5 | Round 6 | |
---|---|---|---|---|---|---|
2010 | ||||||
Panel 15 | 73.5 | 92.2 | ||||
Panel 14 | 94.9 | 96.8 | ||||
Panel 13 | 97.9 | |||||
2011 | ||||||
Panel 16 | 78.2 | 94.8 | ||||
Panel 15 | 95.4 | 97.0 | ||||
Panel 14 | 98.3 | |||||
2012 | ||||||
Panel 17 | 78.2 | 94.2 | ||||
Panel 16 | 96.1 | 97.3 | ||||
Panel 15 | 98.2 | |||||
2013 | ||||||
Panel 18 | 74.2 | 92.9 | ||||
Panel 17 | 95.2 | 95.5 | ||||
Panel 16 | 97.6 | |||||
2014 | ||||||
Panel 19 | 71.8 | 93.6 | ||||
Panel 18 | 94.5 | 97.1 | ||||
Panel 17 | 98.5 | |||||
2015 | ||||||
Panel 20 | 73.5 | 93.4 | ||||
Panel 19 | 94.7 | 96.7 | ||||
Panel 18 | 98.4 | |||||
2016 | ||||||
Panel 21 | 74.4 | 93.0 | ||||
Panel 20 | 95.1 | 96.8 | ||||
Panel 19 | 98.3 | |||||
2017 | ||||||
Panel 22 | 72.6 | 93.3 | ||||
Panel 21 | 94.1 | 96.8 | ||||
Panel 20 | 96.4 | |||||
2018 | ||||||
Panel 23 | 72.9 | 92.9 | | | |
Panel 22 | 95.0 | 96.7 | | | |
Panel 21 | 97.8 | | | | |
2019 | ||||||
Panel 24 | 71.2 | 92.5 | ||||
Panel 23 | 94.6 | 96.2 | ||||
Panel 22 | 98.3 | |||||
2020 | ||||||
Panel 25 | 61.7 | 78.5 | ||||
Panel 24 | 91.3 | 85.5 | ||||
Panel 23 | 97.7 | 79.0 |
 | 2012 P17R1 | 2013 P18R1 | 2014 P19R1 | 2015 P20R1 | 2016 P21R1 | 2017 P22R1 | 2018 P23R1 | 2019 P24R1 | 2020 P25R1 |
---|---|---|---|---|---|---|---|---|---|
Total Sample | 10,513 | 10,468 | 10,532 | 11,435 | 10,405 | 10,255 | 10,199 | 10,172 | 10,230 |
Out of scope (%) | 1.2 | 1.1 | 1.1 | 1.0 | 0.9 | 0.8 | 1.1 | 0.8 | 0.8 |
Complete (%) | 78.2 | 74.2 | 71.8 | 73.5 | 74.4 | 72.6 | 72.1 | 70.6 | 61.2 |
Nonresponse (%) | 21.8 | 25.8 | 28.2 | 26.5 | 25.6 | 27.4 | 26.9 | 28.6 | 38.0 |
Refusal (%) | 17.1 | 20.1 | 22.4 | 21.0 | 20.2 | 21.8 | 22.1 | 24.0 | 28.7 |
Not located (%) | 3.7 | 4.3 | 4.2 | 4.3 | 3.7 | 3.9 | 3.1 | 3.1 | 3.2 |
Other nonresponse (%) | 1.0 | 1.4 | 1.6 | 1.2 | 1.7 | 1.7 | 1.7 | 1.5 | 6.1 |
 | 2012 P17R1 | 2013 P18R1 | 2014 P19R1 | 2015 P20R1 | 2016 P21R1 | 2017 P22R1 | 2018 P23R1 | 2019 P24R1 | 2020 P25R1 |
---|---|---|---|---|---|---|---|---|---|
Original NHIS sample (N) | 9,931 | 9,951 | 9,970 | 10,854 | 9,851 | 9,835 | 9,839 | 9,864 | 9,866 |
Percent complete in NHIS | 80.9 | 78.1 | 81.9 | 80.6 | 77.6 | 81.0 | 80.4 | 84.2 | 89.3 |
Percent partial complete in NHIS | 19.1 | 21.9 | 18.1 | 19.4 | 22.4 | 19.0 | 19.6 | 15.8 | 10.7 |
MEPS Round 1 response rate | |||||||||
Percent complete for NHIS completes | 80.7 | 76.9 | 74.5 | 75.9 | 77.3 | 75.4 | 75.4 | 73.5 | 63.5 |
Percent complete for NHIS partial completes | 68.2 | 64.5 | 58.9 | 63.1 | 64.8 | 62.0 | 63.6 | 60.3 | 46.8 |
Note: Figures shown are based on original NHIS sample and exclude reporting units added to the sample as “splits” and “students.”
Panel | Net sample (N) | Ever refused (%) | Converted (%) | Final refusal rate (%) | Final response rate (%) |
---|---|---|---|---|---|
Panel 15 | 9,258 | 29.4 | 26.6 | 21.0 | 73.5 |
Panel 16 | 10,940 | 26.3 | 30.9 | 17.6 | 78.2 |
Panel 17 | 10,386 | 25.3 | 30.2 | 17.2 | 78.2 |
Panel 18 | 10,357 | 25.5 | 25.0 | 18.1 | 74.2 |
Panel 19 | 10,418 | 30.1 | 23.3 | 22.4 | 71.8 |
Panel 20 | 11,318 | 30.1 | 29.2 | 21.0 | 73.5 |
Panel 21 | 10,316 | 29.1 | 29.0 | 20.2 | 74.4 |
Panel 22 | 10,169 | 30.1 | 27.6 | 21.8 | 72.6 |
Panel 23 | 10,089 | 31.3 | 25.6 | 22.4 | 72.9 |
Panel 24 | 10,090 | 32.6 | 23.4 | 24.2 | 71.2 |
Panel 25 | 10,152 | 34.8 | 12.3 | 28.9 | 61.7 |
Panel | Total sample (N) | Ever traced (%) | Not located (%) |
---|---|---|---|
Panel 15 | 9,415 | 16.7 | 4.1 |
Panel 16 | 11,019 | 18.2 | 3.0 |
Panel 17 | 10,513 | 18.7 | 3.6 |
Panel 18 | 10,468 | 16.0 | 4.3 |
Panel 19 | 10,532 | 19.5 | 4.1 |
Panel 20 | 11,435 | 14.0 | 4.3 |
Panel 21 | 10,405 | 12.8 | 3.7 |
Panel 22 | 10,228 | 13.0 | 3.9 |
Panel 23 | 10,199 | 12.7 | 3.0 |
Panel 24 | 10,172 | 12.6 | 3.0 |
Panel 25 | 10,230 | 11.7 | 3.2 |
Round | Panel 15 | Panel 16 | Panel 17 | Panel 18 | Panel 19 | Panel 20 | Panel 21 | Panel 22 | Panel 23 | Panel 24 | Panel 25 |
---|---|---|---|---|---|---|---|---|---|---|---|
Round 1 | 74.7 | 74.0 | 67.8 | 78.0 | 85.5 | 76.4 | 75.5 | 79.9 | 78.1 | 79.5 | 89.0 |
Round 2 | 87.2 | 88.1 | 90.2 | 102.9 | 92.3 | 86.3 | 85.3 | 88.8 | 88.2 | 87.0 | 89.7 |
Round 3 | 86.4 | 87.2 | 94.3 | 103.1 | 94.5 | 89.7 | 93.4 | 93.0 | 92.6 | 98.5 | |
Round 4 | 80.2 | 85.9 | 99.6 | 89.0 | 84.6 | 80.5 | 82.7 | 84.3 | 86.8 | 86.2 | |
Round 5 | 77.6 | 85.4 | 92.2 | 87.4 | 84.1 | 85.3 | 77.4 | 78.8 | 78.7 | ||
Round 6 | 88.4 |
Contact type | Panel 20, Round 1 | Panel 21, Round 1 | Panel 22, Round 1 | Panel 23, Round 1 | Panel 24, Round 1 | Panel 25, Round 1 | ||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
All RUs | Complete | Partial | All RUs | Complete | Partial | All RUs | Complete | Partial | All RUs | Complete | Partial | All RUs | Complete | Partial | All RUs | Complete | Partial | |
N | 10,854 | 8,751 | 2,103 | 9,851 | 7,645 | 2,206 | 9,835 | 7,963 | 1,872 | 9,839 | 7,913 | 1,926 | 9,864 | 8,306 | 1,558 | 9,866 | 8,814 | 1,052
% of all RUs | 100 | 81.0 | 19.0 | 100 | 77.6 | 22.4 | 100 | 81.0 | 19.0 | 100 | 80.4 | 19.6 | 100 | 84.2 | 15.8 | 100 | 89.3 | 10.7
In-person | 7.2 | 6.9 | 8.5 | 7.0 | 6.9 | 8.3 | 6.3 | 6.1 | 7.3 | 6.2 | 6.0 | 7.2 | 5.5 | 5.4 | 6.3 | 2.6 | 2.5 | 2.6 |
Telephone | 2.1 | 2.0 | 2.5 | 2.0 | 1.9 | 2.4 | 1.5 | 1.5 | 1.7 | 1.5 | 1.4 | 1.7 | 1.3 | 1.2 | 1.6 | 9.7 | 9.5 | 11.6 |
Total | 9.6 | 9.2 | 11.4 | 9.3 | 8.9 | 11.0 | 8.4 | 8.1 | 9.6 | 8.2 | 7.9 | 9.5 | 7.3 | 7.1 | 8.5 | 14.4 | 14.1 | 17.0 |
Panel/round | Authorization forms requested | Authorization forms signed | Signing rate (%) | |
---|---|---|---|---|
Panel 1 | Round 1 | 3,562 | 2,624 | 73.7 |
Round 2 | 19,874 | 14,145 | 71.2 | |
Round 3 | 17,722 | 12,062 | 68.1 | |
Round 4 | 17,133 | 10,542 | 61.5 | |
Round 5 | 12,544 | 6,763 | 53.9 | |
Panel 2 | Round 1 | 2,735 | 1,788 | 65.4 |
Round 2 | 13,461 | 9,433 | 70.1 | |
Round 3 | 11,901 | 7,537 | 63.3 | |
Round 4 | 11,164 | 6,485 | 58.1 | |
Round 5 | 8,104 | 4,244 | 52.4 | |
Panel 3 | Round 1 | 2,078 | 1,349 | 64.9 |
Round 2 | 10,335 | 6,463 | 62.5 | |
Round 3 | 8,716 | 4,797 | 55.0 | |
Round 4 | 8,761 | 4,246 | 48.5 | |
Round 5 | 6,913 | 2,911 | 42.1 | |
Panel 4 | Round 1 | 2,400 | 1,607 | 67.0 |
Round 2 | 12,711 | 8,434 | 66.4 | |
Round 3 | 11,078 | 6,642 | 60.0 | |
Round 4 | 11,047 | 6,888 | 62.4 | |
Round 5 | 8,684 | 5,096 | 58.7 | |
Panel 5 | Round 1 | 1,243 | 834 | 67.1 |
Round 2 | 14,008 | 9,618 | 68.7 | |
Round 3 | 12,869 | 8,301 | 64.5 | |
Round 4 | 13,464 | 9,170 | 68.1 | |
Round 5 | 10,888 | 7,025 | 64.5 | |
Panel 6 | Round 1 | 2,783 | 2,012 | 72.3 |
Round 2 | 29,861 | 22,872 | 76.6 | |
Round 3 | 26,068 | 18,219 | 69.9 | |
Round 4 | 27,146 | 20,082 | 74.0 | |
Round 5 | 21,022 | 14,581 | 69.4 | |
Panel 7 | Round 1 | 2,298 | 1,723 | 75.0 |
Round 2 | 22,302 | 17,557 | 78.7 | |
Round 3 | 19,312 | 13,896 | 72.0 | |
Round 4 | 16,934 | 13,725 | 81.1 | |
Round 5 | 14,577 | 11,099 | 76.1 | |
Panel 8 | Round 1 | 2,287 | 1,773 | 77.5 |
Round 2 | 22,533 | 17,802 | 79.0 | |
Round 3 | 19,530 | 14,064 | 72.0 | |
Round 4 | 19,718 | 14,599 | 74.0 | |
Round 5 | 15,856 | 11,106 | 70.0 | |
Panel 9 | Round 1 | 2,253 | 1,681 | 74.6 |
Round 2 | 22,668 | 17,522 | 77.3 | |
Round 3 | 19,601 | 13,672 | 69.8 | |
Round 4 | 20,147 | 14,527 | 72.1 | |
Round 5 | 15,963 | 10,720 | 67.2 | |
Panel 10 | Round 1 | 2,068 | 1,443 | 69.8 |
Round 2 | 22,582 | 17,090 | 75.7 | |
Round 3 | 18,967 | 13,396 | 70.6 | |
Round 4 | 19,087 | 13,296 | 69.7 | |
Round 5 | 15,787 | 10,476 | 66.4 | |
Panel 11 | Round 1 | 2,154 | 1,498 | 69.5 |
Round 2 | 23,957 | 17,742 | 74.1 | |
Round 3 | 20,756 | 13,400 | 64.6 | |
Round 4 | 21,260 | 14,808 | 69.7 | |
Round 5 | 16,793 | 11,482 | 68.4 | |
Panel 12 | Round 1 | 1,695 | 1,066 | 62.9 |
Round 2 | 17,787 | 12,524 | 70.4 | |
Round 3 | 15,291 | 10,006 | 65.4 | |
Round 4 | 15,692 | 10,717 | 68.3 | |
Round 5 | 12,780 | 8,367 | 65.5 | |
Panel 13 | Round 1 | 2,217 | 1,603 | 72.3 |
Round 2 | 24,357 | 18,566 | 76.2 | |
Round 3 | 21,058 | 14,826 | 70.4 | |
Round 4 | 21,673 | 15,632 | 72.1 | |
Round 5 | 17,158 | 11,779 | 68.7 | |
Panel 14 | Round 1 | 2,128 | 1,498 | 70.4 |
Round 2 | 23,138 | 17,739 | 76.7 | |
Round 3 | 19,024 | 13,673 | 71.9 | |
Round 4 | 18,532 | 12,824 | 69.2 | |
Round 5 | 15,444 | 10,201 | 66.1 | |
Panel 15 | Round 1 | 1,680 | 1,136 | 67.6 |
Round 2 | 18,506 | 13,628 | 73.6 | |
Round 3 | 16,686 | 11,652 | 69.8 | |
Round 4 | 16,260 | 11,139 | 68.5 | |
Round 5 | 13,443 | 8,420 | 62.6 | |
Panel 16 | Round 1 | 1,811 | 1,223 | 67.5 |
Round 2 | 23,718 | 17,566 | 74.1 | |
Round 3 | 21,780 | 14,828 | 68.1 | |
Round 4 | 21,537 | 16,329 | 75.8 | |
Round 5 | 16,688 | 12,028 | 72.1 | |
Panel 17 | Round 1 | 1,655 | 1,117 | 67.5 |
Round 2 | 21,749 | 17,694 | 81.4 | |
Round 3 | 19,292 | 15,125 | 78.4 | |
Round 4 | 20,086 | 15,691 | 78.1 | |
Round 5 | 15,064 | 11,873 | 78.8 | |
Panel 18 | Round 1 | 1,677 | 1,266 | 75.5 |
Round 2 | 22,714 | 18,043 | 79.4 | |
Round 3 | 20,728 | 15,827 | 76.4 | |
Round 4 | 17,092 | 13,704 | 80.2 | |
Round 5 | 15,448 | 11,796 | 76.4 | |
Panel 19 | Round 1 | 2,189 | 1,480 | 67.6 |
Round 2 | 22,671 | 17,190 | 75.8 | |
Round 3 | 20,582 | 14,534 | 70.6 | |
Round 4 | 17,102 | 13,254 | 77.5 | |
Round 5 | 15,330 | 11,425 | 74.5 | |
Panel 20 | Round 1 | 2,354 | 1,603 | 68.1 |
Round 2 | 25,334 | 18,479 | 72.9 | |
Round 3 | 22,851 | 15,862 | 69.4 | |
Round 4 | 18,234 | 14,026 | 76.9 | |
Round 5 | 16,274 | 12,100 | 74.4 | |
Panel 21 | Round 1 | 2,037 | 1,396 | 68.5 |
Round 2 | 22,984 | 17,295 | 75.2 | |
Round 3 | 20,802 | 14,898 | 71.6 | |
Round 4 | 16,487 | 13,110 | 79.5 | |
Panel 22 | Round 1 | 2,274 | 1,573 | 69.2 |
Round 2 | 22,913 | 17,530 | 76.5 | |
Round 3 | 26,436 | 19,496 | 73.7 | |
Round 4 | 23,249 | 18,097 | 77.8 | |
Round 5 | 17,171 | 12,168 | 70.9 | |
Panel 23 | Round 1 | 1,982 | 1,533 | 77.3 |
Round 2 | 29,576 | 21,850 | 73.9 | |
Round 3 | 23,365 | 14,475 | 62.4 | |
Round 4 | 19,220 | 13,483 | 70.2 | |
Round 5 | ||||
Panel 24 | Round 1 | 2,285 | 1,306 | 57.2 |
Round 2 | 24,755 | 15,865 | 64.1 | |
Round 3 | 22,657 | 11,522 | 50.9 | |
Round 4 | 14,612 | 7,716 | 52.8 | |
Panel 25 | Round 1 | 3,110 | 1,242 | 39.9 |
Round 2 | 15,259 | 7,292 | 47.8 |
Panel/round | Permission forms requested | Permission forms signed | Signing rate (%) | |
---|---|---|---|---|
Panel 1 | Round 3 | 19,913 | 14,468 | 72.7 |
Round 5 | 8,685 | 6,002 | 69.1 | |
Panel 2 | Round 3 | 12,241 | 8,694 | 71.0 |
Round 5 | 8,640 | 6,297 | 72.9 | |
Panel 3 | Round 3 | 9,016 | 5,929 | 65.8 |
Round 5 | 7,569 | 5,200 | 68.7 | |
Panel 4 | Round 3 | 11,856 | 8,280 | 69.8 |
Round 5 | 10,688 | 8,318 | 77.8 | |
Panel 5 | Round 3 | 9,248 | 6,852 | 74.1 |
Round 5 | 8,955 | 7,174 | 80.1 | |
Panel 6 | Round 3 | 19,305 | 15,313 | 79.3 |
Round 5 | 17,981 | 14,864 | 82.7 | |
Panel 7 | Round 3 | 14,456 | 11,611 | 80.3 |
Round 5 | 13,428 | 11,210 | 83.5 | |
Panel 8 | Round 3 | 14,391 | 11,533 | 80.1 |
Round 5 | 13,422 | 11,049 | 82.3 | |
Panel 9 | Round 3 | 14,334 | 11,189 | 78.1 |
Round 5 | 13,416 | 10,893 | 81.2 | |
Panel 10 | Round 3 | 13,928 | 10,706 | 76.9 |
Round 5 | 12,869 | 10,260 | 79.7 | |
Panel 11 | Round 3 | 14,937 | 11,328 | 75.8 |
Round 5 | 13,778 | 11,332 | 82.3 | |
Panel 12 | Round 3 | 10,840 | 8,242 | 76.0 |
Round 5 | 9,930 | 8,015 | 80.7 | |
Panel 13 | Round 3 | 15,379 | 12,165 | 79.1 |
Round 4 | 10,782 | 7,795 | 72.3 | |
Round 5 | 9,451 | 6,635 | 70.2 | |
Panel 14 | Round 2 | 11,841 | 9,151 | 77.3 |
Round 3 | 9,686 | 7,091 | 73.2 | |
Round 4 | 9,298 | 6,623 | 71.2 | |
Round 5 | 8,415 | 6,011 | 71.4 | |
Panel 15 | Round 2 | 9,698 | 7,092 | 73.1 |
Round 3 | 8,684 | 6,189 | 71.3 | |
Round 4 | 8,163 | 5,756 | 70.5 | |
Round 5 | 7,302 | 4,485 | 66.9 | |
Panel 16 | Round 2 | 12,093 | 8,892 | 73.5 |
Round 3 | 10,959 | 7,591 | 69.3 | |
Round 4 | 10,432 | 8,194 | 78.6 | |
Round 5 | 8,990 | 6,928 | 77.1 | |
Panel 17 | Round 2 | 14,181 | 12,567 | 88.6 |
Round 3 | 9,715 | 7,580 | 78.0 | |
Round 4 | 9,759 | 7,730 | 79.2 | |
Round 5 | 8,245 | 6,604 | 80.1 | |
Panel 18 | Round 2 | 10,977 | 8,755 | 79.8 |
Round 3 | 9,757 | 7,573 | 77.6 | |
Round 4 | 8,526 | 6,858 | 80.4 | |
Round 5 | 7,918 | 6,173 | 78.0 | |
Panel 19 | Round 2 | 10,749 | 8,261 | 76.9 |
Round 3 | 9,618 | 6,902 | 71.8 | |
Round 4 | 8,557 | 6,579 | 76.9 | |
Round 5 | 7,767 | 5,905 | 76.0 | |
Panel 20 | Round 2 | 12,074 | 8,796 | 72.9 |
Round 3 | 10,577 | 7,432 | 70.3 | |
Round 4 | 9,099 | 6,945 | 76.3 | |
Round 5 | 8,312 | 6,339 | 76.3 | |
Panel 21 | Round 2 | 10,783 | 7,985 | 74.1 |
Round 3 | 9,540 | 6,847 | 71.8 | |
Round 4 | 8,172 | 6,387 | 78.2 | |
Round 5 | 6,684 | 5,336 | 79.8 | |
Panel 22 | Round 2 | 10,510 | 7,919 | 75.4 |
Round 3 | 8,053 | 5,953 | 73.9 | |
Round 4 | 7,284 | 5,670 | 77.8 | |
Round 5 | 5,726 | 71.1 | ||
Panel 23 | Round 2 | 8,834 | 6,514 | 73.8 |
Round 3 | 9,614 | 6,205 | 64.5 | |
Round 4 | 8,486 | 5,900 | 69.5 | |
Round 5 | ||||
Round 6 | ||||
Panel 24 | Round 2 | 10,265 | 6,676 | 65.0 |
Round 3 | 9,096 | 4,831 | 53.1 | |
Round 4 | 7,100 | 3,636 | 51.2 | |
Panel 25 | Round 2 | 6,783 | 3,180 | 46.9 |
Panel/round | SAQs requested | SAQs completed | SAQs refused | Other nonresponse | Response rate (%) | |
---|---|---|---|---|---|---|
Panel 1 | Round 2 | 16,577 | 9,910 | - | - | 59.8 |
Round 3 | 6,032 | 1,469 | 840 | 3,723 | 24.3 | |
Combined, 1996 | 16,577 | 11,379 | - | - | 68.6 | |
Panel 4* | Round 4 | 13,936 | 12,265 | 288 | 1,367 | 87.9 |
Round 5 | 1,683 | 947 | 314 | 422 | 56.3 | |
Combined, 2000 | 13,936 | 13,212 | - | - | 94.8 | |
Panel 5* | Round 2 | 11,239 | 9,833 | 191 | 1,213 | 86.9 |
Round 3 | 1,314 | 717 | 180 | 417 | 54.6 | |
Combined, 2000 | 11,239 | 10,550 | - | - | 93.9 | |
Round 4 | 7,812 | 6,790 | 198 | 824 | 86.9 | |
Round 5 | 1,022 | 483 | 182 | 357 | 47.3 | |
Combined, 2001 | 7,812 | 7,273 | 380 | 1,181 | 93.1 | |
Panel 6 | Round 2 | 16,577 | 14,233 | 412 | 1,932 | 85.9 |
Round 3 | 2,143 | 1,213 | 230 | 700 | 56.6 | |
Combined, 2001 | 16,577 | 15,446 | 642 | 2,632 | 93.2 | |
Round 4 | 15,687 | 13,898 | 362 | 1,427 | 88.6 | |
Round 5 | 1,852 | 967 | 377 | 508 | 52.2 | |
Combined, 2002 | 15,687 | 14,865 | 739 | 1,935 | 94.8 | |
Panel 7 | Round 2 | 12,093 | 10,478 | 196 | 1,419 | 86.6 |
Round 3 | 1,559 | 894 | 206 | 459 | 57.3 | |
Combined, 2002 | 12,093 | 11,372 | 402 | 1,878 | 94.0 | |
Round 4 | 11,703 | 10,125 | 285 | 1,292 | 86.5 | |
Round 5 | 1,493 | 786 | 273 | 434 | 52.7 | |
Combined, 2003 | 11,703 | 10,911 | 558 | 1,726 | 93.2 | |
Panel 8 | Round 2 | 12,533 | 10,765 | 203 | 1,565 | 85.9 |
Round 3 | 1,568 | 846 | 234 | 488 | 54.0 | |
Combined, 2003 | 12,533 | 11,611 | 437 | 2,053 | 92.6 | |
Round 4 | 11,996 | 10,534 | 357 | 1,105 | 87.8 | |
Round 5 | 1,400 | 675 | 344 | 381 | 48.2 | |
Combined, 2004 | 11,996 | 11,209 | 701 | 1,486 | 93.4 | |
Panel 9 | Round 2 | 12,541 | 10,631 | 381 | 1,529 | 84.8 |
Round 3 | 1,670 | 886 | 287 | 496 | 53.1 | |
Combined, 2004 | 12,541 | 11,517 | 668 | 2,025 | 91.9 | |
Round 4 | 11,913 | 10,357 | 379 | 1,177 | 86.9 | |
Round 5 | 1,478 | 751 | 324 | 403 | 50.8 | |
Combined, 2005 | 11,913 | 11,108 | 703 | 1,580 | 93.2 | |
Panel 10 | Round 2 | 12,360 | 10,503 | 391 | 1,466 | 85.0 |
Round 3 | 1,626 | 787 | 280 | 559 | 48.4 | |
Combined, 2005 | 12,360 | 11,290 | 671 | 2,025 | 91.3 | |
Round 4 | 11,726 | 10,081 | 415 | 1,230 | 86.0 | |
Round 5 | 1,516 | 696 | 417 | 403 | 45.9 | |
Combined, 2006 | 11,726 | 10,777 | 832 | 1,633 | 91.9 | |
Panel 11 | Round 2 | 13,146 | 10,924 | 452 | 1,770 | 83.1 |
Round 3 | 1,908 | 948 | 349 | 611 | 49.7 | |
Combined, 2006 | 13,146 | 11,872 | 801 | 2,381 | 90.3 | |
Round 4 | 12,479 | 10,771 | 622 | 1,086 | 86.3 | |
Round 5 | 1,621 | 790 | 539 | 292 | 48.7 | |
Combined, 2007 | 12,479 | 11,561 | 1,161 | 1,378 | 92.6 | |
Panel 12 | Round 2 | 10,061 | 8,419 | 502 | 1,140 | 83.7 |
Round 3 | 1,460 | 711 | 402 | 347 | 48.7 | |
Combined, 2007 | 10,061 | 9,130 | 904 | 1,487 | 90.7 | |
Round 4 | 9,550 | 8,303 | 577 | 670 | 86.9 | |
Round 5 | 1,145 | 541 | 415 | 189 | 47.3 | |
Combined, 2008 | 9,550 | 8,844 | 992 | 859 | 92.6 | |
Panel 13 | Round 2 | 14,410 | 12,541 | 707 | 1,162 | 87.0 |
Round 3 | 1,630 | 829 | 439 | 362 | 50.9 | |
Combined, 2008 | 14,410 | 13,370 | 1,146 | 1,524 | 92.8 | |
Round 4 | 13,822 | 12,311 | 559 | 952 | 89.1 | |
Round 5 | 1,364 | 635 | 476 | 253 | 46.6 | |
Combined, 2009 | 13,822 | 12,946 | 1,705 | 1,205 | 93.7 | |
Panel 14 | Round 2 | 13,335 | 11,528 | 616 | 1,191 | 86.5 |
Round 3 | 1,542 | 818 | 426 | 298 | 53.1 | |
Combined, 2009 | 13,335 | 12,346 | 1,042 | 1,489 | 92.6 | |
Round 4 | 12,527 | 11,041 | 644 | 839 | 88.1 | |
Round 5 | 1,403 | 645 | 497 | 261 | 46.0 | |
Combined, 2010 | 12,527 | 11,686 | 1,141 | 1,100 | 93.3 | |
Panel 15 | Round 2 | 11,857 | 10,121 | 637 | 1,096 | 85.4 |
Round 3 | 1,491 | 725 | 425 | 341 | 48.6 | |
Combined, 2010 | 11,857 | 10,846 | 1,062 | 1,437 | 91.5 | |
Round 4 | 11,311 | 9,804 | 572 | 935 | 86.7 | |
Round 5 | 1,418 | 678 | 461 | 279 | 47.8 | |
Combined, 2011 | 11,311 | 10,482 | 1,033 | 1,214 | 92.6 | |
Panel 16 | Round 2 | 15,026 | 12,926 | 707 | 1,393 | 86.0 |
Round 3 | 1,863 | 949 | 465 | 449 | 50.9 | |
Combined, 2011 | 15,026 | 13,875 | 1,172 | 728 | 92.3 | |
Round 4 | 13,620 | 12,415 | 582 | 623 | 91.2 | |
Round 5 | 1,112 | 516 | 442 | 154 | 46.4 | |
Combined, 2012 | 13,620 | 12,931 | 1,024 | 777 | 94.9 | |
Panel 17 | Round 2 | 14,181 | 12,567 | 677 | 937 | 88.6 |
Round 3 | 1,395 | 690 | 417 | 288 | 49.5 | |
Combined, 2012 | 14,181 | 13,257 | 1,094 | 1,225 | 93.5 | |
Round 4 | 13,086 | 11,566 | 602 | 918 | 88.4 | |
Round 5 | 1,429 | 655 | 504 | 270 | 45.8 | |
Combined, 2013 | 13,086 | 12,221 | 1,106 | 1,188 | 93.4 | |
Panel 18 | Round 2 | 13,158 | 10,805 | 785 | 1,568 | 82.1 |
Round 3 | 2,066 | 1,022 | 547 | 497 | 48.5 | |
Combined, 2013 | 13,158 | 11,827 | 1,332 | 2,065 | 89.9 | |
Round 4 | 12,243 | 10,050 | 916 | 1,277 | 82.1 | |
Round 5 | 2,063 | 936 | 721 | 406 | 45.4 | |
Combined, 2014 | 12,243 | 10,986 | 1,637 | 1,683 | 89.7 | |
Panel 19 | Round 2 | 12,664 | 10,047 | 1,014 | 1,603 | 79.3 |
Round 3 | 2,306 | 1,050 | 694 | 615 | 44.5 | |
Combined, 2014 | 12,664 | 11,097 | 1,708 | 2,218 | 87.6 | |
Round 4 | 11,782 | 9,542 | 1,047 | 1,175 | 81.0 | |
Round 5 | 2,131 | 894 | 822 | 414 | 42.0 | |
Combined, 2015 | 11,782 | 10,436 | 1,869 | 1,589 | 88.6 | |
Panel 20 | Round 2 | 14,077 | 10,885 | 1,223 | 1,966 | 77.3 |
Round 3 | 2,899 | 1,329 | 921 | 649 | 45.8 | |
Combined, 2015 | 14,077 | 12,214 | 2,144 | 2,615 | 86.8 | |
Round 4 | 13,068 | 10,572 | 1,127 | 1,371 | 80.9 | |
Round 5 | 2,262 | 1,001 | 891 | 370 | 44.3 | |
Combined, 2016 | 13,068 | 11,573 | 2,018 | 1,741 | 88.6 | |
Panel 21 | Round 2 | 13,143 | 10,212 | 1,170 | 1,761 | 77.7 |
Round 3 | 2,585 | 1,123 | 893 | 569 | 43.4 | |
Combined, 2016 | 13,143 | 11,335 | 2,063 | 2,330 | 86.2 | |
Round 4 | 12,021 | 9,966 | 1,149 | 906 | 82.9 | |
Round 5 | 2,078 | 834 | 884 | 360 | 40.1 | |
Combined, 2017 | 12,021 | 10,800 | 2,033 | 1,266 | 89.8 | |
Panel 22 | Round 2 | 12,304 | 9,929 | 1,086 | 1,289 | 80.7 |
Round 3 | 2,287 | 840 | 749 | 698 | 36.7 | |
Combined, 2017 | 12,304 | 10,769 | 1,835 | 1,987 | 87.5 | |
Round 4 | 11,333 | 8,341 | 1,159 | 1,833 | 73.6 | |
Round 5 | 2,090 | 811 | 896 | 383 | 38.8 | |
Combined, 2018 | 11,333 | 9,152 | 2,055 | 2,216 | 80.8 | |
Panel 23 | Round 2 | 12,349 | 8,711 | 1,364 | 1,289 | 70.5 |
Round 3 | 2,364 | 819 | 907 | 638 | 34.6 | |
Combined, 2018 | 12,364 | 9,530 | 2,271 | 1,927 | 77.1 | |
Round 4 | ||||||
Round 5 | ||||||
Combined, 2019 | ||||||
Round 6 | 8,537 | 4,732 | 682 | 3,123 | 55.4 | |
Panel 24 | Round 2 | 12,027 | 8,726 | 1,641 | 1,660 | 72.6 |
Round 3 | 2,810 | 860 | 832 | 1,118 | 30.6 | |
Combined, 2019 | 12,027 | 9,586 | 2,473 | 2,778 | 79.7 | |
Round 4 | 9,257 | 4,247 | 786 | 4,224 | 45.9 | |
Panel 25 | Round 2 | 8,109 | 3,555 | 529 | 4,025 | 43.8 |
* Totals represent combined collection of the SAQ and the parent-administered questionnaire (PAQ).
Panel/round | DCSs requested | DCSs completed | Response rate (%) | |
---|---|---|---|---|
Panel 4 | Round 5 | 696 | 631 | 90.7 |
Panel 5 | Round 3 | 550 | 508 | 92.4 |
Round 5 | 570 | 500 | 87.7 | |
Panel 6 | Round 3 | 1,166 | 1,000 | 85.8 |
Round 5 | 1,202 | 1,166 | 97.0 | |
Panel 7 | Round 3 | 870 | 848 | 97.5 |
Round 5 | 869 | 820 | 94.4 | |
Panel 8 | Round 3 | 971 | 885 | 91.1 |
Round 5 | 977 | 894 | 91.5 | |
Panel 9 | Round 3 | 1,003 | 909 | 90.6 |
Round 5 | 904 | 806 | 89.2 | |
Panel 10 | Round 3 | 1,060 | 939 | 88.6 |
Round 5 | 1,078 | 965 | 89.5 | |
Panel 11 | Round 3 | 1,188 | 1,030 | 86.7 |
Round 5 | 1,182 | 1,053 | 89.1 | |
Panel 12 | Round 3 | 917 | 825 | 90.0 |
Round 5 | 883 | 815 | 92.3 | |
Panel 13 | Round 3 | 1,278 | 1,182 | 92.5 |
Round 5 | 1,278 | 1,154 | 90.3 | |
Panel 14 | Round 3 | 1,174 | 1,048 | 89.3 |
Round 5 | 1,177 | 1,066 | 90.6 | |
Panel 15 | Round 3 | 1,117 | 1,000 | 89.5 |
Round 5 | 1,097 | 990 | 90.3 | |
Panel 16 | Round 3 | 1,425 | 1,283 | 90.0 |
Round 5 | 1,358 | 1,256 | 92.5 | |
Panel 17 | Round 3 | 1,315 | 1,177 | 89.5 |
Round 5 | 1,308 | 1,174 | 89.8 | |
Panel 18 | Round 3 | 1,362 | 1,182 | 86.8 |
Round 5 | 1,342 | 1,187 | 88.5 | |
Panel 19 | Round 3 | 1,272 | 1,124 | 88.4 |
Round 5 | 1,316 | 1,144 | 87.2 | |
Panel 20 | Round 3 | 1,412 | 1,190 | 84.5 |
Round 5 | 1,386 | 1,174 | 84.9 | |
Panel 21 | Round 3 | 1,422 | 1,170 | 82.5 |
Round 5 | 1,481 | 1,177 | 81.0 | |
Panel 22 | Round 3 | 1,453 | 1,074 | 73.9 |
Round 5 | 1,348 | 1,018 | 75.5 | |
Panel 23 | Round 3 | 1,464 | 1,101 | 75.2 |
Round 5 | 1,350 | 933 | 69.1 | |
Panel 24 | Round 3 | 1,350 | 843 | 62.4 |
* Tables represent combined DCS/proxy DCS collection.
Pharmacy | Total number | Total received | Percent received | Total complete | Completes as a percent of total |
---|---|---|---|---|---|
2019 – P22R5 all mail collection | |||||
Total RUs | 921 | 173 | 18.8% | 125 | 13.6% |
Total Pairs | 1,387 | 199 | 14.3% | 183 | 13.2% |
2018 – P21R5 all mail collection | |||||
Total RUs | 2,920 | 417 | 20.7% | 316 | 15.6% |
Total Pairs | 4,116 | 486 | 16.6% | 425 | 14.5% |
2017 – P20R5 all mail collection | |||||
Total RUs | 1,953 | 342 | 17.5% | 254 | 13.0% |
Total Pairs | 2,723 | 372 | 13.7% | 326 | 12.0% |
2016 – P19R5 all mail collection | |||||
Total RUs | 2,038 | 374 | 18.4% | 285 | 14.0% |
Total Pairs | 2,854 | 430 | 15.1% | 394 | 13.8% |
2015 – P18R5 all mail collection | |||||
Total RUs | 1,404 | 260 | 18.5% | 186 | 13.2% |
Total Pairs | 2,042 | 289 | 14.2% | 255 | 12.5% |
2014 – P17R5 all mail collection | |||||
Total RUs | 2,230 | 372 | 16.7% | 269 | 12.1% |
Total Pairs | 3,233 | 443 | 13.7% | 386 | 11.9% |
2013 – P16R5 all mail collection | |||||
Total RUs | 2,014 | 417 | 20.7% | 316 | 15.6% |
Total Pairs | 2,911 | 486 | 16.6% | 425 | 14.5% |
2012 – P15R5 all mail collection | |||||
Total RUs | 1,390 | 290 | 20.8% | 203 | 14.6% |
Total Pairs | 1,990 | 348 | 17.4% | 290 | 14.5% |
Reason for call | Spring 2000 (Panel 5 Round 1, Panel 4 Round 3, Panel 3 Round 5) | Fall 2000 (Panel 5 Round 2, Panel 4 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address change | 23 | 4.0 | 13 | 8.3 | 8 | 5.7 |
Appointment | 37 | 6.5 | 26 | 16.7 | 28 | 19.9 |
Request callback | 146 | 25.7 | 58 | 37.2 | 69 | 48.9 |
Refusal | 183 | 32.2 | 20 | 12.8 | 12 | 8.5 |
Willing to participate | 10 | 1.8 | 2 | 1.3 | 0 | 0.0 |
Other | 157 | 27.6 | 35 | 22.4 | 8 | 5.7 |
Report a respondent deceased | 5 | 0.9 | 1 | 0.6 | 0 | 0.0 |
Request a Spanish-speaking interview | 8 | 1.4 | 1 | 0.6 | 0 | 0.0 |
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 16 | 11.3 |
Total | 569 | 156 | 141 |
Reason for call | Spring 2001 (Panel 6 Round 1, Panel 5 Round 3, Panel 4 Round 5) | Fall 2001 (Panel 6 Round 2, Panel 5 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 27 | 3.7 | 17 | 12.7 | 56 | 15.7 |
Appointment | 119 | 16.2 | 56 | 41.8 | 134 | 37.5 |
Request callback | 259 | 35.3 | 36 | 26.9 | 92 | 25.8 |
No message | 8 | 1.1 | 3 | 2.2 | 0 | 0.0 |
Other | 29 | 4.0 | 7 | 5.2 | 31 | 8.7 |
Request SAQ help | 0 | 0.0 | 2 | 1.5 | 10 | 2.8 |
Special needs | 5 | 0.7 | 3 | 2.2 | 0 | 0.0 |
Refusal | 278 | 37.9 | 10 | 7.5 | 25 | 7.0 |
Willing to participate | 8 | 1.1 | 0 | 0.0 | 9 | 2.5 |
Total | 733 | 134 | 357 |
Reason for call | Spring 2002 (Panel 7 Round 1, Panel 6 Round 3, Panel 5 Round 5) | Fall 2002 (Panel 7 Round 2, Panel 6 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 28 | 4.5 | 29 | 13.9 | 66 | 16.7 |
Appointment | 77 | 12.5 | 71 | 34.1 | 147 | 37.1 |
Request callback | 210 | 34.0 | 69 | 33.2 | 99 | 25.0 |
No message | 6 | 1.0 | 3 | 1.4 | 5 | 1.3 |
Other | 41 | 6.6 | 17 | 8.2 | 10 | 2.5 |
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 30 | 7.6 |
Special needs | 1 | 0.2 | 0 | 0.0 | 3 | 0.8 |
Refusal | 232 | 37.6 | 14 | 6.7 | 29 | 7.3 |
Willing to participate | 22 | 3.6 | 5 | 2.4 | 7 | 1.8 |
Total | 617 | 208 | 396 |
Reason for call | Spring 2003 (Panel 8 Round 1, Panel 7 Round 3, Panel 6 Round 5) | Fall 2003 (Panel 8 Round 2, Panel 7 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 20 | 4.2 | 33 | 13.7 | 42 | 17.9 |
Appointment | 83 | 17.5 | 87 | 36.1 | 79 | 33.8 |
Request callback | 165 | 34.9 | 100 | 41.5 | 97 | 41.5 |
No message | 16 | 3.4 | 7 | 2.9 | 6 | 2.6 |
Other | 9 | 1.9 | 8 | 3.3 | 3 | 1.3 |
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 1 | 0.4 |
Special needs | 5 | 1.1 | 0 | 0.0 | 0 | 0.0 |
Refusal | 158 | 33.4 | 6 | 2.5 | 6 | 2.6 |
Willing to participate | 17 | 3.6 | 0 | 0.0 | 0 | 0.0 |
Total | 473 | 241 | 234 |
Reason for call | Spring 2004 (Panel 9 Round 1, Panel 8 Round 3, Panel 7 Round 5) | Fall 2004 (Panel 9 Round 2, Panel 8 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 8 | 1.6 | 26 | 13.2 | 42 | 10.9 |
Appointment | 67 | 13.3 | 76 | 38.6 | 153 | 39.7 |
Request callback | 158 | 31.5 | 77 | 39.1 | 139 | 36.1 |
No message | 9 | 1.8 | 5 | 2.5 | 16 | 4.2 |
Other | 8 | 1.6 | 5 | 2.5 | 5 | 1.3 |
Proxy needed | 5 | 1.0 | 2 | 1.0 | 0 | 0.0 |
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 2 | 0.5 |
Special needs | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Refusal | 228 | 45.4 | 6 | 3.0 | 27 | 7.0 |
Willing to participate | 19 | 3.8 | 0 | 0.0 | 1 | 0.3 |
Total | 502 | 197 | 385 |
Reason for call | Spring 2005 (Panel 10 Round 1, Panel 9 Round 3, Panel 8 Round 5) | Fall 2005 (Panel 10 Round 2, Panel 9 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 16 | 3.3 | 23 | 8.7 | 27 | 6.8 |
Appointment | 77 | 15.7 | 117 | 44.3 | 177 | 44.4 |
Request callback | 154 | 31.4 | 88 | 33.3 | 126 | 31.6 |
No message | 14 | 2.9 | 11 | 4.2 | 28 | 7.0 |
Other | 13 | 2.7 | 1 | 0.4 | 8 | 2.0 |
Proxy needed | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 1 | 0.3 |
Special needs | 1 | 0.2 | 1 | 0.4 | 0 | 0.0 |
Refusal | 195 | 39.8 | 20 | 7.6 | 30 | 7.5 |
Willing to participate | 20 | 4.1 | 3 | 1.1 | 2 | 0.5 |
Total | 490 | 264 | 399 |
Reason for call | Spring 2006 (Panel 11 Round 1, Panel 10 Round 3, Panel 9 Round 5) | Fall 2006 (Panel 11 Round 2, Panel 10 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 7 | 1.3 | 24 | 7.5 | 11 | 4.1 |
Appointment | 61 | 11.3 | 124 | 39.0 | 103 | 38.1 |
Request callback | 146 | 27.1 | 96 | 30.2 | 101 | 37.4 |
No message | 72 | 13.4 | 46 | 14.5 | 21 | 7.8 |
Other | 16 | 3.0 | 12 | 3.8 | 8 | 3.0 |
Proxy needed | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 4 | 0.7 | 0 | 0.0 | 0 | 0.0 |
Refusal | 216 | 40.1 | 15 | 4.7 | 26 | 9.6 |
Willing to participate | 17 | 3.2 | 1 | 0.3 | 0 | 0.0 |
Total | 539 | 318 | 270 |
Reason for call | Spring 2007 (Panel 12 Round 1, Panel 11 Round 3, Panel 10 Round 5) | Fall 2007 (Panel 12 Round 2, Panel 11 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 8 | 2.1 | 21 | 7.3 | 23 | 7.6 |
Appointment | 56 | 14.6 | 129 | 44.8 | 129 | 42.6 |
Request callback | 72 | 18.8 | 75 | 26.0 | 88 | 29.0 |
No message | 56 | 14.6 | 37 | 12.8 | 33 | 10.9 |
Other | 20 | 5.2 | 15 | 5.2 | 6 | 2.0 |
Proxy needed | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 5 | 1.3 | 0 | 0.0 | 1 | 0.3 |
Refusal | 160 | 41.8 | 10 | 3.5 | 21 | 6.9 |
Willing to participate | 6 | 1.6 | 1 | 0.3 | 2 | 0.7 |
Total | 383 | 288 | 303 |
Reason for call | Spring 2008 (Panel 13 Round 1, Panel 12 Round 3, Panel 11 Round 5) | Fall 2008 (Panel 13 Round 2, Panel 12 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 20 | 3.4 | 12 | 4.7 | 21 | 5.7 |
Appointment | 92 | 15.5 | 117 | 45.9 | 148 | 39.9 |
Request callback | 164 | 27.6 | 81 | 31.8 | 154 | 41.5 |
No message | 82 | 13.8 | 20 | 7.8 | 22 | 5.9 |
Other | 13 | 2.2 | 12 | 4.7 | 8 | 2.2 |
Proxy needed | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 4 | 0.7 | 0 | 0.0 | 0 | 0.0 |
Refusal | 196 | 32.9 | 13 | 5.1 | 18 | 4.9 |
Willing to participate | 24 | 4.0 | 0 | 0.0 | 0 | 0.0 |
Total | 595 | 255 | 371 |
Reason for call | Spring 2009 (Panel 14 Round 1, Panel 13 Round 3, Panel 12 Round 5) | Fall 2009 (Panel 14 Round 2, Panel 13 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 10 | 2.2 | 13 | 4.3 | 19 | 5.1 |
Appointment | 49 | 10.8 | 87 | 29.0 | 153 | 41.1 |
Request callback | 156 | 34.4 | 157 | 52.3 | 153 | 41.1 |
No message | 48 | 10.6 | 23 | 7.7 | 20 | 5.4 |
Other | 3 | 0.7 | 8 | 2.7 | 3 | 0.8 |
Proxy needed | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 4 | 0.9 | 0 | 0.0 | 0 | 0.0 |
Refusal | 183 | 40.3 | 11 | 3.7 | 24 | 6.5 |
Willing to participate | 1 | 0.2 | 1 | 0.3 | 0 | 0.0 |
Total | 454 | 300 | 372 |
Reason for call | Spring 2010 (Panel 15 Round 1, Panel 14 Round 3, Panel 13 Round 5) | Fall 2010 (Panel 15 Round 2, Panel 14 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 2 | 0.8 | 42 | 8.2 | 25 | 5.3 |
Appointment | 44 | 18.0 | 214 | 41.6 | 309 | 66.0 |
Request callback | 87 | 35.7 | 196 | 38.1 | 46 | 9.8 |
No message | 17 | 7.0 | 33 | 6.4 | 17 | 3.6 |
Other | 7 | 2.9 | 8 | 1.6 | 14 | 3.0 |
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 12 | 2.6 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 1 | 0.2 |
Special needs | 1 | 0.4 | 1 | 0.2 | 1 | 0.2 |
Refusal | 86 | 35.2 | 20 | 3.9 | 43 | 9.2 |
Willing to participate | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Total | 244 | 514 | 468 |
Reason for call | Spring 2011 (Panel 16 Round 1, Panel 15 Round 3, Panel 14 Round 5) | Fall 2011 (Panel 16 Round 2, Panel 15 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 16 | 3.4 | 46 | 8.0 | 72 | 9.8 |
Appointment | 175 | 37.6 | 407 | 71.0 | 466 | 63.5 |
Request callback | 81 | 17.4 | 63 | 11.0 | 69 | 9.4 |
No message | 24 | 5.2 | 26 | 4.5 | 23 | 3.1 |
Other | 12 | 2.6 | 8 | 1.4 | 25 | 3.4 |
Request SAQ help | 1 | 0.2 | 2 | 0.3 | 32 | 4.4 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 46 | 6.3 |
Special needs | 0 | 0.0 | 0 | 0.0 | 1 | 0.1 |
Refusal | 157 | 33.7 | 21 | 3.7 | 0 | 0.0 |
Willing to participate | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Total | 466 | 573 | 734 |
Reason for call | Spring 2012 (Panel 17 Round 1, Panel 16 Round 3, Panel 15 Round 5) | Fall 2012 (Panel 17 Round 2, Panel 16 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 18 | 5.0 | 107 | 13.4 | 108 | 12.2 |
Appointment | 130 | 36.1 | 517 | 64.9 | 584 | 65.8 |
Request callback | 60 | 16.7 | 94 | 11.8 | 57 | 6.4 |
No message | 21 | 5.8 | 17 | 2.1 | 18 | 2.0 |
Other | 10 | 2.8 | 25 | 3.1 | 16 | 1.8 |
Proxy needed | 0 | 0.0 | 1 | 0.1 | 2 | 0.2 |
Request SAQ help | 2 | 0.6 | 6 | 0.8 | 42 | 4.7 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 1 | 0.3 | 0 | 0.0 | 0 | 0.0 |
Refusal | 117 | 32.5 | 30 | 3.8 | 60 | 6.8 |
Willing to participate | 1 | 0.3 | 0 | 0.0 | 0 | 0.0 |
Total | 360 | 797 | 887 |
Reason for call | Spring 2013 (Panel 18 Round 1, Panel 17 Round 3, Panel 16 Round 5) | Fall 2013 (Panel 18 Round 2, Panel 17 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 18 | 4.4 | 82 | 10.8 | 53 | 9.0 |
Appointment | 143 | 35.0 | 558 | 73.0 | 370 | 62.6 |
Request callback | 71 | 17.4 | 88 | 11.5 | 70 | 11.8 |
No message | 8 | 2.0 | 11 | 1.4 | 16 | 2.8 |
Other | 2 | 0.5 | 4 | 0.5 | 5 | 0.9 |
Proxy needed | 1 | 0.2 | 1 | 0.1 | 1 | 0.2 |
Request SAQ help | 1 | 0.2 | 0 | 0.0 | 31 | 5.3 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 2 | 0.5 | 0 | 0.0 | 2 | 0.3 |
Refusal | 162 | 39.5 | 19 | 2.5 | 43 | 7.3 |
Willing to participate | 1 | 0.2 | 1 | 0.1 | 0 | 0.0 |
Total | 409 | 764 | 591 |
Reason for call | Spring 2014 (Panel 19 Round 1, Panel 18 Round 3, Panel 17 Round 5) | Fall 2014 (Panel 19 Round 2, Panel 18 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 11 | 3.2 | 71 | 11.1 | 62 | 8.4 |
Appointment | 75 | 22.1 | 393 | 61.5 | 490 | 66.5 |
Request callback | 70 | 20.6 | 113 | 17.7 | 70 | 9.5 |
No message | 11 | 3.2 | 12 | 1.9 | 28 | 3.9 |
Other | 0 | 0.0 | 5 | 0.8 | 7 | 0.9 |
Proxy needed | 0 | 0.0 | 0 | 0.0 | 1 | 0.1 |
Request SAQ help | 0 | 0.0 | 1 | 0.2 | 4 | 0.5 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Refusal | 165 | 48.5 | 44 | 6.9 | 74 | 10.0 |
Willing to participate | 8 | 2.4 | 0 | 0.0 | 1 | 0.1 |
Total | 340 | 639 | 737 |
Reason for call | Spring 2015 (Panel 20 Round 1, Panel 19 Round 3, Panel 18 Round 5) | Fall 2015 (Panel 20 Round 2, Panel 19 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 10 | 2.3 | 61 | 8.8 | 55 | 9.6 |
Appointment | 95 | 21.8 | 438 | 63.4 | 346 | 60.7 |
Request callback | 85 | 19.5 | 112 | 16.2 | 52 | 9.1 |
No message | 14 | 3.2 | 17 | 2.5 | 4 | 0.7 |
Other | 2 | 0.5 | 3 | 0.4 | 3 | 0.5 |
Proxy needed | 1 | 0.2 | 7 | 1.0 | 8 | 1.4 |
Request SAQ help | 1 | 0.2 | 3 | 0.4 | 11 | 1.9 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Refusal | 206 | 47.2 | 47 | 6.8 | 91 | 16.0 |
Willing to participate | 22 | 5.0 | 3 | 0.4 | 0 | 0.0 |
Total | 436 | 691 | 570 |
Reason for call | Spring 2016 (Panel 21 Round 1, Panel 20 Round 3, Panel 19 Round 5) | Fall 2016 (Panel 21 Round 2, Panel 20 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 8 | 2.7 | 64 | 11.7 | 48 | 7.9 |
Appointment | 93 | 30.9 | 362 | 66.2 | 373 | 61.7 |
Request callback | 47 | 15.6 | 59 | 10.8 | 83 | 13.7 |
No message | 1 | 0.3 | 7 | 1.3 | 6 | 1.0 |
Other | 2 | 0.7 | 1 | 0.2 | 3 | 0.5 |
Proxy needed | 0 | 0.0 | 5 | 0.9 | 6 | 1.0 |
Request SAQ help | 0 | 0.0 | 3 | 0.5 | 11 | 1.8 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 1 | 0.3 | 0 | 0.0 | 0 | 0.0 |
Refusal | 139 | 46.2 | 46 | 8.4 | 75 | 12.4 |
Willing to participate | 10 | 3.3 | 0 | 0.0 | 0 | 0.0 |
Total | 301 | 547 | 605 |
Reason for call | Spring 2017 (Panel 22 Round 1, Panel 21 Round 3, Panel 20 Round 5) | Fall 2017 (Panel 22 Round 2, Panel 21 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 10 | 2.9 | 51 | 9.6 | 35 | 6.8 |
Appointment | 86 | 24.9 | 355 | 66.6 | 318 | 61.4 |
Request callback | 59 | 17.1 | 90 | 16.9 | 64 | 12.4 |
No message | 1 | 0.3 | 2 | 0.4 | 5 | 1.0 |
Other | 2 | 0.6 | 3 | 0.6 | 4 | 0.8 |
Proxy needed | 1 | 0.3 | 7 | 1.3 | 5 | 1.0 |
Request SAQ help | 1 | 0.3 | 0 | 0.0 | 15 | 2.9 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 0 | 0.0 | 1 | 0.2 | 1 | 0.2 |
Refusal | 172 | 49.7 | 23 | 4.3 | 70 | 13.5 |
Willing to participate | 14 | 4.0 | 1 | 0.2 | 1 | 0.2 |
Total | 346 | 533 | 518 |
Reason for call | Spring 2018 (Panel 23 Round 1, Panel 22 Round 3, Panel 21 Round 5) | Fall 2018 (Panel 23 Round 2, Panel 22 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 5 | 1.3 | 37 | 7.9 | 38 | 7.3 |
Appointment | 59 | 15.4 | 318 | 68.1 | 335 | 63.9 |
Request callback | 50 | 13.1 | 50 | 10.7 | 60 | 11.5 |
No message | 4 | 1.0 | 5 | 1.1 | 1 | 0.2 |
Other | 0 | 0.0 | 1 | 0.2 | 3 | 0.6 |
Proxy needed | 2 | 0.5 | 4 | 0.9 | 6 | 1.1 |
Request SAQ help | 0 | 0.0 | 1 | 0.2 | 15 | 2.9 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Special needs | 1 | 0.3 | 0 | 0.0 | 0 | 0.0 |
Refusal | 211 | 55.1 | 46 | 9.9 | 61 | 11.6 |
Willing to participate | 51 | 13.3 | 5 | 1.1 | 5 | 1.0 |
Total | 383 | 467 | 524 |
Reason for call | Spring 2019 (Panel 24 Round 1, Panel 23 Round 3, Panel 22 Round 5) | Fall 2019 (Panel 24 Round 2, Panel 23 Round 4) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2 and 4 | ||||
N | % | N | % | N | % | |
Address/telephone change | 5 | 1.5 | 36 | 7.4 | 30 | 5.6 |
Appointment | 59 | 17.2 | 328 | 67.5 | 344 | 64.8 |
Request callback | 39 | 11.4 | 56 | 11.5 | 56 | 10.5 |
No message | 2 | 0.6 | 4 | 0.8 | 7 | 1.3 |
Other | 2 | 0.6 | 4 | 0.8 | 0 | 0.0 |
Proxy needed | 2 | 0.6 | 6 | 1.2 | 11 | 2.1 |
Request SAQ help | 0 | 0.0 | 2 | 0.4 | 5 | 0.9 |
SAQ refusal | 0 | 0.0 | 48 | 9.9 | 0 | 0.0 |
Special needs | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 |
Refusal | 185 | 53.9 | 0 | 0.0 | 78 | 14.7 |
Willing to participate | 49 | 14.3 | 2 | 0.4 | 0 | 0.0 |
Total | 353 | 486 | 531 |
Reason for call | Spring 2020 (Panel 25 Round 1, Panel 24 Round 3, Panel 23 Round 5) | Fall 2020 (Panel 25 Round 2, Panel 24 Round 4, Panel 23 Round 6) | ||||
---|---|---|---|---|---|---|
Round 1 | Rounds 3 and 5 | Rounds 2, 4, and 6 | ||||
N | % | N | % | N | % | |
Address/telephone change | 5 | 0.9 | 37 | 6.3 | 28 | 2.4 |
Appointment | 142 | 24.2 | 332 | 56.1 | 278 | 23.9 |
Request callback | 102 | 17.4 | 121 | 20.4 | 276 | 23.7 |
No message | 22 | 3.8 | 18 | 3.0 | 60 | 5.2 |
Other | 2 | 0.3 | 5 | 0.8 | 5 | 0.4 |
Proxy needed | 6 | 1.0 | 3 | 0.5 | 10 | 0.9 |
Request SAQ help | 0 | 0.0 | 1 | 0.2 | 35 | 3.0 |
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 1 | 0.1 |
Special needs | 0 | 0.0 | 0 | 0.0 | 1 | 0.1 |
Refusal | 209 | 35.7 | 62 | 10.5 | 203 | 17.5 |
Willing to participate | 98 | 16.7 | 13 | 2.2 | 266 | 22.9 |
Total | 586 | 592 | 1,163 |
Date | Description |
---|---|
1/2/2020 | UEGN3562.03: Delivery of the 2nd set 2018 Pre-Imputation Files: ER, HS, MVE and OP |
1/2/2020 | WGTS5009.01: Delivery of Person-Level Use PUF Weight, Single Panel Person Weight, and MSA18_13 Variables for FY18 |
1/3/2020 | EMPL2210.01: Unweighted Medians for the 2019 Point-in-Time Hourly Wage Variable |
1/3/2020 | HINS1297.01: Delivery of the 2018 HINS Building Block Variables and COVERM Tables for Panel 22 Rounds 3 - 5 and Panel 23 Rounds 1 – 3 |
1/3/2020 | HINS1298.01: Delivery of the 2018 HINS Month-by-Month, Tricare plan, Private, Medicare, and Medicaid HMO/Gatekeeper, and PMEDIN/DENTIN Variables |
1/3/2020 | HINS1299.01: Delivery of the FY 2018 HINS Medicare Part D supplemental variables |
1/3/2020 | HLTH1043.02: Redelivery of 2018 BMI Cross-tabulations and Frequencies |
1/3/2020 | HLTH1044.01: Delivery of Adult and Child Height and Weight for the MEPS Master Files for FY 2018 |
1/3/2020 | PRPL0133.24: FY18 PRPL Specifications Coverage Record and HMO Variables, JOBS Link and Variable Editing, and Variable Editing: Post JOBS Linking |
1/3/2020 | PCND0156.01: 2018 Person-Level Priority Conditions Cross-Tabulations
1/6/2020 | EMPL2211.01: Full Year 2018 Jobs File Codebook Format Question |
1/6/2020 | PRPL0134.01: Output and Frequencies from 2018 PRPL Program #1 |
1/6/2020 | UEGN3573.01: Feedback on RTI's FY2018 MPC Test Files
1/7/2020 | PRPL0133.26: FY18 PRPL Specifications Coverage Record and HMO Variables, JOBS Link and Variable Editing, and Variable Editing: Post JOBS Linking |
1/7/2020 | UEGN3574.01: The 2018 Utilization Standard Error Benchmarking Tables Using Person Use PUF Weights Revised- PERWT18P |
1/8/2020 | ADMN0911.01: Delivery of 2018 FAMID Variables and CPS Family Identifier |
1/8/2020 | DOCM0678.02: Delivery of the 2019 MPC files for Sample selection - Wave 1 |
1/8/2020 | DOCM0679.02: Delivery of the 2019 PC Sample file - Wave 1 |
1/9/2020 | DOCM0678.03: Delivery of the 2019 MPC files for Sample selection - Wave 1 supplemental
1/9/2020 | PRPL0133.28: FY18 PRPL Specifications Coverage Record and HMO Variables, JOBS Link and Variable Editing, and Variable Editing: Post JOBS Linking |
1/10/2020 | DOCM0681.01: Delivery of the 2020 NPI Provider Directory from the Panel 25 MEPS Laptop |
1/10/2020 | GNRL2189.01: FY 2018 (Panel 22 and Panel 23) Snapshots of HC Source Tables Including the CONDX, JOBSX, SAQ, and DCS Tables |
1/14/2020 | GNRL2193.01 and GNRL2194.01: Delivery of End-Of-Round files (RU level and Person level) -P23R4 |
1/14/2020 | GNRL2193.02 and GNRL2194.02: Delivery of End-Of-Round files (RU level and Person-level) -P24R2 |
1/14/2020 | GNRL3003.01: NCHS Checklist and Preliminary Version of the 2018 JOBS File Delivery Document for Review |
1/14/2020 | GNRL3002.01: NCHS Checklist and FY2018 Use PUF Preliminary Delivery Document |
1/15/2020 | DEMO1016.02: Delivery of the Output Listings for Final Case Review of the MOPID and DAPID Variables’ Construction for FY2018 |
1/15/2020 | PCND0156.02: Redelivery for 2018 Person-Level Priority Conditions Cross-Tabulations |
1/15/2020 | UEPD1210.06: 2018 INSURC18 variable for use in the Prescribed Medicines Imputation |
1/17/2020 | COND0979.01: 2018 Preliminary Conditions File Specifications |
1/17/2020 | PRPL0135.01: FY18 PRPL Specifications for the OOPELIG, Imputation and final file creation programs |
1/21/2020 | EMPL2212.01: Full Year 2018 Jobs File Codebook Information |
1/21/2020 | PRPL0134.04: Output and Frequencies from 2018 PRPL Program #1 |
1/21/2020 | PRPL0136.01: Output and Frequencies from 2018 PRPL Program #2 |
1/22/2020 | GNRL3004.01: Preliminary Versions of the Codebook and Delivery Document of the FY 2018 Use PUF for Use in AHRQ and NCHS Review |
1/22/2020 | GNRL3005.01: Preliminary Version of the 2018 JOBS File Codebook and Delivery Document for AHRQ and NCHS Review |
1/22/2020 | PRPL0134.06: Output and Frequencies from 2018 PRPL Program #1 |
1/23/2020 | UEGN 2803.01: 2018 High payment to charge ratio HHA events |
1/23/2020 | WGTS1874.01: Creation of the Delivery File for the 2017 PIT P21R3/P22R1 Preliminary Individual Panel Person and Family Weights and Preliminary (draft) Variance Strata and PSU |
1/23/2020 | WGTS1876.01: Creation of the Delivery Files for the 2017 PIT P21R3/P22R1: PUF and Internal Files. These files cover the person- and family-level weights, location, and variance estimation variables |
1/24/2020 | PRPL0136.10: Output and Frequencies from 2018 PRPL Program #2 |
1/27/2020 | DEMO1016.03: Delivery of the MOPID and DAPID Variables for FY2018 |
1/28/2020 | GNRL3005.02: Final Versions of the 2018 JOBS File Codebook and Delivery Document |
1/28/2020 | GNRL3004.02: Final Versions of the 2018 Use PUF Codebook and Document for Use in AHRQ and NCHS Review |
2/3/2020 | ADMN0915.01: ADMN-DEMO Basic Edit Specs delivery |
2/3/2020 | PRPL0137.01: Output and Frequencies from 2018 PRPL Program #3a |
2/4/2020 | PRPL0137.02: Output from 2018 PRPL Program #3a – Duplicate Union/Employer Review Spreadsheets |
2/6/2020 | COND0979.23: 2018 Preliminary Conditions File Specifications |
2/7/2020 | WGTS1951.01: Panel 24 DU weights review output |
2/10/2020 | UEGN3571.02: Variable List for the 2018 MPC EXP PUF files (OP, OB, IP and ER) |
2/11/2020 | WGTS1966.01: Panel 24 Round 1 Family weight review output |
2/11/2020 | EMPL2215.01: PIT2019 Panel 24 Round 1/Panel 23 Round 3 Editing of High Wage Outliers – Request for Approval |
2/12/2020 | WGTS1967.01: Panel 24/Round 1 person weights review output |
2/12/2020 | WGTS1969.01: Panel 23/Round 3 person weights review output to AHRQ |
2/13/2020 | WGTS1974.01: Panel 23/Round 3 family weight review output to AHRQ |
2/14/2020 | GNRL3007.01: HC-203: 2018 Jobs Public Use File Delivery for Web Release |
2/14/2020 | WGTS1879.01: Derivation of the 2016 Full Year Expenditure Family Weight, MEPS and CPS-Like, for Panel 20 and Panel 21 Combined |
2/14/2020 | WGTS1946.01: Deriving Location Variables (Region and MSA) for Panels 22 and 23, Full Year 2018, based on GEO FIPS Codes, using OMB MSA definitions of both Year 2018 and the Current (2019) Year |
2/17/2020 | WGTS1901.01: MEPS Panel 22 Round 3 - Creation of Family-Level Weights |
2/17/2020 | WGTS1918.01: MEPS, Combined Panel 22/Round 3 and Panel 23/Round 1, Computation of the Composite Family Weights |
2/17/2020 | WGTS1939.01: Derivation of the 2017 Full Year Expenditure Family Weight, MEPS and CPS-Like, for Panel 21 and Panel 22 Combined |
2/17/2020 | WGTS1952.01: MEPS Computation of the Person and Family Poststratification Control Totals for March 2019 from the March 2019 CPS (including the poverty level variable) |
2/18/2020 | WGTS1953.01: MEPS Computation of the Person and Family Poststratification Control Totals for December 2018 from the March 2019 CPS (including the poverty level variable) |
2/18/2020 | WGTS1956.01: Creation of CPS Control Total Files Containing the Raking Dimensions for the Full Year 2018 USE Person Weights |
2/18/2020 | WGTS1959.01: Developing Panel 23 Self-Administered Questionnaire (SAQ) Use Weights for Full Year 2018 |
2/19/2020 | GNRL3006.01: HC-204: Delivery of the Full Year 2018 Use PUF for Web Release |
2/19/2020 | GNRL3008.01: Preliminary Version of the 2019 Point-in-Time File |
2/19/2020 | WGTS1947.01: Derivation of the Annualized MEPS Families and Identification of the Responding MEPS Families for MEPS Panel 23 Full Year 2018 |
2/19/2020 | WGTS1951.01: MEPS Panel 24 Round 1 – DU Level Weights |
2/19/2020 | WGTS1960.01: Developing Panel 22 Self-Administered Questionnaire (SAQ) Use Weights for Full Year 2018 |
2/19/2020 | WGTS1961.01: Deriving location variables (Region and MSA) for Panel 24 Round 1, based on GEO FIPS Codes, using the OMB MSA definitions of both year 2013 and the most recent OMB MSA updates |
2/19/2020 | WGTS1962.01: Creation of CPS Control Total Files Containing the Raking Dimensions for the Full Year 2018 Self-Administered Questionnaire (SAQ) Use Person Weight |
2/19/2020 | WGTS1964.01: Developing Sample Weights for the MEPS Self-Administered Questionnaire (SAQ) for the Panels 22 and 23 Full Year 2018 Use File (PUF), and Creating the Full Year 2018 Person Use SAQ Weights Delivery File |
2/19/2020 | WGTS1966.01: MEPS Panel 24 Round 1 – Family-Level Weights |
2/19/2020 | WGTS1967.01: MEPS Panel 24 Round 1 – Person-Level Weights |
2/19/2020 | WGTS1974.01: MEPS Panel 23 Round 3 - Creation of Family-Level Weights |
2/19/2020 | WGTS1975.01: Creation of the Delivery File for the 2019 PIT P23R3/P24R1 Preliminary Individual Panel Person and Family Weights |
2/20/2020 | UEGN 2794.01: 2018 HHA Outlier Donor and Recipient Events |
2/20/2020 | WGTS1949.01_Do_Not_Email: Derivation of MEPS Panel 22 Full Year 2018 Person Use Weights (Rounds 3-5) |
2/20/2020 | WGTS1954.01: Create the P22P23 Full Year 2018 “Base Weight” and the Location Variable Delivery File |
2/20/2020 | WGTS1976.01: Delivery File Providing a Linkage between the Person Records Sampled for MEPS Panel 24 and the Person Records in the 2018 NHIS Weights File |
2/20/2020 | WGTS1977.01: PIT 2019 P23R3/P24R1 Person weights review output |
2/21/2020 | PRPL0138.01: 2018 PRPL Review of JOBID QC |
2/21/2020 | WGTS1978.01: PIT 2019 P23R3/P24R1 Family weights review output to AHRQ |
2/24/2020 | GNRL3009.01: FY 2018 Person-Level Consolidated PUF Variable List Changes for AHRQ Review |
2/26/2020 | WGTS1938.01: Food Security Weights for MEPS Panels 21 and 22 Full Year 2017 |
2/26/2020 | WGTS1955.01: Derivation of the annualized MEPS Families and Identification of the Responding MEPS Families for the Panel 22 Full Year 2018 |
2/27/2020 | UEGN2802.01: 2018 Household reported OP and HS events with unexpected expenditures |
2/27/2020 | WGTS1923.01: Raking Panels 21 and 22 (Panel 21/rounds 3-5 and Panel 22/rounds 1-3) Separately for the Individual Panel Full Year 2017 Person-Level Weights Including the Poverty Status |
2/27/2020 | WGTS1958.01: MEPS: Establishing Variance Estimation Strata and PSUs, and Estimating Standard Errors Using SUDAAN for the Full Year 2018 PUF, Panel 22, Rounds 3-5 and Panel 23, Rounds 1-3 |
2/28/2020 | EMPL2216.02: Point-In-Time 2019 Hourly Wage Top Code Value |
2/28/2020 | EMPL2217.01: Employment Person-Level Variable Specifications for the Full Year 2019 Population Characteristics/Consolidated PUFs |
2/28/2020 | WGTS5010.01: Delivery of 2019 Point-in-Time Person-Level and Family-Level |
3/2/2020 | EMPL2216.04: Point-In-Time 2019 Hourly Wage Top Code Value |
3/2/2020 | HLTH1046.03: Delivery of VA-SAQ Datasets for FY 2018 |
3/2/2020 | PRPL0139.01: Output and Frequencies from 2018 PRPL Program # 3b |
3/3/2020 | GNRL3009.02: FY 2018 Person-Level Consolidated PUF Variable List Changes for AHRQ Review |
3/3/2020 | HINS1301.01: 2019 HINS Point-in-Time Delivery Preliminary Data File for Benchmarking |
3/3/2020 | PRPL0140.01: Output and Frequencies from 2018 PRPL Program #4 |
3/3/2020 | UEGN 2799.01: 2018 Benchmark Tables: Initial Delivery |
3/3/2020 | UEGN3577.01: The 2018 DN/HHP/OM/HHA Events Final Imputation Files |
3/4/2020 | ADMN0916.01: Point-in-Time 2019 Weighted ADMN-DEMO Crosstabs delivery |
3/4/2020 | WGTS1945.01: Delivery Files for the FY 2017 Individual Panel Expenditure Person-Level Weights, Panel 21 and Panel 22 |
3/4/2020 | WGTS1948.01_Do_Not_Email: Derivation of the MEPS Panel 23 Full Year 2018 Person Use Weights (Rounds 1-3) |
3/4/2020 | WGTS1968.01: Panel 22 Full Year 2018: Derivation of Eligibility and Response Indicators for the CPS-like Families |
3/5/2020 | COND0979.32: 2018 Preliminary Conditions File Specifications |
3/6/2020 | CODE0914.01: 2018 File of GEO Coded Addresses for the MEPS Master Files |
3/6/2020 | EMPL2218.01: Delivery of the Pre-Top Coded Version of the Point-in-Time Hourly Wage Variables for 2019 Point-in-Time |
3/9/2020 | INCO0751.01: Delivery of the 2019 NHIS Link File |
3/9/2020 | WGTS5012.01: Delivery of MEPS Panel 24 DU Weighting Master File |
3/10/2020 | GNRL3011.01: NCHS Checklist and Preliminary Version of the 2019 Point-in-Time Delivery Document |
3/10/2020 | HLTH1046.05: Full Year 2019 HLTH Basic Edit Specifications |
3/10/2020 | WGTS5011.01: Internal Use File Used for the Weights Development for 2019 Point-in-Time |
3/11/2020 | UEGN 2799.02: 2018 Benchmark Tables: Second Delivery
3/11/2020 | UEGN3577.02: The 2018 MVN Final Imputation File |
3/12/2020 | CODE0916.01: PMED - Delivery of FY18 PMED Authority Table for the CLIN 0003K Longitudinal Survey Tool Discussion |
3/12/2020 | CODE1241.01: MEPS Delivery of Updated Internal Conditions Files for 2016 and 2017 |
3/13/2020 | GNRL1993.02: HC-201: Full Year 2017 Consolidated Use, Expense, and Insurance PUF Delivery for Web Release – Updated |
3/13/2020 | HINS1303.01: Delivery of the Revised Specifications for the FY2019 HINS Variables |
3/13/2020 | PRPL0140.08: Output and Frequencies from 2018 PRPL Program #4 |
3/14/2020 | EMPL2217.04: Employment Person-Level Variable Specifications for the Full Year 2019 Population Characteristics/Consolidated PUFs |
3/16/2020 | DSDY0059.01: Delivery of the DSDY Variable Specifications FY19 for AHRQ Approval |
3/18/2020 | ACCS0190.01: 2019 ACCS Constructed Variable Specifications |
3/18/2020 | EMPL2217.06: Employment Person-Level Variable Specifications for the Full Year 2019 Population Characteristics/Consolidated PUFs |
3/18/2020 | GNRL3013.01: Preliminary Versions of the Codebook and Delivery Document of the 2019 Point-in-Time PUF for Use in AHRQ and NCHS Review |
3/18/2020 | WGTS1980.01: FY2018 combined Panels expenditure person weight review output |
3/19/2020 | PRPL0140.17: Output and Frequencies from 2018 PRPL Program #4 |
3/23/2020 | DSDY0059.03: Delivery of the DSDY Variable Specifications FY19 for AHRQ Approval |
3/24/2020 | EMPL2217.09: Employment Person-Level Variable Specifications for the Full Year 2019 Population Characteristics/Consolidated PUFs |
3/24/2020 | HLTH1045.15: Delivery of VA-SAQ Frequencies prior to variable construction for FY 2018 |
3/25/2020 | PRPL0137.09: Output and Frequencies from 2018 PRPL Program #3a |
3/25/2020 | WGTS1963.01: MEPS Panels 22 and 23 Full Year 2018: Combine and Rake the P22 and P23 Weights to Obtain the P22P23FY18 Person-Level USE Weights |
3/25/2020 | WGTS1981.01: MEPS Panel 24 Round 1 – Creation of DU Weighting Master File Delivery |
3/26/2020 | WGTS1969.01: MEPS Panel 23 Round 3 - Creation of Person-Level Weights |
3/27/2020 | HINS1303.06: Delivery of the Revised Specifications for the FY2019 HINS Variables |
3/30/2020 | DSDY0060: FY 2019 Disability Days Basic Edit Specifications |
3/30/2020 | PCND0158.01: 2019 PCND Constructed Variable Specification |
3/31/2020 | WGTS1977.01: MEPS, Combined Panel 23/Round 3 and Panel 24/Round 1, Computation of the Composite Person Weights |
4/1/2020 | HINS1306.01: Changes to the FY 2019 HINS Basic and Inter-Round Edit specifications |
4/2/2020 | PRPL0142.01: OOPIMPCT settings on State Exchange records linked to JOBS |
4/3/2020 | ADMN0917.01: ADMN/DEMO FY19 constructed variable specs delivery |
4/3/2020 | WGTS5013.01: Delivery of the FY 2018 Expenditure File Original Person Weight |
4/3/2020 | WGTS1980.01: Panel 22 and Panel 23 Combined, Full Year 2018: Raking Person Weights Including the Poverty Status to Obtain the Expenditure Person Weights |
4/6/2020 | EMPL2219.01: Full Year 2019 Employment Source Variable Editing Specifications |
4/7/2020 | DOCM0678.04: Delivery of the 2019 MPC files for Sample selection - Wave 2 |
4/7/2020 | DOCM0679.05: Delivery of the 2019 PC Sample file - Wave 2 |
4/8/2020 | COND0982.01: Delivery of the Specifications for the FY 2018 Conditions PUF |
4/8/2020 | UEGN 2799.03: 2018 Benchmark Tables: Final Delivery |
4/8/2020 | UEGN3577.03: The 2018 Final Imputation Files: ER, HS, MVE, OP and SBD
4/9/2020 | PRPL0143.01: Delivery of the FY 2018 OOPELIG2 Dataset for Approval |
4/10/2020 | GNRL3014.01: HC205: 2019 Point-in-Time PUF Delivery for Web Release |
4/14/2020 | DSDY0060.03: FY 2019 Disability Days Basic Edit Specifications |
4/14/2020 | GNRL3015.01: NCHS Checklists and Preliminary Versions of Documents for the FY 2018 Non-MPC Event (DV, OM, and HH) PUFs |
4/14/2020 | GNRL3016.01: NCHS Checklist and Preliminary Version of the 2018 Conditions File Delivery Document and Recode Materials for Review |
4/14/2020 | PCND0158.05: 2019 PCND Constructed Variable Specification |
4/16/2020 | HINS1306.04: Changes to the FY 2019 HINS Basic and Inter-Round Edit specifications |
4/16/2020 | UEGN3571.03: 2018 MPC EXP PUF files (OP, OB, ER and IP): Guidance for disclosure risk recoding |
4/16/2020 | UEGN3578.01: The Insurance Coverage Variable INSCV7&YY in the (YY)/(Prior YY) QC Finding Tables of PUF Event Expenditures |
4/17/2020 | HINS1306.09: Changes to the FY 2019 HINS Basic and Inter-Round Edit specifications |
4/20/2020 | UEGN3574.04: The 2018 Utilization Standard Error Benchmarking Tables Using Expenditure File Person Original Weight - PERWT18F
4/20/2020 | WGTS1970.01: Panel 23 Full Year 2018: Derivation of Eligibility and Response Indicators for the CPS-like Families
4/22/2020 | GNRL3018.01: FY 2018 Preliminary Conditions File, Codebook and Delivery Document
4/23/2020 | GNRL4003.01: Delivery of the File Containing Variables Recoded or Dropped from the 2019 Point-In-Time PUF Due to DRB Review – P23/P24 |
4/23/2020 | PRPL0144.01: Delivery of the FY 2018 PRPL Hot Deck Imputation Results for Approval |
4/23/2020 | PRPL0145.01: Revising Program 3a for Testing – JOBS linking |
4/24/2020 | GNRL4004.01: Delivery of the File Containing Variables Recoded or Dropped from the USE PUF Due to DRB Review – P22/P23 |
4/27/2020 | WGTS1987.01: Full Year 2018 combined Panels SAQ expenditure person weight for the Consolidated PUF, Review Output |
4/30/2020 | PRPL0144.07: Delivery of the FY 2018 PRPL Hot Deck Imputation Results for Approval - donors used multiple times |
4/30/2020 | PRPL0144.09: Delivery of the FY 2018 PRPL Hot Deck Imputation Results for Approval |
4/30/2020 | UEGN2808.01: 2018 Predictive Mean Matching Imputation Method Applied to the Expenditure Imputation of the non-MPC Event Types |
4/30/2020 | UEGN2809.01: 2018 Predictive Mean Matching Imputation Method Applied to the Expenditure Imputation of the MPC Event Types |
5/4/2020 | COND0983.01: 2019 Conditions Basic Edit Specifications |
5/4/2020 | PCND0159.01: 2019 PCND Basic Edit Specifications |
5/5/2020 | PRPL0144.02: Delivery of the FY 2018 PRPL Hot Deck Imputation Results for Approval - donors used multiple times |
5/5/2020 | WGTS1982.01: FY2018 Consolidated PUF Family Weights Review Output |
5/6/2020 | ACCS0191.01: 2019 ACCS Basic Edit Specifications |
5/6/2020 | COND0982.03: Delivery of the Specifications for the FY 2018 Conditions PUF |
5/12/2020 | COND0983.05: 2019 Conditions Basic Edit Specifications |
5/12/2020 | GNRL3019.01: NCHS Checklists and Preliminary Versions of Documents for the FY 2018 MPC Event (IP, ER, OP, OB) PUFs |
5/13/2020 | WGTS1985.01: Full Year 2018 Panel 22 SAQ Expenditure person weight review output |
5/13/2020 | WGTS1986.01: Full Year 2018 Panel 23 SAQ Expenditure person weight review output |
5/14/2020 | PRPL0144.35: Delivery of FY 2018 PRPL Test Hot Deck Imputation Results for Review |
5/15/2020 | GNRL3020.01: HC-206b, HC-206c, and HC-206h: 2018 Expenditure Event PUFs for Non-MPC Event Types (DV, OM, and HH) and All Related Files for Web Release |
5/15/2020 | PCND0159.04: 2019 PCND Basic Edit Specifications |
5/15/2020 | WGTS5014.01: Delivery of the Individual Panel Raked Person Weights for P22/P23 FY18 |
5/20/2020 | COND0984.01: FY 2018 Preliminary CLNK File |
5/20/2020 | GNRL3022.01: Preliminary Versions of the 2018 MPC Event (IP, ER, OP, OB) PUF Codebooks and Documents for Use in AHRQ and NCHS Review |
5/20/2020 | PRPL0144.39: Delivery of FY 2018 PRPL Test Hot Deck Imputation Results for Review |
5/20/2020 | UEPD1215.02: Delivery of the 2018 PMED PUF (RX18V01 and RX18V02) |
5/20/2020 | UEPD1215.03: Delivery of 2018 PMED PUF (TC18XTABS.lst, TC18XTABS.xml) |
5/21/2020 | PRPL0144.42: Delivery of FY 2018 PRPL Additional Hot Deck Imputation Results for Review |
5/22/2020 | PCND0159.06: 2019 PCND Basic Edit Specifications |
5/27/2020 | PRPL0144.52: Delivery of FY 2018 PRPL Revised Hot Deck Imputation Results for Review – Version 3, Final |
5/27/2020 | PRPL0144.55: Delivery of FY 2018 PRPL Revised Hot Deck Imputation Results for Review – Version 3, Final |
5/28/2020 | PRPL0144.58: Delivery of FY 2018 PRPL Revised Hot Deck Imputation Results for Review – Version 3, Final |
5/29/2020 | WGTS1988.01: FY2018 Expenditure DCS weight review output to AHRQ |
6/1/2020 | UEPD1215.06: Delivery of the 2018 PMED PUF (RX18V05.LST, RX18V06.LST, RX18V05X.LST, TOP10RX18_USE.LST, TOP10TC18_USE.LST, TOP10TC18_EXP.LST, TOP25RX18_EXP.LST) |
6/8/2020 | GNRL2189.02: Addendum to the FY 2018 (Panel 22 & Panel 23) Delivery Database Snapshots: Edited Segments since the Previous Delivery of 1/10/20 |
6/8/2020 | HLTH1049.01: Delivery of VA-SAQ Constructed Variables Dataset for FY 2018 |
6/8/2020 | UEPD1215.07: Delivery of 2018 PMED PUF (RX18V05X) SAS dataset and the format files (RX18V05X.sas7bcat, rx18v05xf.sas and rxexpf2.sas) |
6/8/2020 | WGTS1987.01: P22P23 FY2018 Person-level SAQ Expenditure Weights |
6/9/2020 | GNRL3021.01: NCHS Checklist and Preliminary Version of Delivery Document for the FY 2018 Prescribed Medicines (PMED) PUF |
6/9/2020 | PRPL0146.01: Delivery of the FY 2018 OOPELIG3 Dataset, Benchmarking results, POSTIMPFIN results for final approval of OOPPREM variables, the Preliminary Encrypted Delivery Dataset, and the Preliminary Unencrypted Delivery Dataset |
6/9/2020 | WGTS1989.01: Developing Sample Weights for the MEPS Veteran Self-Administered Questionnaire (VSAQ) Component for the Full Year 2018 Consolidated (Expenditure) Public Use File |
6/10/2020 | WGTS5015.01: Delivery of the Individual Panel 22 and Panel 23 SAQ Expenditure Weight for FY2018 |
6/10/2020 | WGTS5016.01: Delivery of the Poverty-Adjusted Family-Level Weight, CPS-Like Family-Level Weight, Poverty-Adjusted DCS and SAQ Weights for FY2018 |
6/11/2020 | WGTS5007.02: Redelivery of the Variance Strata and PSU Variables for FY2018 |
6/12/2020 | GNRL3023.01: HC-206d, HC-206e, HC-206f, and HC-206g: 2018 Expenditure Event PUFs for MPC Event Types (IP, ER, OP, and OB) and All Related Files for Web Release |
6/12/2020 | PCND0160.01: 2018 Priority Conditions Benchmarking Document |
6/15/2020 | UEPD1215.08: Redelivery of the 2018 PMED PUF (RX18V05X) SAS dataset |
6/15/2020 | UEPD1215.09: 2018 PMED PUF data (RX18V06.sas7bdat) and the format files (RX18V06.sas7bcat, rxexpv06f.sas and rxexpv06f2.sas)
6/15/2020 | WGTS5017.01: Delivery of the FY 2018 Expenditure File Final Person Weight – PERWT18F |
6/16/2020 | GNRL3024.01: FY2019 Person-Level Use PUF Variable List Changes for AHRQ Review |
6/16/2020 | UEGN3581.01: Delivery of the FY2018 PMM Imputation Input and Output Data Files |
6/19/2020 | GNRL4009.01 and GNRL4010.01: Delivery of End-Of-Round files (RU level and Person level) -P23R5 |
6/19/2020 | UEGN3582.01: Delivery of the Dropped Variables Due to DRB Review – FY18 EXP PUF files for DV, ER, OP, OB, IP and RX |
6/22/2020 | PRPL0146.03: Delivery of the FY 2018 OOPELIG3 Dataset, Benchmarking results, POSTIMPFIN results for final approval of OOPPREM variables, the Preliminary Encrypted Delivery Dataset, and the Preliminary Unencrypted Delivery Dataset |
6/23/2020 | GNRL3025.02: Preliminary Versions of the 2018 Prescribed Medicines (PMED) Event PUF Codebook and Delivery Document for Use in AHRQ and NCHS Review – UPDATED |
6/23/2020 | PRPL0146.11: Delivery of the FY 2018 OOPELIG3 Dataset, Benchmarking results, POSTIMPFIN results for final approval of OOPPREM variables, the Preliminary Encrypted Delivery Dataset, and the Preliminary Unencrypted Delivery Dataset |
6/26/2020 | GNRL3024.02: FY2019 Person-Level Use PUF Variable List Changes for AHRQ Review |
6/29/2020 | CODE1243.01: PMED Matching Programs Log and LST Files for FY19 Wave 1 - For Rebecca Ahrnsbrak
6/30/2020 | WGTS5018.01: Delivery of FY18 Veteran Self-Administered Questionnaire Weight, VSAQW18F, for Expenditure Files |
7/2/2020 | HINS1311.01: Changes to the HINS Point-In-Time 2020 specifications |
7/6/2020 | CODE0919.01: FY19 SOP Matched SOPCODE Inconsistencies For AHRQ’s Review |
7/6/2020 | UEGN3584.01: The 2018/2017 QC Finding Tables of the PUF Event Expenditures |
7/10/2020 | GNRL3026.01: HC-206a: Delivery of the 2018 Prescribed Medicines (PMED) PUF and all Related Files for Web Release |
7/13/2020 | HLTH1050.01: FY2016, FY2017 PUF Population of the CSAQ and DCS Eligibility Variables |
7/14/2020 | GNRL3027.01: NCHS Checklist and Preliminary Version of Delivery Document for the FY 2018 Person Round Plan (PRPL) PUF |
7/14/2020 | GNRL3028.01: NCHS Checklist and Preliminary Version of the Delivery Document for the FY 2018 Consolidated Data PUF |
7/15/2020 | DOCM0679.04: Delivery of the 2019 PC Sample file - Wave 3 |
7/15/2020 | DOCM0680.04: Delivery of the 2019 Provider file for NPI coding - Wave 3 |
7/15/2020 | DOCM0678.05: Delivery of the 2019 MPC files for Sample selection - Wave 3 |
7/15/2020 | HLTH1051.03: VSAQ 2018 Low Frequency Counts |
7/22/2020 | GNRL3029.01: FY 2018 Conditions PUF Preliminary Versions of Codebook and Delivery Document for Use in AHRQ Review |
7/22/2020 | GNRL3030.01: Preliminary versions of the Codebook and Document for the FY 2018 Consolidated Data PUF for Use in AHRQ and NCHS Review |
7/22/2020 | GNRL3031.01: HC209: Preliminary Version of the 2018 Consolidated File |
7/22/2020 | GNRL3032.01: FY 2018 Person Round Plan PUF Preliminary Versions of Codebook and Delivery Document for Use in AHRQ and NCHS Review |
7/22/2020 | GNRL3033.01: Preliminary Version of the 2018 Appendix to the Event PUFs Delivery Document, Codebooks, and Tables for Review
7/23/2020 | UEGN3585.01: The FY2019 Initial Variable Construction Specifications
7/24/2020 | GNRL3032.02: Preliminary Version of the 2018 Person Round Plan File Delivery Document - UPDATED
7/24/2020 | GNRL4009.02 and GNRL4010.02: Delivery of End-Of-Round files (RU level and Person level) -P24R3
7/28/2020 | GNRL3026.02: HC-206a: Delivery of the 2018 Prescribed Medicines (PMED) PUF and all Related Files for Web Release – Updated |
7/28/2020 | GNRL3030.02: Final Versions of the Codebook and Document for the FY 2018 Consolidated Data PUF for Use in AHRQ and NCHS Review |
7/28/2020 | GNRL3032.03: Final Versions of the 2018 Person Round Plan File Codebook and Delivery Document |
7/28/2020 | GNRL3033.02: Final Version of the 2018 Appendix to the Event PUFs Delivery Document, Codebooks, and Tables for Review |
7/30/2020 | UEGN3586.01: Delivery of the EVNT Table for Panel 24 Round 1-3 MRD Data |
8/3/2020 | UEGN3587.01: The DN Text Strings Recoding for FY2019 |
8/6/2020 | DOCM0682.01: File of Provider Names for FY 2019 |
8/10/2020 | CODE1245.01: MEPS Delivery of the ICD-10-CM/CCSR Crosswalk and COND Coding Uncodeable Text Strings for FY19 |
8/10/2020 | COND0985.01: 2019 Preliminary Conditions File Specifications |
8/14/2020 | GNRL3035.01: HC-209: Full Year 2018 Consolidated Use, Expense, and Insurance PUF Delivery for Web Release |
8/14/2020 | GNRL3036.01: HC-206I: Delivery of the Final Appendix to the 2018 Event Files and all Related Files for Web Release |
8/14/2020 | GNRL3037.01: HC-208: Delivery of the 2018 Person Round Plan (PRPL) PUF and Related Files for Web Release |
8/14/2020 | GNRL3038.01: HC-207: Delivery of the Final 2018 Conditions File and All Related Files for Web Release |
8/17/2020 | ACCS0192.01: 2019 ACCS Other Specify Text String Recoding |
8/18/2020 | CODE1246.01: MEPS Delivery of PMED Proposed Updates to Inconsistencies Found in the Authority Table and EXACT/INXCT Matching Output |
8/20/2020 | GNRL4009.03 and GNRL4010.03: Delivery of End-Of-Round files (RU level and Person level) - P25R1 |
8/21/2020 | UEGN3586.02: Delivery of the EVNT Table for Panel 25 Round 1 SRD Data |
8/24/2020 | COND0985.05: 2019 Preliminary Conditions File Specifications |
8/26/2020 | COND0985.08: 2019 Preliminary Conditions File Specifications |
8/27/2020 | UEGN 2820.01: 2019 Review of copayment thresholds |
8/28/2020 | COND0985.10: 2019 Preliminary Conditions File Specifications |
9/1/2020 | DOCM0683.01: MEPS – 2019 Conditions Authority File After the 2019 HC Condition Coding |
9/1/2020 | UEGN3588.01: Specifications for the 2019 Pre-Imputation UEGN Files |
9/2/2020 | GNRL4009.04: Delivery of Version 2 of the RU-level End-Of-Round (EOR) File – Panel 24 Round 3 |
9/4/2020 | GNRL3020.02: HC-206b, HC-206c, and HC-206h: 2018 Expenditure Event PUFs for Non-MPC Event Types (DV, OM, and HH) and All Related Files for Web Release – UPDATED |
9/9/2020 | COND0986.01: Multiple Duplicate CLNKS in Panel 2319 |
9/9/2020 | UEGN 2828.01: 2019 Specs for MPC rolling event Edits |
9/9/2020 | UEGN 2830.01: 2019 Specs for HHA rolling event Edits |
9/11/2020 | WGTS1982.01: Derivation of the 2018 Full Year Expenditure Family Weight, MEPS and CPS-Like, for Panel 22 and Panel 23 Combined |
9/11/2020 | WGTS1978.01: MEPS, Combined Panel 23/Round 3 and Panel 24/Round 1, Computation of the Composite Family Weights |
9/11/2020 | WGTS1979.01: Creation of the Delivery Files for the 2019 PIT P23R3/P24R1: PUF and Internal Files. These files cover the person- and family-level weights, location, and variance estimation |
9/14/2020 | COND0986.07: Multiple Duplicate CLNKS in Panel 2319 |
9/15/2020 | EMPL2221.01: FY2019 JOBS File Specifications for Approval |
9/15/2020 | UEGN3589.01: Delivery of the Specification for the 2019 Utilization Count Variables Construction |
9/15/2020 | UEGN3588.02: Updated specifications for the 2019 Pre-Imputation UEGN Files (ER, HHA, HHP, HS, MVE, MVN, and OP) |
9/16/2020 | CODE0922.01: Delivery of NAICS and SOCS Files Requested by AHRQ |
9/16/2020 | UEPD1216.01: Delivery of 2019 PMED Pre-imp files spec |
9/17/2020 | UEPD1216.04: Delivery of 2019 PMED Pre-imp files spec |
9/22/2020 | PRPL0147.01: Full Year 2019 PRPL File Revisions to Coverage Record and HMO Variables, JOBS Linking, and Post-Linking Editing |
9/25/2020 | EMPL2221.03: FY2019 JOBS File Specifications for Approval |
9/30/2020 | PRPL0147.07: Full Year 2019 PRPL File Revisions to Coverage Record and HMO Variables, JOBS Linking, and Post-Linking Editing |
10/2/2020 | CODE1247.01: MEPS 2019 Delivery of PMED Final Reports for Uncodeable, Compounds, Foreign Meds, No-MDDB, Drug Groupings |
10/2/2020 | DOCM0684.01: Delivery of 2019 Static Tables for SOP After the 2019 HC SOP Coding |
10/5/2020 | PRPL0147.02: Full Year 2019 PRPL File Revisions to Coverage Record and HMO Variables, JOBS Linking, and Post-Linking Editing |
10/5/2020 | UEGN 2831.01: 2019 Specs for flagging elderly person with private insurance from current job |
10/5/2020 | UEGN 2817.01: 2019 Specs to re-allocate imputed expenditures equally distributed among two or more sources |
10/6/2020 | GNRL3039.01: Plan for Incorporating P23R6 and R7 Data into FY2020 Constructed Variables |
10/7/2020 | DOCM0684.02: Redelivery of 2019 Static Tables for SOP After the 2019 HC SOP Coding
10/7/2020 | WGTS1983.01: Raking Panels 22 and 23 (Panel 22/rounds 3-5 and Panel 23/rounds 1-3) Separately for the Individual Panel Full Year 2018 Person-Level Weights Including the Poverty Status |
10/7/2020 | WGTS1984.01: Delivery Files for the FY 2018 Individual Panel Expenditure Person-Level Weights, Panel 22 and Panel 23 |
10/7/2020 | WGTS1971.01: Updating Master Variance File Strata and PSUs for Panel 24, Round 1 |
10/7/2020 | WGTS1972.01: MEPS: Establishing Variance Estimation Strata and PSUs for the 2019 Point-in-Time PUF, Panel 24, Round 1 and Panel 23, Round 3 |
10/7/2020 | WGTS1973.01: Final: Estimating Standard Errors Using SUDAAN for the Panel 24, Round 1 and Panel 23, Round 3 PIT 2019 PUF Data—Checking the Variance Strata and PSUs |
10/8/2020 | WGTS1989.02: Developing Sample Weights for the MEPS Veteran Self-Administered Questionnaire (VSAQ) Component for the Full Year 2018 Consolidated (Expenditure) Public Use File |
10/8/2020 | PRPL0147.13: Full Year 2019 PRPL File Revisions to Coverage Record and HMO Variables, JOBS Linking, and Post-Linking Editing |
10/8/2020 | PRPL0147.15: Full Year 2019 PRPL File Revisions to Coverage Record and HMO Variables, JOBS Linking, and Post-Linking Editing |
10/9/2020 | EMPL2221.06: FY2019 JOBS File Specifications for Approval |
10/13/2020 | WGTS5019.01: Delivery of the ADMN/DEMO Variables Used for Weights Development for P23P24FY19 |
10/14/2020 | EMPL2222.01: FY2019 Panel 24 Editing of High Wage Outliers or Substantially Different Wages – Request for Approval |
10/14/2020 | EMPL2223.01: FY2019 Panel 24 Editing of Low Wage Outliers or Wages that Do Not Change – Request for Approval |
10/14/2020 | HINS1318.01: P2419 EPCP QC crosstabs |
10/16/2020 | CODE0922.01: Delivery of NAICS and SOCS Files Requested by AHRQ |
10/16/2020 | EMPL2221.07: FY2019 JOBS File Specifications for Approval |
10/16/2020 | DOCM0686.01: Delivery of 2019 Static Tables for SRCS After the 2019 HC SRCS Coding |
10/16/2020 | HINS1319.01: P2319 EPCP QC crosstabs |
10/19/2020 | DOCM0685.01: Delivery of the 2019 MPC Pre-Matching Household Component Production File |
10/19/2020 | EMPL2221.09: FY2019 JOBS File Specifications for Approval |
10/20/2020 | EMPL2223.02: FY2019 Panel 24 Editing of Low Wage Outliers or Wages that Do Not Change – Request for Approval |
10/20/2020 | WGTS1992.01: March 2020 CPS and December 2019 control totals output, digital delivery |
10/21/2020 | EMPL2221.14: FY2019 JOBS File Specifications for Approval |
10/22/2020 | PRPL0147.29: Full Year 2019 PRPL File Revisions to Coverage Record and HMO Variables, JOBS Linking, and Post-Linking Editing |
10/24/2020 | HINS1319.04: P2319 EPCP QC crosstabs |
10/29/2020 | CODE1248.01: MEPS 2019 Delivery of Authority File after PMED Coding and Files for Matching Programs |
10/30/2020 | CODE0924.01: Delivery of 2019 Static Table for WHOBILL - After the 2019 HC WHOBILL Coding |
10/30/2020 | COND0987.01: Applying FY18 Masking Rules to FY16 and FY17 Preliminary Conditions Data |
10/30/2020 | DOCM0687.01: Delivery of Person-Level Base and Family Pseudo Weight for FY19 |
10/30/2020 | WGTS5020.01: Delivery of Person-Level Base Weight, Individual Panel Base Weight, Family Membership Flag, and MSA variables for FY19 |
11/3/2020 | EMPL2224.01: FY 2019 Wage Imputation Specification – Review and Approval Requested |
11/4/2020 | ADMN0918.01: Weighted Crosstabs delivery of ADMN and DEMO variables |
11/4/2020 | EMPL2225.01: FY2019 Panel 23 Editing of High Wage Outliers or Substantially Different Wages – Request for Approval |
11/4/2020 | EMPL2226.01: FY2019 Panel 23 Editing of Low Wage Outliers or Wages that Do Not Change – Request for Approval |
11/5/2020 | COND0987.04: Applying FY18 Masking Rules to FY16 and FY17 Preliminary Conditions Data |
11/5/2020 | WGTS1993.01: March 2020 CPS and revised December 2019 control totals output, digital delivery |
11/9/2020 | CODE0925.01: MEPS - Updated PMED Matching Programs |
11/9/2020 | HINS1322.01: Results of the QC Cross Tabs for the HINS 2019 HMO/Gatekeeper FY variables |
11/13/2020 | DSDY0061.01: FY 2020 Disability Days Variable Planning |
11/13/2020 | EMPL2229.01: FY2019 Wage Imputation Failures |
11/23/2020 | EMPL2230.04: FY 2019 Hourly Wage Imputation Output for Approval |
11/23/2020 | PRPL0148.01: FY19 PRPL Specifications Coverage Record and HMO Variables, JOBS Link and Variable Editing, and Variable Editing: Post JOBS Linking |
11/23/2020 | UEGN3591.01: Deliver to AHRQ for approval specifications for the non-MPC (DN, OM, and HH) Expenditure Event files |
11/24/2020 | DSDY0062.01: FY 2019 Disability Days Weighted Crosstabs and Frequencies for AHRQ |
11/24/2020 | DSDY0063.01: FY 2019 Delivery of the DSDY “Missed Days” top code values for AHRQ approval |
11/24/2020 | UEPD1216.02: 2019 (Panel 23 & 24) Household Prescribed Medicine and Associated Files - Set 1 |
11/27/2020 | EMPL2232.01: Full Year 2019 Wage Top Code Value for AHRQ Approval |
12/4/2020 | COND0987.02: Applying FY18 Masking Rules to FY16 and FY17 Preliminary Conditions Data – Redelivery |
12/4/2020 | EMPL2233.01: FY19 Pre-Top Coded Wage and Uncondensed IO Codes Data Delivery |
12/4/2020 | PRPL0148.12: FY19 PRPL Specifications Coverage Record and HMO Variables, JOBS Link and Variable Editing, and Variable Editing: Post JOBS Linking |
12/4/2020 | UEGN2835.01: 2019 Events with questionable information observed during pre-editing QC review |
12/7/2020 | UEGN2835.04: 2019 Events with questionable information observed during pre-editing QC review
12/7/2020 | WGTS5021.01: Delivery of the Variance Strata and PSU Variables for FY2019
12/7/2020 | WGTS2000.01: Panel 23 Full Year 2019 SAQ Use person weight review output |
12/7/2020 | WGTS2001.01: Panel 24 Full Year 2019 SAQ Use person weight review output |
12/8/2020 | UEGN2836.01: 2019 Listing of cases where a simple event has the exact same charge and payment sum as the flat fee bundle |
12/8/2020 | WGTS2002.01: Full Year 2019 SAQ Use person weight for the combined Panels review output to AHRQ |
12/9/2020 | EMPL2234.01: Full Year 2019 JOBS File Establishment Size Top Code Value and Extent of JOBS Wage Top Coding for AHRQ Approval |
12/10/2020 | EMPL2234.02: Full Year 2019 JOBS File Establishment Size Top Code Value and Extent of JOBS Wage Top Coding for AHRQ Approval |
12/10/2020 | GNRL3040.01: Preliminary Version of the 2019 Full Year Use PUF Dataset |
12/10/2020 | UEGN 2816.01: 2019 Specs for processing flat fee bundles |
12/10/2020 | WGTS1985.01: P22FY2018 Person-level SAQ Expenditure Weights |
12/11/2020 | UEGN3592.01: Delivery of the FY19 Pre-Imputation files |
12/11/2020 | WGTS2004.01: Full Year 2019 combined Panels person weight review output |
12/11/2020 | WGTS1986.01: P23FY2018 Person-level SAQ Expenditure Weights |
12/11/2020 | WGTS1988.01: Developing Sample Weights for the MEPS Diabetes Questionnaire Component (DCS) for the Panels 22 and 23 Full Year 2018 Expenditure File (PUF) |
12/14/2020 | DOCM0689.01: 2020 MPC sample file specs |
12/14/2020 | DOCM0690.01: 2020 PC sample file specs |
12/14/2020 | DOCM0691.01: 2020 provider file for NPI coding specs |
12/15/2020 | WGTS1996.01: Derivation of the Annualized MEPS Families and Identification of the Responding MEPS Families for MEPS Panel 24 Full Year 2019 |
12/15/2020 | WGTS2005.01: MEPS: Establishing Variance Estimation Strata and PSUs, and Estimating Standard Errors Using SUDAAN for the Full Year 2019 PUF, Panel 23, Rounds 3-5 and Panel 24, Rounds 1-3 |
12/15/2020 | WGTS1994.01_Do_Not_Email: Derivation of the MEPS Panel 23 Full Year 2019 Person Use Weights (Rounds 3-5)
12/15/2020 | WGTS1999.01: Creating Factors to Adjust the 2019 Full Year Person Weights to Better Reflect the Number of Persons who Died or Spent Part of the Year in a Nursing Home |
12/16/2020 | GNRL4025.01: Redelivery of the RU-level End-Of-Round (EOR) Files
12/16/2020 | WGTS1991.01_Do_Not_Email: Derivation of the MEPS Panel 24 Full Year 2019 Person Use Weights (Rounds 1-3) |
12/17/2020 | EMPL2235.01: Full Year 2019 Wage Top Coding Results |
12/17/2020 | UEGN 2818.01: 2019 Specifications for Last Step Edits |
12/17/2020 | UEPD1216.03: 2019 (Panel 23 & 24) PMED Supplemental File - Set 2: Person-Level File and Additional 3 Segment Variable Files |
12/18/2020 | WGTS2000.01: Developing Panel 23 Self-Administered Questionnaire (SAQ) Use Weights for Full Year 2019 |
12/18/2020 | WGTS2001.01: Developing Panel 24 Self-Administered Questionnaire (SAQ) Use Weights for Full Year 2019 |
12/21/2020 | DEMO1017.01: Delivery of the Output Listings for Case Review of the MOPID and DAPID Variables’ Construction for FY2019 |
12/21/2020 | WGTS2002.01: Developing Sample Weights for the MEPS Self-Administered Questionnaire (SAQ) for the Panels 23 and 24 Full Year 2019 Use File (PUF), and Creating the Full Year 2019 Person Use SAQ Weights Delivery File |
12/21/2020 | WGTS5022.01: Delivery of the SAQ Use PUF Weight and Individual Panel SAQ Weight Variables for FY2019 |
12/21/2020 | WGTS5023.01: Delivery of Person-Level Use PUF Weight, Single Panel Person Weight, and MSA19_13 Variables for FY19 |
12/22/2020 | UEGN3590.01: Delivery of the 2018 Post-Imputation Files for the MEPS Master Files
12/22/2020 | WGTS2004.01: MEPS Panels 23 and 24 Full Year 2019: Combine and Rake the P23 and P24 Weights to Obtain the P23P24FY19 Person-Level USE Weights |
12/22/2020 | WGTS2006.01: Create the P23P24 Full Year 2019 Person Use Weight and Individual Panel Weights Delivery File |
12/22/2020 | WGTS1997.01: Creation of CPS Control Total Files Containing the Raking Dimensions for the Full Year 2019 USE Person Weights |
12/22/2020 | WGTS2003.01: Creation of CPS Control Total Files Containing the Raking Dimensions for the Full Year 2019 Self-Administered Questionnaire (SAQ) Use Person Weight |
12/24/2020 | INCO0754.01: Delivery of the 2019 (Panel 23 & 24) Income File |
12/24/2020 | UEPD1216.04: 2019 (Panel 23 & 24) PMED Supplemental File - set 3: Person/Round-Level Files |
12/28/2020 | PRPL0150.01: Output and Frequencies from 2019 PRPL Test Program #1 and Program #2 |
12/28/2020 | UEGN3594.01: Deliver to AHRQ for approval specifications for the MPC (OB, OP, ER, and IP) Expenditure Event files |
12/29/2020 | COND0988.01: FY 2019 Specifications for the CLNK and RXLK PUFs |
12/29/2020 | GNRL4026.01 and GNRL4027.01: Delivery of End-Of-Round files (Person-Level and RU Level) -P23R6 |
12/29/2020 | GNRL4028.01: Delivery of the EVNT, PMED and EPCP Tables for the Panel 23 Round 6 SRD Data |