Chair: Marina Signore
Room: S2 Wawel
Time: 08:30 - 10:00
Date: 29 June
|Title: <<< A Future Focused Approach to National Statistical Quality Reviews in the UK >>>
The production of official and national statistics requires a programme of continuous improvement and periodic reviews to ensure that the outputs provided remain helpful and trustworthy and represent value for money. The Office for National Statistics in the UK is developing and implementing a new approach to National Statistics Quality Reviews (NSQRs), aiming to monitor and improve the quality of statistical outputs in line with the data revolution and changing user demands. The new NSQRs are future focused, cover themes of national importance and are conducted by multi-disciplinary teams drawn from across the Government Statistical Service (GSS), academia and the user community. Each NSQR will incorporate two phases: an initial phase covering the activities of the review itself and the formulation of recommendations, and a subsequent phase to follow up arrangements for recommended actions and post-implementation evaluation. The first NSQR in this new format is a review of privacy and data confidentiality methods, aiming to highlight areas that require development and investment to help producers of official and national statistics prepare for the future. This review is carried out in the context of new legislation being introduced in the UK to allow statistical producers across the GSS to use non-survey data for the purpose of producing more detailed, frequent and flexible statistical outputs. The use of richer data sources poses new challenges in ensuring that the privacy of individuals and the confidentiality of the data are protected at all times. This presentation will discuss both the challenges and the benefits of implementing the new approach to quality reviews.
|Title: <<< An empirical study of the effect of high, low or moderate nonresponse rate on the quality of survey estimates >>>
Response rates have decreased steadily in household surveys across many countries, despite the increasing effort and resources being spent to deal with the problem. The associated nonresponse errors have received considerable attention in the past decades, as they can cause bias and are critical to the accuracy of survey-based statistics. Some authors have published papers claiming that estimates of sufficient quality can be produced from response rates as low as 5 percent (e.g. Hellevik 2015), while practitioners and users of survey data often argue that "high" response rates are necessary for good quality. In this empirical study we focus on the Norwegian Election Survey data over nearly 50 years, combined with linked administrative sources, which provide additional data on electoral turnout and population demographic characteristics. The nonresponse rate has increased steadily from 10 percent in 1969 to 45 percent in 2017. But how has nonresponse bias evolved over the same period? Can another indicator better capture the nonresponse bias, such as the R-indicator that has received much attention in recent years? Is it possible to devise other simple but more useful bias indicators? We take a closer look at the absolute deviation of the response rates as an alternative to the squared deviation used in the R-indicator. How would the nonresponse bias evolve, based on extrapolation of the trend in the nonresponse rate and other factors that determine the nonresponse bias? What would the likely nonresponse bias be if the nonresponse rate became either very high or very low? Is there an "acceptable" range of nonresponse rates in practical surveys? These are some of the questions we investigate based on the historical material at our disposal.
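As a rough illustration of the contrast the abstract raises, the following is a minimal Python sketch of the standard R-indicator, R(ρ) = 1 − 2·S(ρ) with S(ρ) the standard deviation of estimated response propensities, alongside a hypothetical variant that substitutes the mean absolute deviation. The function names and the exact form of the variant are illustrative assumptions, not the indicators the paper proposes.

```python
# Illustrative sketch (not the paper's actual indicators): the classic
# R-indicator versus a hypothetical mean-absolute-deviation variant.
import statistics

def r_indicator(propensities):
    """Classic R-indicator: R = 1 - 2 * S(rho), where S(rho) is the
    population standard deviation of estimated response propensities."""
    return 1 - 2 * statistics.pstdev(propensities)

def r_indicator_abs(propensities):
    """Hypothetical variant: replace the standard deviation with the
    mean absolute deviation of the propensities around their mean."""
    mean = statistics.fmean(propensities)
    mad = statistics.fmean(abs(p - mean) for p in propensities)
    return 1 - 2 * mad

# Example: estimated propensities for five sample members.
rho = [0.9, 0.7, 0.5, 0.6, 0.8]
print(round(r_indicator(rho), 3))      # 0.717
print(round(r_indicator_abs(rho), 3))  # 0.76
```

Both measures equal 1 when response propensities are identical across the sample (fully representative response) and fall as the propensities spread out; the absolute-deviation form is simply less sensitive to extreme propensities than the squared-deviation form.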
|Title: <<< Measuring, reporting and communicating quality of National Accounts statistics (ESA 2010) in an integrated way with data production >>>
Regulation (EU) No 549/2013 sets up the European System of Accounts 2010 (ESA 2010). Article 4 of this Regulation requires Member States to provide Eurostat with quality reports that allow the Commission to assess the quality of the data received under ESA 2010 transmissions. Commission Implementing Regulation (EU) 2016/2304 lays down the modalities, structure, periodicity and assessment indicators of the quality reports. Under the above-mentioned regulations, Eurostat, in collaboration with the Member States, has set up quality reports for National Accounts. To set up a business process, specific challenges needed to be addressed. Statistical production processes routinely generate large amounts of information of potential interest for quality reporting. Quality reporting has, however, traditionally been carried out as a parallel and separate activity from data production, and has therefore often not made full use of the wealth of information created by production processes or stored in production databases. The paper first introduces the framework in which the annual quality reports are produced. The second part presents the quality measures. The third part shows Eurostat's efforts to automatically integrate information coming from data production into the quality reporting process. The fourth part focuses on the implementation and communication of the country reports and the Commission's report to the European Parliament on the quality of National Accounts. Though the quality reports cover the full transmission programme under ESA 2010, the examples in this paper concentrate on the domain of main aggregates and supply, use and input-output tables. Finally, the paper highlights the challenges encountered and discusses future avenues of work towards more efficient quality reporting.
|Title: <<< Quality Management for Official Statistics: Some Lessons Learned after Seven Years of ASPIRE >>>
ASPIRE (A System for Product Improvement, Review and Evaluation) provides a general framework for improving the quality of statistical programs for National Statistical Offices (NSOs) and other organizations that provide a continual flow of statistical products to users and stakeholders. ASPIRE was first implemented at Statistics Sweden in 2011 in response to a mandate from the Swedish Ministry of Finance to develop a system of quality indicators for tracking developments and changes in product quality and for achieving continual improvements in data quality across a diverse set of key statistical products. In this presentation, we provide an overview of ASPIRE, demonstrate its application to several statistical products (for example, the Swedish Labour Force Survey, the Consumer Price Index, and Structural Business Statistics) and summarize some of the key lessons learned after applying the system annually for the past seven years. We will present some results from an organization-wide evaluation of ASPIRE at Statistics Sweden, which has led to some important modifications of the ASPIRE process going forward. The presentation will also discuss the implications of these changes to ASPIRE for monitoring and evaluating product quality at Statistics Sweden as well as in statistical organizations worldwide.
|Title: <<< Automatically Generated Quality Control Tables and Quality Improvement Programs >>>
The Economic Directorate of the U.S. Census Bureau collects various economic data with a requirement to accurately capture, process, and analyze the data so that our processes identify and correct quality issues. Effective quality control systems are the foundation for successful data collection, along with well-defined program requirements that assure compliance with Office of Management and Budget and Census Bureau quality standards. To meet these objectives, our primary goal is to build an automated quality control and quality assurance system that will identify and implement analytical methodologies that reduce the introduction of error into analytical data. For data collection and data evaluation, we intend to build a system that ensures that all surveys conducted by the Economic Directorate produce the quality results expected for the intended use of the data. In this paper, we discuss requirements for a set of automatically generated tables and applications that should be used to monitor various processes from the survey planning stage through product dissemination. In the remainder of the paper, we discuss the automated quality assurance checks that should be made in each survey phase to ensure that decisions will be supported by data of adequate quality and usability for their intended purpose, and further ensure that such data are authentic, appropriately documented, technically defensible, and statistically sound. Quality issues and resolutions at each stage of the survey will be discussed. Resource challenges and possible solutions will also be addressed.
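To make the idea of an automated quality-assurance check concrete, here is a minimal sketch of one kind of rule such a system might apply: flagging units whose reported value moves by more than a tolerance relative to the prior period. The function name, data layout, and threshold are illustrative assumptions, not part of the Census Bureau system described in the abstract.

```python
# Illustrative sketch (not the actual Census Bureau system): a simple
# period-over-period tolerance check for automated quality assurance.

def flag_large_changes(current, previous, tolerance=0.5):
    """Return ids of units whose reported value changed by more than
    `tolerance` (as a fraction) relative to the prior period.
    `current` and `previous` map unit id -> reported value."""
    flagged = []
    for unit, value in current.items():
        prior = previous.get(unit)
        if prior in (None, 0):
            continue  # no usable baseline for this unit
        if abs(value - prior) / abs(prior) > tolerance:
            flagged.append(unit)
    return flagged

prev = {"A": 100.0, "B": 200.0, "C": 50.0}
curr = {"A": 105.0, "B": 350.0, "C": 49.0}
print(flag_large_changes(curr, prev))  # ['B'] (a 75% jump)
```

In a production setting checks like this would be run at each survey phase and their results written into the automatically generated review tables, so analysts see flagged units rather than raw microdata.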