Session 10


Title of session: Quality as a multi-dimensional concept

Chair: Monika Bieniek

Room: S2 Wawel

Time: 10:30 - 12:00

Date: 29 June


Session 10 - papers & presentations


Presenting Author / Abstract
Pertti Taskinen
e-mail: pertti.taskinen@stat.fi
Title: <<< Quick statistics - how to deal with quality? >>>
The Labour Force Survey needs a reservoir of information collected directly from interviewees. Telephone interviews are often quick and even hasty. The objective of this presentation is to describe the quality checks made to ensure the validity of the data behind the official monthly unemployment rate - a quick and important statistic. Most checks are mode-independent, but the increasing use of web data collection is also taken into account in this presentation. During the data collection stage, the best guarantee of quality is a trained interviewer who uses the data collection software with question-specific rules and instructions. The interview instructions should be as simple as possible and applied as uniformly as possible. For the Labour Force Survey, it is also advisable to carry data from the previous round over to the next one. This makes the interview easier for both sides - interviewer and interviewee - without endangering quality. At the end of each survey month, the data production model runs three separate programmed checks: one for the collected data, one for the domestic variables and one for the EU variables. The data production software has been made user-friendly. If there are no unexpected problems, for example with databases, the software can be run within a day. The figures are then essentially ready, but during the dissemination step some working days are needed to draft the publication and translations and to update databases. This presentation also examines the data production model of the Finnish Labour Force Survey: could we work even faster, and perhaps in a somewhat less resource-demanding way, while keeping the existing quality? And finally, what is the best practice to recommend?
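
Below is a minimal, hypothetical sketch of the kind of three-stage programmed check described above (one for the collected data, one for the domestic variables, one for the EU variables). All function, column and code-list names are illustrative assumptions, not the actual Finnish LFS production system.

import pandas as pd

def check_collected_data(df: pd.DataFrame) -> list[str]:
    """Flag records with missing or out-of-range interview data."""
    errors = []
    if df["age"].lt(15).any() or df["age"].gt(74).any():  # illustrative target age range
        errors.append("age outside the assumed target range")
    if df["labour_status"].isna().any():
        errors.append("missing labour status")
    return errors

def check_domestic_variables(df: pd.DataFrame) -> list[str]:
    """Check derived national variables, e.g. unemployment should imply active job search."""
    bad = df[(df["labour_status"] == "unemployed") & (df["searched_work"] != "yes")]
    return [f"{len(bad)} unemployed records without a job-search flag"] if len(bad) else []

def check_eu_variables(df: pd.DataFrame) -> list[str]:
    """Check that EU transmission variables use a valid (here illustrative) code list."""
    bad = ~df["ilostat"].isin({"1", "2", "3"})
    return [f"{int(bad.sum())} invalid labour status codes"] if bad.any() else []

def run_monthly_checks(df: pd.DataFrame) -> dict[str, list[str]]:
    """Run the three checks at the end of the survey month and collect the findings."""
    return {
        "collected": check_collected_data(df),
        "domestic": check_domestic_variables(df),
        "eu": check_eu_variables(df),
    }
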
Robert Obrzut
e-mail: robert.obrzut@ec.europa.eu
Title: <<< The impact of the consistency debate on increased accuracy and coordination within the national accounts and between the national accounts and balance of payments statistics of the EU >>>
The quality assurance framework of the European Statistical System suggests the critical assessment of data sources, statistical techniques and revision practices, as well as the assessment and validation of intermediate data and statistical outputs. These requirements point to underlying statistical compilation processes that use primary data sources and statistics to obtain a finalised statistical product. As a prominent example in macroeconomic statistics, the national accounts and balance of payments statistics complement each other in such a manner. This sequential concept, however, hardly reflects the realities of statistical compilation practice, where statistical products are often released in parallel, playing a role both as data source and as final statistical product. It is argued that statistical compilation is better perceived as a twinning process, which is usually conducted by more than one compiling institution and obliges compilers, for the sake of the timeliness of their statistical products, to introduce only data sources and estimation practices that they can directly control. In such situations the critical assessment demanded by the international framework is conducted by each compiler autonomously. In the EU this has resulted in about half of the Member States still releasing national accounts and balance of payments statistics with either high or moderate inconsistencies, due to the autonomous use of data sources and compilation practices, although the methodological standards would require full consistency. Different deadlines in the transmission programme and limited resources of national statistical institutes can also lead to inconsistencies across tables within the same domain (national accounts), even when the compilation is carried out by the same institution. This paper combines data evidence about such inconsistencies and concludes that there is a distinct need for increased coordination of the underlying compilation processes at both national and international level.
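
The notion of high or moderate inconsistencies above can be illustrated with a simple relative-discrepancy indicator between the national accounts (NA) and balance of payments (BOP) figures for the same item. The sketch below is a hypothetical illustration; the thresholds, item and figures are assumptions, not the classification used in the paper.

def relative_discrepancy(na_value: float, bop_value: float) -> float:
    """Absolute difference as a share of the average of the two reported values."""
    reference = (abs(na_value) + abs(bop_value)) / 2
    return abs(na_value - bop_value) / reference if reference else 0.0

def classify(discrepancy: float) -> str:
    """Illustrative banding of inconsistencies (not an official classification)."""
    if discrepancy < 0.001:
        return "broadly consistent"
    if discrepancy < 0.01:
        return "moderate inconsistency"
    return "high inconsistency"

# Example: exports of goods and services reported as 251.3 bn in NA and 249.8 bn in BOP
d = relative_discrepancy(251.3, 249.8)
print(f"{d:.2%} -> {classify(d)}")
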
Frantisek Hajnovic
e-mail: frantisek.hajnovic@ons.gov.uk
Title: <<< Measuring the quality of commercial and big data sources for official statistics. >>>
There has been an increasing shift towards using data science techniques and accessing alternative big data sources across all sectors of society. The UK government has been exploring the use of big data within official statistics. These are neither administrative data nor existing government data, and could include web-scraped data, big data from commercial companies and social media data. The Office for National Statistics (ONS) is exploring the use of these alternative data sources as part of its "Better Statistics, Better Decisions" strategy. Using these data sources and integrating them into official statistics presents significant methodological challenges, including bias, the variety of data (such as text and images) and the lack of control over data supply. As an official statistics producer, ONS is committed to ensuring that any statistics derived from these data meet user needs and high quality standards. As this is an emerging field, there is little guidance on the measurement of quality in big data and data science applications for official statistics. Nevertheless, existing dimensions of quality offer a good framework for using these data in an official capacity. Derived from the European Statistical System dimensions of quality, the UK Government Statistical Service’s (GSS) eight quality dimensions for all published statistics can be applied to big data; these include relevance, accuracy, timeliness and coherence. The presentation will explore the quality implications of big data and data science methods through example projects of the ONS Big Data team, including the use of social media data and collaborations with other National Statistical Institutes. Through these, we will show how quality could be, and has been, measured using the eight dimensions, the challenges faced, and ideas for applying quality measures to future big data sources.
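
As a rough illustration of how such dimension-by-dimension assessments might be recorded for an alternative data source, the following sketch uses only the four dimensions named above (relevance, accuracy, timeliness, coherence); the data structure, source and ratings are hypothetical and not an ONS or GSS tool.

from dataclasses import dataclass, field

@dataclass
class DimensionAssessment:
    dimension: str   # e.g. "accuracy"
    indicator: str   # how the dimension is measured for this source
    rating: str      # e.g. "adequate", "needs investigation"

@dataclass
class BigDataQualityReport:
    source: str
    assessments: list[DimensionAssessment] = field(default_factory=list)

    def add(self, dimension: str, indicator: str, rating: str) -> None:
        self.assessments.append(DimensionAssessment(dimension, indicator, rating))

report = BigDataQualityReport(source="web-scraped price data (illustrative)")
report.add("relevance", "coverage of items in the target population", "adequate")
report.add("accuracy", "share of misclassified records", "needs investigation")
report.add("timeliness", "lag between collection and availability", "adequate")
report.add("coherence", "agreement with survey-based estimates", "needs investigation")
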
Kuniko Moriya
e-mail: kuniko.moriya@boj.or.jp
Title: <<< Further Challenges in Quality Management of Statistics at the Bank of Japan >>>
The Bank of Japan compiles various economic statistics, which draw a great deal of attention from the media, economists, and policy makers. These statistics include financial statistics, price indexes, and the Tankan (a short-term economic survey of enterprises in Japan). As the central bank of Japan, the Bank faces two challenges in compiling statistics: it relies solely on the voluntary cooperation of data contributors when collecting the data, and it needs to meet high public expectations for good quality management of its statistics under tight labor constraints and budgetary limitations. The Bank ensures the quality of its statistics by making them comply with “The Basic Principles for the Compilation, Release, and Development of Statistics,” published by the Bank in 2009. The Bank also makes continuous efforts to further improve the quality management of its statistics. This paper considers recent challenges facing the Tankan and the Bank's efforts toward enhancing data quality -- including visualization and dissemination -- in order to engage users and meet their demands in a cost-efficient and responsive manner. Specifically, the paper explains the motivation behind these efforts, the process of study, and the desired results. The paper also considers quality management frameworks such as the European Statistics Code of Practice developed by Eurostat, and issues related to the European Statistical System’s Vision 2020.
Rogelio Pujol Rodriguez
e-mail: rogelio.pujol.rodriguez@ine.es
Title: <<< Cognitive interviewing in the Disability Pilot Survey in Spain >>>
Cognitive interviewing is a technique that helps ensure that survey questions successfully capture the scientific aims, by identifying potential measurement error and question wording problems (overall understanding and interpretation of the question, suitability of the examples given, temporal references, type of scoring at item level, etc.). This method has been shown to increase the quality of the collected data. We therefore conducted several cognitive interviews on two questionnaire models prior to the fieldwork of the Disability Pilot Survey. The aim was to analyze the comprehension and quality of the questions and to correctly identify households where people with disabilities live, together with their limitations. For this purpose, volunteers with specific characteristics of interest were recruited and interviewed in a laboratory environment. Some were asked to respond to a previously assigned questionnaire model via the web (CAWI: computer-assisted web interviewing) while others were interviewed by telephone (CATI: computer-assisted telephone interviewing). All of them were asked afterwards about their responses and feelings while answering the questionnaire. As a result of these interviews, most participants preferred the second questionnaire model (the one with a multiple-response format), several questions were reworded (the well-known GALI question was rephrased), and we confirmed the importance of the examples given at the end of each of the specific disability questions.

