
Comparing the quality of crowdsourced data contributed by expert and non-experts.

Affiliation

International Institute for Applied Systems Analysis, Ecosystem Services and Management Program, Laxenburg, Austria.

Publication

PLoS One. 2013 Jul 31;8(7):e69958. doi: 10.1371/journal.pone.0069958. Print 2013.

Abstract

There is currently a lack of in-situ environmental data for the calibration and validation of remotely sensed products and for the development and verification of models. Crowdsourcing is increasingly being seen as one potentially powerful way of increasing the supply of in-situ data, but there are a number of concerns over the subsequent use of the data, in particular over data quality. This paper examined crowdsourced data from the Geo-Wiki crowdsourcing tool for land cover validation to determine whether there were significant differences in quality between the answers provided by experts and non-experts in the domain of remote sensing, and therefore the extent to which crowdsourced data describing human impact and land cover can be used in further scientific research. The results showed that there was little difference between experts and non-experts in identifying human impact, although results varied by land cover, while experts were better than non-experts in identifying the land cover type. This suggests the need to create training materials with more examples in those areas where difficulties in identification were encountered, and to offer some method for contributors to reflect on the information they contribute, perhaps by feeding back the evaluations of their contributed data or by making additional training materials available. Accuracies were also found to be higher when the volunteers were more consistent in their responses at a given location and when they indicated higher confidence, which suggests that these additional pieces of information could be used in the development of robust measures of quality in the future.
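The comparison described above can be sketched in a few lines. This is not the paper's actual analysis code; it is a minimal illustration, with invented data, of the core step: scoring each contributor group's labels against control ("ground truth") locations and comparing per-group accuracy.

```python
# Hedged sketch (illustrative only, not the study's code): compare the
# accuracy of land-cover labels contributed by experts and non-experts
# against hypothetical control points, in the spirit of the Geo-Wiki study.
from collections import defaultdict

# Each record: (contributor_group, location_id, contributed_label) -- made up.
responses = [
    ("expert", 1, "forest"), ("expert", 2, "cropland"), ("expert", 3, "forest"),
    ("non-expert", 1, "forest"), ("non-expert", 2, "grassland"),
    ("non-expert", 3, "forest"),
]

# Reference labels at control locations (hypothetical).
control = {1: "forest", 2: "cropland", 3: "forest"}

def accuracy_by_group(responses, control):
    """Fraction of contributed labels matching the control label, per group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, loc, label in responses:
        total[group] += 1
        correct[group] += int(label == control[loc])
    return {g: correct[g] / total[g] for g in total}

print(accuracy_by_group(responses, control))
```

In the study itself, accuracy was further broken down by land cover type and related to the contributors' self-reported confidence and their consistency across repeated visits to the same location; the same grouping pattern extends to those stratifications.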


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/28a1/3729953/4abbb73eb845/pone.0069958.g001.jpg
