Quinn Terence J, Livingstone Iain, Weir Alexander, Shaw Robert, Breckenridge Andrew, McAlpine Christine, Tarbert Claire M
Institute of Cardiovascular and Medical Sciences, University of Glasgow, Glasgow, United Kingdom.
Glasgow Centre for Ophthalmic Clinical Research, Gartnavel General Hospital, Glasgow, United Kingdom.
Front Neurol. 2018 Mar 28;9:146. doi: 10.3389/fneur.2018.00146. eCollection 2018.
Visual impairment affects up to 70% of stroke survivors. We designed an app (StrokeVision) to facilitate screening for common post-stroke visual problems (acuity, visual fields, and visual inattention). We sought to describe the test time, feasibility, acceptability, and accuracy of our app-based digital visual assessments against (a) current methods used for bedside screening and (b) gold standard measures.
Patients were prospectively recruited from acute stroke settings. Index tests were app-based assessments of fields and inattention performed by a trained researcher. We compared against usual clinical screening practice of visual fields to confrontation, including inattention assessment (simultaneous stimuli). We also compared the app against gold standard assessments of formal kinetic perimetry (Goldmann or Octopus visual field assessment) and pencil-and-paper tests of inattention (Albert's, Star Cancellation, and Line Bisection). Results of inattention and field tests were adjudicated by a specialist neuro-ophthalmologist. All assessors were masked to each other's results. Participants and assessors graded acceptability using a bespoke scale ranging from 0 (completely unacceptable) to 10 (perfect acceptability).
Of 48 stroke survivors recruited, the complete battery of index and reference tests for fields was successfully completed in 45. Similar acceptability scores were observed for app-based testing [assessor median score 10 (IQR: 9-10); patient 9 (IQR: 8-10)] and traditional bedside testing [assessor 10 (IQR: 9-10); patient 10 (IQR: 9-10)]. Median test time was longer for app-based testing [combined time to completion of all digital tests 420 s (IQR: 390-588)] than for conventional bedside testing [70 s (IQR: 40-70)], but shorter than for gold standard testing [1,260 s (IQR: 1,005-1,620)]. Compared with gold standard assessments, usual screening practice demonstrated 79% sensitivity and 82% specificity for detection of a stroke-related field defect. This compares with 79% sensitivity and 88% specificity for the StrokeVision digital assessment.
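The sensitivity and specificity figures above follow from standard confusion-matrix arithmetic against the gold standard adjudication. As a minimal sketch, the helper below computes both from hypothetical counts; the counts are illustrative assumptions chosen to reproduce roughly the reported 79%/88%, not the study's raw data.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts.

    Sensitivity = TP / (TP + FN): proportion of true field defects detected.
    Specificity = TN / (TN + FP): proportion of defect-free cases correctly cleared.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts (NOT from the paper) that yield approximately
# the reported 79% sensitivity and 88% specificity:
sens, spec = sensitivity_specificity(tp=19, fn=5, tn=21, fp=3)
print(f"sensitivity={sens:.0%} specificity={spec:.0%}")
```

Note that sensitivity and specificity are computed against the gold standard (formal kinetic perimetry), so the same arithmetic applies to both usual bedside screening and the app-based assessment.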
StrokeVision shows promise as a screening tool for visual complications in the acute phase of stroke. The app is at least as good as usual screening and offers other functionality that may make it attractive for use in acute stroke.