Ji Shuiwang, Yuan Lei, Li Ying-Xin, Zhou Zhi-Hua, Kumar Sudhir, Ye Jieping
Center for Evolutionary Functional Genomics, The Biodesign Institute, Arizona State University, Tempe, AZ 85287.
KDD. 2009 Jun 28;2009:407-415. doi: 10.1145/1557019.1557068.
Drosophila gene expression pattern images document the spatial and temporal dynamics of gene expression and are valuable tools for elucidating gene functions, interactions, and networks during Drosophila embryogenesis. To provide text-based pattern searching, the images in the Berkeley Drosophila Genome Project (BDGP) study are annotated with ontology terms manually by human curators. Because the number of images needing text descriptions is rapidly increasing, we present a systematic approach for automating this task. We consider both an improved feature representation and a novel learning formulation to boost annotation performance. For feature representation, we adapt the bag-of-words scheme commonly used in visual recognition problems so that the image group information in the BDGP study is retained. Moreover, images from multiple views can be integrated naturally in this representation. To reduce the quantization error caused by the bag-of-words representation, we propose an improved feature representation scheme based on the sparse learning technique. In the design of the learning formulation, we propose a local regularization framework that incorporates the correlations among terms explicitly. We further show that the resulting optimization problem admits an analytical solution. Experimental results show that the representation based on sparse learning outperforms the bag-of-words representation significantly. Results also show that incorporating the term-term correlations improves annotation performance consistently.
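The abstract contrasts the bag-of-words representation with a sparse-learning alternative. A minimal sketch of that contrast, not the paper's actual pipeline: bag-of-words hard-assigns each local descriptor to its nearest codeword (incurring quantization error), while sparse coding reconstructs the descriptor as a sparse combination of codewords. The codebook sizes are illustrative, and the simple ISTA solver for the lasso subproblem is an assumed stand-in for whatever sparse-learning solver the paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative codebook of visual words (columns) and one local descriptor.
# Dimensions are toy values, not those used in the BDGP study.
D = rng.standard_normal((16, 8))            # 16-dim descriptors, 8 visual words
D /= np.linalg.norm(D, axis=0)              # unit-norm codewords
x = rng.standard_normal(16)

# Bag-of-words: hard-assign the descriptor to its nearest codeword.
# Distances to all other codewords are discarded, which is the
# quantization error the sparse representation reduces.
hard = np.zeros(8)
hard[np.argmin(np.linalg.norm(D - x[:, None], axis=0))] = 1.0

# Sparse coding: solve min_a ||x - D a||^2 + lam * ||a||_1
# with a few iterations of ISTA (proximal gradient descent).
def sparse_code(D, x, lam=0.1, n_iter=200):
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

soft = sparse_code(D, x)
print("hard assignment:", hard)
print("sparse code    :", np.round(soft, 3))
print("sparse reconstruction error:", np.linalg.norm(x - D @ soft))
```

The sparse code keeps a real-valued weight for every codeword that participates in the reconstruction, so the resulting image-group features carry more information than the one-hot counts of plain bag-of-words.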
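The abstract also states that the learning formulation incorporates term-term correlations and admits an analytical solution. The paper's exact local regularization framework is not given here; as an assumed illustration, the sketch below uses a generic multi-label least-squares model with a correlation-graph regularizer, whose stationarity condition is a Sylvester equation with a closed-form solution. The co-occurrence-based correlation matrix and the penalty weights are hypothetical choices.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(1)

n, d, k = 50, 10, 4                           # images, features, ontology terms
X = rng.standard_normal((n, d))               # image features (e.g. sparse codes)
Y = rng.integers(0, 2, (n, k)).astype(float)  # term indicator matrix

# Term-correlation graph Laplacian L = deg(S) - S, where S counts term
# co-occurrences (a stand-in for the correlations used in the paper).
S = (Y.T @ Y) * (1 - np.eye(k))
L = np.diag(S.sum(axis=1)) - S

lam1, lam2 = 1.0, 0.1
# Objective: ||X W - Y||_F^2 + lam1 ||W||_F^2 + lam2 tr(W L W^T).
# Setting the gradient to zero yields the Sylvester equation
#   (X^T X + lam1 I) W + W (lam2 L) = X^T Y,
# which has an analytical solution.
A = X.T @ X + lam1 * np.eye(d)
B = lam2 * L
W = solve_sylvester(A, B, X.T @ Y)
print("stationarity residual:", np.linalg.norm(A @ W + W @ B - X.T @ Y))
```

The `tr(W L W^T)` term pulls the weight vectors of correlated terms toward each other, so an image predicted to carry one term is nudged toward its frequently co-annotated terms, which is the intuition behind exploiting term-term correlations.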