For the task of image annotation, traditional methods based on probabilistic topic models, such as correspondence Latent Dirichlet Allocation (corrLDA), assume that an image is a mixture of latent topics. However, such models cannot directly model correlation between topics, because the topic proportions of an image are generated independently. Our model, called correspondence Correlated Topic Model (corrCTM), extends the Correlated Topic Model (CTM) from natural language processing to capture topic correlation through the covariance structure of a more flexible logistic-normal distribution. Unlike previous LDA-based models, the topic proportions in the proposed corrCTM are correlated with one another. This topic correlation propagates from image features to annotation words through the generative process, yielding a correspondence between images and words. We derive an approximate inference and parameter estimation algorithm based on variational methods. Experiments on two benchmark image datasets show improved performance over corrLDA for both annotation and modeling word correlation.
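To make the core modeling difference concrete, the sketch below (not the paper's implementation; the means `mu` and covariance `Sigma` are illustrative placeholders) shows how CTM-style topic proportions are drawn: instead of a Dirichlet draw, whose components cannot covary, a Gaussian vector eta ~ N(mu, Sigma) is sampled and mapped onto the simplex with a softmax, so that off-diagonal entries of Sigma induce correlations between topics.

```python
import math
import random

def cholesky(S):
    """Cholesky factor L of a small symmetric positive-definite S (S = L L^T)."""
    n = len(S)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(S[i][i] - s)
            else:
                L[i][j] = (S[i][j] - s) / L[j][j]
    return L

def sample_topic_proportions(mu, Sigma, rng):
    """Draw eta ~ N(mu, Sigma), then softmax(eta) gives a point on the simplex."""
    n = len(mu)
    L = cholesky(Sigma)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    eta = [mu[i] + sum(L[i][k] * z[k] for k in range(n)) for i in range(n)]
    m = max(eta)                      # subtract max for numerical stability
    e = [math.exp(x - m) for x in eta]
    s = sum(e)
    return [x / s for x in e]

rng = random.Random(0)
mu = [0.0, 0.0, 0.0]
# Illustrative covariance: topics 0 and 1 positively correlated,
# both mildly anti-correlated with topic 2.
Sigma = [[1.0, 0.8, -0.3],
         [0.8, 1.0, -0.3],
         [-0.3, -0.3, 1.0]]
theta = sample_topic_proportions(mu, Sigma, rng)
print(theta, sum(theta))
```

Under this parameterization, documents (or images) that weight topic 0 heavily tend to weight topic 1 as well, which is exactly the kind of structure a Dirichlet prior cannot express.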