DISPAR, proposed by Miikkulainen and Dyer, consists of layered neural networks called FGREP modules and performs the task of parsing a story and generating a paraphrase. The patterns in the input and output layers of an FGREP module are sequences of word representations, and FGREP modules communicate with each other through these word representations, which are updated during training. This paper shows that the update equation of the FGREP module has theoretical problems: its learning algorithm does not follow the steepest descent method. When the algorithm is revised to follow steepest descent, an experiment shows that the noun vectors all converge to a single identical vector, a solution that is meaningless for the task. CAN has already been proposed as a model that avoids this problem. This paper additionally describes a modified version of the original FGREP module, then compares CAN, the modified FGREP module, and the original FGREP module. The goal of the original FGREP module is to connect several PDP modules into an architecture for higher-level cognitive tasks. Finally, this paper discusses the use of CAN and the modified FGREP module as element modules in such connected architectures.
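The FGREP-style update discussed above can be sketched as follows. This is a minimal illustrative example, not the authors' exact equations: a single auto-associative module with a sigmoid hidden layer, where backpropagation is extended one layer further so that the input word representations in the lexicon are themselves updated. The network sizes, learning rate, and clipping range are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, hidden, vocab = 4, 8, 3
lexicon = rng.uniform(0.0, 1.0, (vocab, dim))   # one representation per word
W1 = rng.normal(0.0, 0.1, (dim, hidden))
W2 = rng.normal(0.0, 0.1, (hidden, dim))
lr = 0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(500):
    for w in range(vocab):
        x = lexicon[w].copy()   # input pattern = current word representation
        t = x                   # auto-associative target is the same vector
        h = sigmoid(x @ W1)
        y = sigmoid(h @ W2)
        # standard backpropagation deltas
        dy = (y - t) * y * (1.0 - y)
        dh = (dy @ W2.T) * h * (1.0 - h)
        W2 -= lr * np.outer(h, dy)
        W1 -= lr * np.outer(x, dh)
        # FGREP extension: propagate the error into the input layer and
        # update the word representation itself.  Note the target t also
        # depends on the lexicon, so treating it as a constant here is
        # exactly the point where, as the paper argues, the update stops
        # being true steepest descent.
        lexicon[w] -= lr * (dh @ W1.T)
        lexicon[w] = np.clip(lexicon[w], 0.0, 1.0)
```

Because the same lexicon entry appears both as input and as target, the true gradient of the error with respect to a representation contains an extra term that this update ignores, which motivates the revised algorithm examined in the paper.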