In past OCR research, different OCR engines have been used for different printing types, i.e., machine-printed characters, handwritten characters, and decorative fonts. Recent research, however, reveals that convolutional neural networks (CNNs) can realize a universal OCR that deals with any printing type without pre-classification into the individual types. In this paper, we analyze how a CNN for universal OCR manages the different printing types. More specifically, we try to find where a handwritten character of a class and a machine-printed character of the same class are 'fused' in the CNN. For the analysis, we use two different approaches. The first is a statistical analysis that detects the CNN units which are sensitive (or insensitive) to the type difference. The second is a network-based visualization of the pattern distribution in each layer. Both analyses suggest the same trend: the types are not fully fused in the convolutional layers, but the distributions of the same class from different types become closer in the upper layers.
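As an illustration of the first approach, the sensitivity of a unit to the type difference can be tested by comparing its activations for handwritten versus machine-printed inputs. The following is a minimal sketch with synthetic activations; the layer size, sample counts, shifted units, and significance threshold are all assumptions for illustration, not the paper's actual procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical activations of 64 units in one layer, for 200 samples
# per printing type (handwritten vs. machine-printed).
handwritten = rng.normal(0.0, 1.0, size=(200, 64))
printed = rng.normal(0.0, 1.0, size=(200, 64))
printed[:, :8] += 1.5  # assume the first 8 units respond differently by type

# Welch's t-test per unit: a small p-value marks the unit as
# type-sensitive; large p-values suggest type-insensitive units.
t, p = stats.ttest_ind(handwritten, printed, axis=0, equal_var=False)
sensitive = np.flatnonzero(p < 0.01 / 64)  # Bonferroni-corrected threshold
print(sensitive)
```

In this synthetic setting the test recovers exactly the units whose activation distributions were shifted between the two types.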