It is well known that artificial neural networks can learn from newly provided data. A special case of supervised learning is the mutual learning of two neural networks. Applied to a specific class of networks called Tree Parity Machines (abbreviated TPM networks), this type of learning leads both networks to reach identical weight vectors. This phenomenon is called network synchronization and can be exploited to construct a cryptographic key exchange protocol. At the beginning of the learning process, the weights of both networks are initialized to random values. The time needed to synchronize the networks depends on their initial weight values and on the input vectors, which are also generated randomly at each learning step. This paper discusses the relationship between the distribution from which the initial weights of the networks are drawn and the compatibility of those weights. To measure the initial compatibility of the weights, a modified Euclidean metric is invoked. This tool makes it possible to scale the measure of weight compatibility with respect to the size of the network, which in turn allows TPM networks of various sizes to be compared. The paper presents simulation results and discusses them in the context of the issue described above.
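
The mutual-learning process summarized above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the parameter values (K = 3 hidden units, N = 4 inputs per unit, weight bound L = 3), the choice of the Hebbian update rule, and the sign convention for a zero local field are assumptions made for the sketch.

```python
import numpy as np

K, N, L = 3, 4, 3  # hidden units, inputs per unit, weight bound (illustrative values)
rng = np.random.default_rng(0)

def tpm_output(w, x):
    # sigma_k = sign of the local field of hidden unit k (sign(0) taken as +1 here);
    # the network output tau is the product of the hidden-unit outputs
    sigma = np.where((w * x).sum(axis=1) >= 0, 1, -1)
    return sigma, sigma.prod()

def hebbian_update(w, x, sigma, tau):
    # update only the hidden units that agree with the network output,
    # keeping every weight inside the discrete range [-L, L]
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + tau * x[k], -L, L)

# both parties start from independently drawn random integer weights
wA = rng.integers(-L, L + 1, size=(K, N))
wB = rng.integers(-L, L + 1, size=(K, N))

steps = 0
while not np.array_equal(wA, wB) and steps < 100_000:
    x = rng.choice([-1, 1], size=(K, N))  # common random input at each step
    sA, tA = tpm_output(wA, x)
    sB, tB = tpm_output(wB, x)
    if tA == tB:                          # learn only when the outputs agree
        hebbian_update(wA, x, sA, tA)
        hebbian_update(wB, x, sB, tB)
    steps += 1

print("synchronized:", np.array_equal(wA, wB), "after", steps, "steps")
```

Once the loop ends, both weight matrices coincide and could serve as a shared secret; the number of steps needed varies with the random initial weights and inputs, which is exactly the dependence the paper investigates.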