Steganography enables a user to hide information by embedding secret messages within other, non-secret texts or images. Research in this direction recently gained new momentum when Hayes & Danezis (NIPS 2017) used adversarial learning to generate steganographic images. In adversarial learning, two neural networks are trained to communicate securely in the presence of an eavesdropper (a third neural network). Hayes & Danezis carried this idea over to steganography: two neural networks (Bob & Charlie) learn “embed” and “extract” algorithms by exchanging images with hidden text in the presence of an eavesdropping neural network (Eve). Because the models in this training scheme are non-convex, two different machines may not learn the same embedding and extraction models even if they train on the same set of images. We take a different approach to address this issue of “robustness” in the “decryption” process. In this paper, we introduce a third neural network (Alice) that initiates the learning process with the two neural networks (Bob & Charlie). We implement this scheme and demonstrate through experiments that Bob & Charlie can learn the same embedding and extraction model by using a new loss function and training process.
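The embed/extract training described above can be illustrated with a toy numerical sketch. Everything below is a hypothetical simplification, not the paper's actual architecture: the embedder ("Bob") and extractor ("Charlie") are single linear maps trained jointly by gradient descent, and a simple distortion penalty stands in for the eavesdropper Eve, who in the real scheme is a trained discriminator network.

```python
import numpy as np

# Toy sketch (NOT the paper's networks):
#   embed(cover, msg)  = cover + A @ msg
#   extract(stego)     = sigmoid(B @ stego + b0)
# The penalty lam * ||stego - cover||^2 is a stand-in for Eve's
# detectability loss; a full setup would train Eve adversarially.
rng = np.random.default_rng(0)
n, k = 16, 4                      # cover size, message bits
A = rng.normal(0.0, 0.1, (n, k))  # embedder ("Bob") weights
B = rng.normal(0.0, 0.1, (k, n))  # extractor ("Charlie") weights
b0 = np.zeros(k)                  # extractor bias
lam, lr = 0.01, 0.2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def step(cover, msg):
    """One joint gradient step on BCE(extracted, msg) + lam * distortion."""
    global A, B, b0
    stego = cover + A @ msg
    p = sigmoid(B @ stego + b0)
    eps = 1e-9
    loss = (-np.mean(msg * np.log(p + eps) + (1 - msg) * np.log(1 - p + eps))
            + lam * np.sum((stego - cover) ** 2))
    dz = (p - msg) / k                        # grad of BCE w.r.t. logits
    dB = np.outer(dz, stego)
    ds = B.T @ dz + 2 * lam * (stego - cover)
    dA = np.outer(ds, msg)
    A -= lr * dA
    B -= lr * dB
    b0 -= lr * dz
    return loss

losses = []
for _ in range(6000):
    cover = rng.normal(0.0, 1.0, n)
    msg = rng.integers(0, 2, k).astype(float)
    losses.append(step(cover, msg))

# Evaluate bit-recovery accuracy on fresh covers and messages.
hits, trials = 0, 0
for _ in range(500):
    cover = rng.normal(0.0, 1.0, n)
    msg = rng.integers(0, 2, k).astype(float)
    guess = (sigmoid(B @ (cover + A @ msg) + b0) > 0.5).astype(float)
    hits += int((guess == msg).sum())
    trials += k
accuracy = hits / trials
print(f"final loss ~ {np.mean(losses[-100:]):.3f}, bit accuracy = {accuracy:.3f}")
```

Even this linear toy shows the core tension the loss function must balance: the embedder wants a strong signal so the extractor can recover the bits, while the distortion (detectability) term pushes the stego image back toward the cover.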