An Experimental Approach to Signature Generation Using Generative Adversarial Networks
Keywords:
Signature Generation, Deep Learning, Generative Adversarial Networks

Abstract
Background and Objectives: A signature plays a crucial role in identity verification, as it serves as a unique representation of an individual. Traditionally, signature design relies on the expert guidance of a skilled designer, while artificial intelligence (AI) has so far been applied predominantly to signature verification and handwritten text generation rather than to signature design. The present study therefore aimed to explore the generation of signatures from English names using Generative Adversarial Networks (GANs).
Methodology: The present research employed the IAM Handwriting dataset, which was processed through a deep learning pipeline built on ScrabbleGAN. The generated signatures were then compared with those produced by models based on Long Short-Term Memory (LSTM) and Transformer architectures. The evaluation was conducted through human assessment of the realism and quality of the generated signatures.
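To make the pipeline concrete, the sketch below illustrates the core idea behind a ScrabbleGAN-style generator: each character of the input name is combined with a shared noise (style) vector and mapped to its own image patch, and the patches are concatenated horizontally so the output width grows with the length of the name. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; all class, function and parameter names here are hypothetical.

```python
import torch
import torch.nn as nn

class CharConditionedGenerator(nn.Module):
    """Sketch of a ScrabbleGAN-style generator (hypothetical names):
    one patch per character, concatenated along the width axis so that
    output width grows with word length."""

    def __init__(self, vocab_size: int, noise_dim: int = 128,
                 patch_w: int = 16, img_h: int = 32):
        super().__init__()
        self.char_embed = nn.Embedding(vocab_size, noise_dim)
        # Maps (character embedding + shared noise) to one flat grayscale patch.
        self.to_patch = nn.Sequential(
            nn.Linear(noise_dim * 2, 256),
            nn.ReLU(),
            nn.Linear(256, img_h * patch_w),
            nn.Tanh(),  # pixel values in [-1, 1]
        )
        self.img_h, self.patch_w = img_h, patch_w

    def forward(self, char_ids: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # char_ids: (batch, seq_len); z: (batch, noise_dim), one style per word.
        emb = self.char_embed(char_ids)                     # (B, L, D)
        z_rep = z.unsqueeze(1).expand(-1, emb.size(1), -1)  # (B, L, D)
        patches = self.to_patch(torch.cat([emb, z_rep], dim=-1))
        patches = patches.view(-1, emb.size(1), self.img_h, self.patch_w)
        # Concatenate character patches side by side: (B, 1, H, L * W).
        return torch.cat(list(patches.unbind(dim=1)), dim=-1).unsqueeze(1)

# Usage: generate one fake "signature" image for a 5-character name.
gen = CharConditionedGenerator(vocab_size=26)
name = torch.randint(0, 26, (1, 5))  # encoded characters of the name
noise = torch.randn(1, 128)          # one style vector per signature
fake = gen(name, noise)              # -> shape (1, 1, 32, 80)
```

In the actual ScrabbleGAN design the per-character patches overlap and are refined by a convolutional generator trained adversarially alongside a recognizer, which is what allows outputs of varying length; the sketch above only captures the character-conditioned, width-growing structure.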
Main Results: The experimental results indicate that ScrabbleGAN was capable of generating relatively realistic signatures. However, it struggled with background removal, which affected the overall quality of the generated outputs. When ScrabbleGAN was compared with the LSTM and Transformer models, the latter two demonstrated superior performance in eliminating background noise. Additionally, the signatures generated by ScrabbleGAN were found to be less visually convincing than those produced by LSTM-based models.
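Human assessments of this kind are commonly summarized as a mean opinion score (MOS) per model. The short sketch below shows one way such ratings could be aggregated; the rating scale and the scores themselves are hypothetical placeholders, not the study's data.

```python
from statistics import mean, stdev

# Hypothetical 1-5 realism ratings from human judges,
# one list per signature generation model.
ratings = {
    "ScrabbleGAN": [3, 4, 2, 3, 3],
    "LSTM": [4, 4, 5, 3, 4],
    "Transformer": [4, 3, 4, 4, 3],
}

for model, scores in ratings.items():
    # Mean opinion score with a simple spread estimate.
    print(f"{model}: MOS = {mean(scores):.2f} ± {stdev(scores):.2f}")
```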
Conclusions: While ScrabbleGAN demonstrates potential in generating signatures from English names, its limitations in background removal and signature authenticity highlight the need for further refinements. The present study suggests that although GAN-based approaches can be utilized for signature generation, additional improvements are required to enhance the realism and consistency of the generated results.
Practical Application: The findings of the present study can contribute to the development of automated signature generation systems, which could be applied in digital document signing, personalized signature creation and AI-driven handwriting applications. Furthermore, the insights gained from the present research can serve as a foundation for improving signature synthesis models, ultimately leading to higher-quality and more reliable signature generation techniques.
References
Diaz, M., Ferrer, M.A., Impedovo, D., Malik, M.I., Pirlo, G. and Plamondon, R. 2019. A perspective analysis of handwritten signature technology. ACM Computing Surveys, 51, 1-39. https://doi.org/10.1145/3274658
Fogel, S., Averbuch-Elor, H., Cohen, S., Mazor, S. and Litman, R. 2020. ScrabbleGAN: Semi-supervised varying length handwritten text generation. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4324–4333. https://doi.org/10.1109/CVPR42600.2020.00438
Graves, A. 2014. Generating sequences with recurrent neural networks. arXiv:1308.0850, 43. https://doi.org/10.48550/arXiv.1308.0850
Bhunia, A.K., Khan, S., Cholakkal, H., Anwer, R.M., Khan, F.S. and Shah, M. 2021. Handwriting transformers. 2021 IEEE/CVF International Conference on Computer Vision, 1066-1074. https://doi.org/10.1109/ICCV48922.2021.00112
Luhman, T. and Luhman, E. 2020. Diffusion models for handwriting generation. arXiv:2011.06704, 17. https://doi.org/10.48550/arXiv.2011.06704
Tan, B.R., Yin, F., Wu, Y.C. and Liu, C.L. 2017. Chinese handwriting generation by neural network based style transformation. Lecture Notes in Computer Science, 10666, 408-419. https://doi.org/10.1007/978-3-319-71607-7_36
Mustapha, I.B., Hasan, S., Nabus, H. and Shamsuddin, S.M. 2022. Conditional deep convolutional generative adversarial networks for isolated handwritten Arabic character generation. Arabian Journal for Science and Engineering, 47, 1309-1320. https://doi.org/10.1007/s13369-021-05796-0
Ji, B. and Chen, T. 2020. Generative adversarial network for handwritten text. arXiv:1907.11845, 12. https://doi.org/10.48550/arXiv.1907.11845
Marti, U.V. and Bunke, H. 2002. The IAM-database: An English sentence database for offline handwriting recognition. International Journal on Document Analysis and Recognition, 5, 39-46. https://doi.org/10.1007/s100320200071
Brock, A., Donahue, J. and Simonyan, K. 2019. Large scale GAN training for high fidelity natural image synthesis. International Conference on Learning Representations, 6-9 May 2019, New Orleans, LA, USA, 1-35.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L. and Polosukhin, I. 2017. Attention is all you need. Conference on Neural Information Processing Systems, 4-9 December 2017, Long Beach, CA, USA, 1-11.
Rombach, R., Blattmann, A., Lorenz, D., Esser, P. and Ommer, B. 2022. High-resolution image synthesis with latent diffusion models. IEEE/CVF Conference on Computer Vision and Pattern Recognition, 18-24 June 2022, New Orleans, LA, USA, 10674-10685. https://doi.org/10.1109/CVPR52688.2022.01042
Podell, D., English, Z., Lacey, K., Blattmann, A., Dockhorn, T., Müller, J., Penna, J. and Rombach, R. 2024. SDXL: Improving latent diffusion models for high-resolution image synthesis. International Conference on Learning Representations, 7-11 May 2024, Vienna, Austria, 13.
Zhang, J., Huang, J., Jin, S. and Lu, S. 2024. Vision-language models for vision tasks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46, 5625-5644. https://doi.org/10.1109/TPAMI.2024.3369699

License
Copyright (c) 2025 King Mongkut's University of Technology Thonburi

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Any form of content contained in an article published in Science and Engineering Connect, including text, equations, formulas, tables, figures and other forms of illustration, is the copyright of King Mongkut's University of Technology Thonburi. Reproduction of this content in any format for commercial purposes requires the prior written consent of the Editor of the Journal.