Painting Authorship and Forgery Detection Challenges with AI Image Generation Algorithms: Rembrandt and 17th Century Dutch Painters as a Case Study

  1. Marcelo Fraile-Narváez
  2. Ismael Sagredo-Olivenza
  3. Nadia McGowan

  Universidad Internacional de La Rioja, Logroño, Spain (ROR: https://ror.org/029gnnp81)

Journal: IJIMAI

ISSN: 1989-1660

Year of publication: 2022

Volume: 7

Issue: 7

Pages: 7-13

Type: Article

DOI: 10.9781/IJIMAI.2022.11.005


Abstract

Image authorship attribution presents many challenges, which have grown with the synthetic image generation capabilities offered by the artificial intelligence algorithms available today. The hypothesis of this research is that artificial intelligence can itself be used as a forgery detection tool, by means of a deep learning algorithm. The proposed algorithm was trained on a dataset comprising paintings by Rembrandt and other 17th century Dutch painters. Three experiments were performed. The first built a classifier able to ascertain whether a painting belongs to the Rembrandt or non-Rembrandt category, depending on whether it was painted by this author. The second extended the task to four categories, classifying artworks as Rembrandt, Eeckhout, Leveck or other Dutch painters. The third experiment applied the same categories to paintings generated by Dall-e 2. The experiments confirmed the hypothesis, with the best runs reaching accuracy rates above 90%. Future research with extended datasets and higher image resolution is suggested to improve these results.
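The binary Rembrandt / non-Rembrandt experiment described above can be illustrated with a minimal classification sketch. The paper's actual method is a deep learning algorithm trained on real painting images; the code below is only a conceptual stand-in, using synthetic feature vectors in place of image data and a plain logistic-regression classifier in place of the deep network, so all names, dimensions, and numbers here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for flattened image features (hypothetical data;
# the paper trains a deep network on crops of real paintings).
n, d = 200, 64
x_rembrandt = rng.normal(0.6, 0.1, size=(n, d))  # "Rembrandt" class
x_other = rng.normal(0.4, 0.1, size=(n, d))      # "non-Rembrandt" class
X = np.vstack([x_rembrandt, x_other])
y = np.array([1] * n + [0] * n)

# Train a logistic-regression classifier with plain gradient descent.
w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30)))  # sigmoid
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * float(np.mean(p - y))

# Binary decision: Rembrandt (1) vs. non-Rembrandt (0).
pred = ((X @ w + b) > 0.0).astype(int)
accuracy = float(np.mean(pred == y))
print(f"training accuracy: {accuracy:.2f}")
```

Because the two synthetic classes are well separated, the sketch reaches high training accuracy; on real paintings the paper's deep model is needed to capture texture and brushwork cues that a linear classifier cannot.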
