
JBE, vol. 26, no. 1, pp.39-60, January, 2021

DOI: https://doi.org/10.5909/JBE.2021.26.1.39

Video-to-Video Generated by Collage Technique

Hyeongrae Cho and Gooman Park

Corresponding author e-mail: gmpark@seoultech.ac.kr

Abstract:

In deep-learning research on generation, many algorithms have appeared, chiefly in the wake of the GAN, but generation in engineering and creation in art are only partly alike. Generation in the engineering sense is judged mainly by quantitative metrics or by whether an output is correct or incorrect; artistic creation, by contrast, cross-examines and doubts "correct" and "incorrect" answers from multiple perspectives to produce a work that interprets the world and human life. In this paper, the video-generation ability of deep learning is interpreted from the perspective of collage and compared with results produced by an artist. The experiment compares and analyzes how faithfully a GAN reproduces work the artist made with the collage technique and where the two diverge creatively, and it measures viewer satisfaction using evaluation items devised for the GAN's reproducibility. To test how well the artist's statement and expressive intent were reproduced, deep-learning algorithms corresponding to the statement keywords were selected and their outputs were compared for similarity. The results show that the GAN fell short of expectations in expressing the collage technique. Nevertheless, for image association it earned higher satisfaction than the human-made work, a positive indication that a GAN can show ability comparable to a human's in abstract creation.
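The paper's evaluation rests on comparing GAN output against the artist's collage frames and scoring reproducibility. As a minimal, hypothetical sketch of what a frame-level similarity score could look like (the function names and the cosine-similarity choice are illustrative assumptions, not the paper's actual questionnaire-based metric):

```python
import numpy as np

def frame_similarity(ref: np.ndarray, gen: np.ndarray) -> float:
    """Cosine similarity between two flattened frames; 1.0 means identical direction."""
    a = ref.astype(np.float64).ravel()
    b = gen.astype(np.float64).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def video_reproduction_score(ref_frames, gen_frames) -> float:
    """Average per-frame similarity over an aligned pair of clips."""
    scores = [frame_similarity(r, g) for r, g in zip(ref_frames, gen_frames)]
    return float(np.mean(scores))
```

In practice, a perceptual measure (e.g., a deep-feature distance such as FID, which the references below discuss) would track human judgments of collage reproduction far better than raw pixel similarity; this sketch only illustrates the shape of the comparison.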



Keywords: Collage, Generation, Video synthesis, GAN, Statement

References:
[1] Ahmed Elgammal, Bingchen Liu, Mohamed Elhoseiny, Marian Mazzone, "CAN: Creative Adversarial Networks, Generating 'Art' by Learning About Styles and Deviating from Style Norms," arXiv:1706.07068v1 [cs.AI], 21 Jun 2017.
[2] Mario Klingemann, "Memories of Passersby I," http://www.irobotnews.com/news/articleView.html?idxno=16731 (Accessed November 20, 2020).
[3] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A. Efros, "Image-to-Image Translation with Conditional Adversarial Networks," arXiv:1611.07004v3 [cs.CV], 26 Nov 2018.
[4] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, Bryan Catanzaro, "High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs," CVPR 2018, https://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_High-Resolution_Image_Synthesis_CVPR_2018_paper.pdf
[5] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Guilin Liu, Andrew Tao, Jan Kautz, Bryan Catanzaro, "Video-to-Video Synthesis," arXiv:1808.06601v2 [cs.CV], 3 Dec 2018.
[6] Jun-Yan Zhu, Taesung Park, Phillip Isola, Alexei A. Efros, "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks," arXiv:1703.10593v7 [cs.CV], 24 Aug 2020.
[7] Tero Karras, Samuli Laine, Timo Aila, "A Style-Based Generator Architecture for Generative Adversarial Networks," arXiv:1812.04948v3 [cs.NE], 29 Mar 2019.
[8] Arun Mallya, Ting-Chun Wang, Karan Sapra, Ming-Yu Liu, "World-Consistent Video-to-Video Synthesis," arXiv:2007.08509v1 [cs.CV], 16 Jul 2020.
[9] Aliaksandr Siarohin, Stéphane Lathuilière, Sergey Tulyakov, Elisa Ricci, Nicu Sebe, "First Order Motion Model for Image Animation," NeurIPS 2019, https://papers.nips.cc/paper/2019/file/31c0b36aef265d9221af80872ceb62f9-Paper.pdf
[10] Manuel Ruder, Alexey Dosovitskiy, Thomas Brox, "Artistic Style Transfer for Videos," arXiv:1604.08610v2 [cs.CV], 19 Oct 2016.
[11] FID, https://velog.io/@tobigs-gm1/evaluationandbias (Accessed January 07, 2021).
[12] Inception Score (IS), https://cyc1am3n.github.io/2020/03/01/is_fid.html (Accessed January 07, 2021).
[13] Precision and recall, https://sumniya.tistory.com/26 (Accessed January 07, 2021).
[14] Semi-abstract art and expression methods, https://m.blog.naver.com/PostView.nhn?blogId=noransonamu&logNo=90139891887&proxyReferer=https:%2F%2Fwww.google.com%2F (Accessed February 10, 2021).
[15] R-CNNs Tutorial, https://blog.lunit.io/2017/06/01/r-cnns-tutorial (Accessed November 19, 2020).
[16] Image pyramid, https://m.blog.naver.com/PostView.nhn?blogId=samsjang&logNo=220508552078&proxyReferer=http:%2F%2Fwww.google.com%2F (Accessed February 10, 2021).
[17] Multi-scale Generator, Multi-scale Discriminator, https://dopelemon.me/pix2pixhd.html (Accessed January 07, 2021).
[18] Google Online Questionnaire, https://www.google.com/intl/ko_kr/forms/about/ (Accessed January 07, 2021).
[19] Interactive Demo, https://affinelayer.com/pixsrv/ (Accessed January 07, 2021).
[20] Yanghao Li, Naiyan Wang, Jiaying Liu, Xiaodi Hou, "Demystifying Neural Style Transfer," arXiv:1701.01036v2 [cs.CV], 1 Jul 2017.
[21] Photorealism, https://en.wikipedia.org/wiki/Photorealism (Accessed November 19, 2020).
[22] Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, Wenzhe Shi, "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network," arXiv:1609.04802v5 [cs.CV], 25 May 2017.
[23] Progressive Growing GAN (the GAN is trained gradually, from low to high resolution), https://ml-dnn.tistory.com/8 (Accessed November 19, 2020).
[24] Adaptive Instance Normalization (AdaIN), https://m.blog.naver.com/PostView.nhn?blogId=tlqordl89&logNo=221536378926&proxyReferer=https:%2F%2Fwww.google.com%2F (Accessed November 19, 2020).
[25] Figurative art, https://en.wikipedia.org/wiki/Figurative_art (Accessed February 10, 2021).
[26] Neural Filters, https://helpx.adobe.com/kr/photoshop/using/neural-filters.html (Accessed January 07, 2021).

Editorial Office
1108, New building, 22, Teheran-ro 7-gil, Gangnam-gu, Seoul, Korea
Homepage: www.kibme.org TEL: +82-2-568-3556 FAX: +82-2-568-3557
Copyright ⓒ 2012 The Korean Institute of Broadcast and Media Engineers
All Rights Reserved