AnimeGANv2

「Open Source」. The improved version of AnimeGAN.
「Project Page」 | Landscape photos/videos to anime

News
(2020.12.25) AnimeGANv3 will be released along with its paper in the spring of 2021.
(2021.02.21) The PyTorch version of AnimeGANv2 has been released; thanks to @bryandlee for his contribution.

Focus:

| Anime style    | Film                            | Picture Number | Quality | Download Style Dataset |
|----------------|---------------------------------|----------------|---------|------------------------|
| Miyazaki Hayao | The Wind Rises                  | 1752           | 1080p   | Link                   |
| Makoto Shinkai | Your Name & Weathering with You | 1445           | BD      |                        |
| Kon Satoshi    | Paprika                         | 1284           | BDRip   |                        |

     Different styles of training have different loss weights!

News:

The improvement directions of AnimeGANv2 mainly include the following 4 points:  
  • 1. Solve the problem of high-frequency artifacts in the generated image.

  • 2. It is easy to train and directly achieve the effects in the paper.

  • 3. Further reduce the number of parameters of the generator network (generator size: 8.17 MB); the lite version has an even smaller generator model.

  • 4. Use new high-quality style data, taken from BD movies as much as possible.

          AnimeGAN can be accessed from here.


Requirements

  • python 3.6
  • tensorflow-gpu
    • tensorflow-gpu 1.8.0 (ubuntu, GPU 1080Ti or Titan xp, cuda 9.0, cudnn 7.1.3)
    • tensorflow-gpu 1.15.0 (ubuntu, GPU 2080Ti, cuda 10.0.130, cudnn 7.6.0)
  • opencv
  • tqdm
  • numpy
  • glob
  • argparse

Usage

1. Download vgg19

vgg19.npy

2. Download Train/Val Photo dataset

Link

3. Do edge_smooth

python edge_smooth.py --dataset Hayao --img_size 256
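
This step (described in the AnimeGAN paper) produces a copy of each style image with deliberately blurred edges, so the discriminator can learn to penalize output that lacks crisp anime edges. A rough sketch of the idea, assuming Canny edge detection plus a Gaussian blur applied only around the detected edges; the repo's edge_smooth.py may differ in thresholds and kernel sizes:

    import cv2
    import numpy as np

    def edge_smooth(img_bgr, kernel_size=5):
        # Detect edges, dilate them, then replace only those pixels
        # with their Gaussian-blurred counterparts.
        gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)
        mask = cv2.dilate(edges, np.ones((kernel_size, kernel_size), np.uint8)) != 0
        blurred = cv2.GaussianBlur(img_bgr, (kernel_size, kernel_size), 0)
        out = img_bgr.copy()
        out[mask] = blurred[mask]
        return out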

4. Calculate the three-channel (BGR) color difference

python data_mean.py --dataset Hayao
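
The three values passed to --data_mean in the next step appear to be the per-channel BGR mean of the style dataset minus the mean over all three channels (note that the example values [13.1360, -8.6698, -4.4661] sum to roughly zero). A minimal sketch under that assumption; the repo's data_mean.py is the authoritative implementation:

    import glob
    import cv2
    import numpy as np

    def data_mean(style_dir):
        totals, pixels = np.zeros(3), 0
        for path in glob.glob(style_dir + '/*.jpg'):
            img = cv2.imread(path).astype(np.float64)
            totals += img.reshape(-1, 3).sum(axis=0)
            pixels += img.shape[0] * img.shape[1]
        channel_mean = totals / pixels             # mean of B, G, R
        return channel_mean - channel_mean.mean()  # per-channel color difference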

5. Train

python main.py --phase train --dataset Hayao --data_mean [13.1360,-8.6698,-4.4661] --epoch 101 --init_epoch 10
For the light version: python main.py --phase train --dataset Hayao --data_mean [13.1360,-8.6698,-4.4661] --light --epoch 101 --init_epoch 10

6. Extract the weights of the generator

python get_generator_ckpt.py --checkpoint_dir ../checkpoint/AnimeGAN_Hayao_lsgan_300_300_1_2_10_1 --style_name Hayao
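
The point of this step is to drop the discriminator and optimizer state so only the small generator remains. A minimal sketch of one way to do it (an assumption; the repo's get_generator_ckpt.py may work differently), reading the training checkpoint and re-saving only the variables under the generator scope:

    import tensorflow as tf

    checkpoint_dir = '../checkpoint/AnimeGAN_Hayao_lsgan_300_300_1_2_10_1'
    reader = tf.train.NewCheckpointReader(tf.train.latest_checkpoint(checkpoint_dir))
    gen_vars = {name: tf.Variable(reader.get_tensor(name), name=name)
                for name in reader.get_variable_to_shape_map()
                if name.startswith('generator/')}
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        tf.train.Saver(var_list=gen_vars).save(
            sess, 'checkpoint/generator_Hayao_weight/Hayao.model')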

7. Inference

python test.py --checkpoint_dir checkpoint/generator_Hayao_weight --test_dir dataset/test/HR_photo --style_name Hayao/HR_photo

8. Convert video to anime

python video2anime.py --video video/input/お花見.mp4 --checkpoint_dir checkpoint/generator_Paprika_weight
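
Under the hood this amounts to running the generator frame by frame. A rough sketch of that loop with OpenCV (the stylize callable is hypothetical and stands in for a session run of the generator; the actual video2anime.py may batch frames and handle audio differently):

    import cv2

    def convert_video(in_path, out_path, stylize):
        cap = cv2.VideoCapture(in_path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
        out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, size)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            out.write(stylize(frame))  # stylize: hypothetical BGR frame -> anime frame
        cap.release()
        out.release()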


Results


😍 Photo to Paprika Style

😍 Photo to Hayao Style

😍 Photo to Shinkai Style

License

This repo is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, and scientific publications. Permission is granted to use AnimeGANv2 provided that you agree to my license terms. For commercial use, please contact us via email so we can help you obtain the authorization letter.

Author

Xin Chen

Comments
  • Training code coming soon?

    Hello Tachibana san, great work and congratulations on AnimeGANv2. I had good success converting the models and running them on Android. However, latency is still an issue: it takes about 500 ms to run a 128x128 patch of image using TensorFlow Android (I tried TFLite, but it strangely increases the inference time). I want to modify the network architecture and optimize its performance further to make it a real-time application (under 100 ms). So, to cut a long story short, are you planning to release the training code in the near future? :)

    Thank you.

  • Strange G_vgg loss curve

    Hello, thank you for posting this great work!

    I have retrained the model with a customized dataset; the results look great, but the loss curves seem strange to me.

    The adversarial loss seems OK: I set the weights for D and G to 200 and 300, respectively, and the losses are approaching equilibrium.

    However, the G_vgg loss, which consists of c_loss, s_loss, color_loss, and tv_loss, bottoms out at around epoch 30 and then starts increasing. Looking at each individual component of G_vgg_loss, only the s_loss keeps decreasing over time; all the others start increasing after epoch 30.

    Interestingly, the validation samples from epoch 100 are apparently better than the ones from epoch 30. Does anyone experience the same?
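
    For reference, the weight names printed at training start (con_weight, sty_weight, color_weight, tv_weight; see the training log quoted in the next issue) suggest G_vgg is a weighted sum of those four components. A sketch under that assumption, with the Hayao weights from the log as defaults:

        def g_vgg_loss(c_loss, s_loss, color_loss, tv_loss,
                       con_weight=1.5, sty_weight=2.5, color_weight=10.0, tv_weight=1.0):
            # Assumed composition (not the repo's verified code): a weighted sum,
            # so a single rising component can pull the whole G_vgg curve up.
            return (con_weight * c_loss + sty_weight * s_loss
                    + color_weight * color_loss + tv_weight * tv_loss)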

  • Run on CPU instead of GPU

    Hi, I'm trying to run this project but have a little problem: when I start the training phase, the code runs only on the CPU, even though I installed cudatoolkit and tensorflow-gpu. Can you help me?

    2020-12-06 18:00:45.278352: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    2020-12-06 18:00:45.302721: E tensorflow/stream_executor/cuda/cuda_driver.cc:397] failed call to cuInit: CUDA_ERROR_NO_DEVICE                                                              
    2020-12-06 18:00:45.302766: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:158] retrieving CUDA diagnostic information for host: host-name
    2020-12-06 18:00:45.302775: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:165] hostname: host-name
    2020-12-06 18:00:45.302816: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:189] libcuda reported version is: 450.66.0
    2020-12-06 18:00:45.302853: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:193] kernel reported version is: 450.66.0
    2020-12-06 18:00:45.302863: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:300] kernel version seems to match DSO: 450.66.0
    
    npy file loaded -------  vgg19_weight/vgg19.npy
    ##### Information #####
    # gan type :  lsgan
    # light :  False
    # dataset :  Hayao
    # max dataset number :  6656
    # batch_size :  12
    # epoch :  101
    # init_epoch :  10
    # training image size [H, W] :  [256, 256]
    # g_adv_weight,d_adv_weight,con_weight,sty_weight,color_weight,tv_weight :  300.0 300.0 1.5 2.5 10.0 1.0
    # init_lr,g_lr,d_lr :  0.0002 2e-05 4e-05
    # training_rate G -- D: 1 : 1
    build model finished: 0.138872s
    build model finished: 0.130571s
    build model finished: 0.120662s
    build model finished: 0.127711s
    build model finished: 0.123440s
    G:
    ---------
    Variables: name (type shape) [size]
    ---------
    generator/G_MODEL/A/Conv/weights:0 (float32_ref 7x7x3x32) [4704, bytes: 18816]
    generator/G_MODEL/A/LayerNorm/beta:0 (float32_ref 32) [32, bytes: 128]
    generator/G_MODEL/A/LayerNorm/gamma:0 (float32_ref 32) [32, bytes: 128]
    generator/G_MODEL/A/Conv_1/weights:0 (float32_ref 3x3x32x64) [18432, bytes: 73728]
    generator/G_MODEL/A/LayerNorm_1/beta:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/A/LayerNorm_1/gamma:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/A/Conv_2/weights:0 (float32_ref 3x3x64x64) [36864, bytes: 147456]
    generator/G_MODEL/A/LayerNorm_2/beta:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/A/LayerNorm_2/gamma:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/B/Conv/weights:0 (float32_ref 3x3x64x128) [73728, bytes: 294912]
    generator/G_MODEL/B/LayerNorm/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/B/LayerNorm/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/B/Conv_1/weights:0 (float32_ref 3x3x128x128) [147456, bytes: 589824]
    generator/G_MODEL/B/LayerNorm_1/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/B/LayerNorm_1/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/C/Conv/weights:0 (float32_ref 3x3x128x128) [147456, bytes: 589824]
    generator/G_MODEL/C/LayerNorm/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/C/LayerNorm/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/C/r1/Conv/weights:0 (float32_ref 1x1x128x256) [32768, bytes: 131072]
    generator/G_MODEL/C/r1/LayerNorm/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/LayerNorm/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/r1/w:0 (float32_ref 3x3x256x1) [2304, bytes: 9216]
    generator/G_MODEL/C/r1/r1/bias:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/1/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/1/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/Conv_1/weights:0 (float32_ref 1x1x256x256) [65536, bytes: 262144]
    generator/G_MODEL/C/r1/2/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/2/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r2/Conv/weights:0 (float32_ref 1x1x256x512) [131072, bytes: 524288]
    generator/G_MODEL/C/r2/LayerNorm/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/LayerNorm/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/r2/w:0 (float32_ref 3x3x512x1) [4608, bytes: 18432]
    generator/G_MODEL/C/r2/r2/bias:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/1/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/1/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/Conv_1/weights:0 (float32_ref 1x1x512x256) [131072, bytes: 524288]
    generator/G_MODEL/C/r2/2/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r2/2/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r3/Conv/weights:0 (float32_ref 1x1x256x512) [131072, bytes: 524288]
    generator/G_MODEL/C/r3/LayerNorm/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/LayerNorm/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/r3/w:0 (float32_ref 3x3x512x1) [4608, bytes: 18432]
    generator/G_MODEL/C/r3/r3/bias:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/1/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/1/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/Conv_1/weights:0 (float32_ref 1x1x512x256) [131072, bytes: 524288]
    generator/G_MODEL/C/r3/2/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r3/2/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r4/Conv/weights:0 (float32_ref 1x1x256x512) [131072, bytes: 524288]
    generator/G_MODEL/C/r4/LayerNorm/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/LayerNorm/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/r4/w:0 (float32_ref 3x3x512x1) [4608, bytes: 18432]
    generator/G_MODEL/C/r4/r4/bias:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/1/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/1/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/Conv_1/weights:0 (float32_ref 1x1x512x256) [131072, bytes: 524288]
    generator/G_MODEL/C/r4/2/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r4/2/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/Conv_1/weights:0 (float32_ref 3x3x256x128) [294912, bytes: 1179648]
    generator/G_MODEL/C/LayerNorm_1/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/C/LayerNorm_1/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/D/Conv/weights:0 (float32_ref 3x3x128x128) [147456, bytes: 589824]
    generator/G_MODEL/D/LayerNorm/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/D/LayerNorm/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/D/Conv_1/weights:0 (float32_ref 3x3x128x128) [147456, bytes: 589824]
    generator/G_MODEL/D/LayerNorm_1/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/D/LayerNorm_1/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/E/Conv/weights:0 (float32_ref 3x3x128x64) [73728, bytes: 294912]
    generator/G_MODEL/E/LayerNorm/beta:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/E/LayerNorm/gamma:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/E/Conv_1/weights:0 (float32_ref 3x3x64x64) [36864, bytes: 147456]
    generator/G_MODEL/E/LayerNorm_1/beta:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/E/LayerNorm_1/gamma:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/E/Conv_2/weights:0 (float32_ref 7x7x64x32) [100352, bytes: 401408]
    generator/G_MODEL/E/LayerNorm_2/beta:0 (float32_ref 32) [32, bytes: 128]
    generator/G_MODEL/E/LayerNorm_2/gamma:0 (float32_ref 32) [32, bytes: 128]
    generator/G_MODEL/out_layer/Conv/weights:0 (float32_ref 1x1x32x3) [96, bytes: 384]
    Total size of variables: 2143552
    Total bytes of variables: 8574208
     [*] Reading checkpoints...
     [*] Failed to find a checkpoint
     [!] Load failed...
    Epoch:   0 Step:     0 /   554  time: 80.342747 s init_v_loss: 592.22143555  mean_v_loss: 592.22143555
    
  • Can I train a model by using multiple GPUs?

    Thank you for your awesome project. I think training with multiple GPUs would make things more efficient. Hope to get some advice from you. Thanks.

  • typo on the folders

    Hello author! Your folder naming has a typo. It's kind of annoying to rename it again and again, because I'm using Google Colab and tried to use the Shinkai style.

  • issue saving checkpoints of model

    hello! when I try to train the model, I get the following error when the code tries to save the checkpoint:

    Traceback (most recent call last):
      File "main.py", line 115, in <module>
        main()
      File "main.py", line 107, in main
        gan.train()
      File "/content/drive/.shortcut-targets-by-id/1X8hfrOWE2KxmaJG4LFKH9ydVQ4BA7oyZ/cs7643-final-project/AnimeGANv2.py", line 302, in train
        self.save(self.checkpoint_dir, epoch)
      File "/content/drive/.shortcut-targets-by-id/1X8hfrOWE2KxmaJG4LFKH9ydVQ4BA7oyZ/cs7643-final-project/AnimeGANv2.py", line 341, in save
        self.saver.save(self.sess, os.path.join(checkpoint_dir, self.model_name + '.model'), global_step=step)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/training/saver.py", line 1186, in save
        save_relative_paths=self._save_relative_paths)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/training/checkpoint_management.py", line 231, in update_checkpoint_state_internal
        last_preserved_timestamp=last_preserved_timestamp)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/training/checkpoint_management.py", line 110, in generate_checkpoint_state_proto
        model_checkpoint_path = os.path.relpath(model_checkpoint_path, save_dir)
      File "/usr/lib/python3.7/posixpath.py", line 475, in relpath
        start_list = [x for x in abspath(start).split(sep) if x]
      File "/usr/lib/python3.7/posixpath.py", line 383, in abspath
        cwd = os.getcwd()
    FileNotFoundError: [Errno 2] No such file or directory
    

    I mounted my Google Drive into Colab and am using Colab to train the model. When I check my checkpoint folder, I have two files there, but it appears that I am missing the checkpoint binary file and the .meta file. Any idea why this could be happening?

  • Cannot understand rgb2yuv function code

    def rgb2yuv(rgb):
        """
        Convert RGB image into YUV https://en.wikipedia.org/wiki/YUV
        """
        rgb = (rgb + 1.0)/2.0
        return tf.image.rgb_to_yuv(rgb)
    

    tf.image.rgb_to_yuv(rgb) already does the rgb_to_yuv op, so I can't understand what this line of code means: "rgb = (rgb + 1.0)/2.0"
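
    A likely explanation (an assumption based on the [-1, 1] range of a tanh generator output): tf.image.rgb_to_yuv is only well defined for inputs in [0, 1], so the line rescales first.

        import numpy as np

        x = np.array([-1.0, 0.0, 1.0])  # typical tanh output range of a GAN generator
        print((x + 1.0) / 2.0)          # [0.  0.5 1. ] -- the [0, 1] range rgb_to_yuv expects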

  • What contributed to the better performance of the model compared to your earlier model?

    Hi, thanks for sharing your work. What in your opinion was the key to achieving better performance compared to your earlier model (v1) and/or other models?

    I've roughly seen the code of this repo but I can't figure it out.

  • How to use 512 x 512 or higher-definition pictures for training

    I want to use 512 x 512 or higher resolution images. My plan is as follows:

    1. ffmpeg extracts 1080 x 1080 pictures, then scale them to 512 x 512
    2. python edge_smooth.py --dataset xxxx --img_size 512
    3. python train.py --dataset xxxx --epoch 101 --init_epoch 10

    But I see that the pictures in train_photo under the dataset will also be used for training, so do the pictures in train_photo also need to be updated to 512 x 512?
  • Could you share how you got the improvements that you mentioned in the README?

    Hi, could you share how you got these three improvements that you mentioned in the README?


    1. Solve the problem of high-frequency artifacts in the generated image.

    2. It is easy to train and directly achieve the effects in the paper.

    3. Further reduce the number of parameters of the generator network (generator size: 8.17 MB); the lite version has an even smaller generator model.


  • How to train a face model?

    Is the training method the same as for training on landscape photos, with only the dataset differing? As long as the human face data is aligned, plus the anime face data is aligned too, is that right?

  • Exception has occurred: OperatorNotAllowedInGraphError

    An OperatorNotAllowedInGraphError is thrown when I train AnimeGAN with 'wgan-gp' chosen as the training GAN. The exception trace log is as follows:

    ['', '/usr/lib/python37.zip', '/usr/lib/python3.7', '/usr/lib/python3.7/lib-dynload', '/usr/local/lib/python3.7/dist-packages', '/usr/lib/python3/dist-packages', '/hy-tmp/AnimeGANv2-master/tools/..']
    init test
    WARNING:tensorflow:From main.py:111: The name tf.GPUOptions is deprecated. Please use tf.compat.v1.GPUOptions instead.
    
    WARNING:tensorflow:From main.py:112: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
    
    WARNING:tensorflow:From main.py:112: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.
    
    2022-03-24 14:53:45.629430: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
    2022-03-24 14:53:45.660248: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2400000000 Hz
    2022-03-24 14:53:45.660960: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55c658ac42d0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
    2022-03-24 14:53:45.660998: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
    2022-03-24 14:53:45.665129: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
    2022-03-24 14:53:45.934579: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55c658b497a0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
    2022-03-24 14:53:45.934642: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Tesla V100-SXM2-16GB, Compute Capability 7.0
    2022-03-24 14:53:45.935606: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Found device 0 with properties: 
    name: Tesla V100-SXM2-16GB major: 7 minor: 0 memoryClockRate(GHz): 1.53
    pciBusID: 0000:8d:00.0
    2022-03-24 14:53:45.936059: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
    2022-03-24 14:53:45.938338: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
    2022-03-24 14:53:45.940088: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
    2022-03-24 14:53:45.940606: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
    2022-03-24 14:53:45.944158: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
    2022-03-24 14:53:45.946792: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
    2022-03-24 14:53:45.951124: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
    2022-03-24 14:53:45.952244: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1767] Adding visible gpu devices: 0
    2022-03-24 14:53:45.952294: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
    2022-03-24 14:53:45.957530: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1180] Device interconnect StreamExecutor with strength 1 edge matrix:
    2022-03-24 14:53:45.957555: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1186]      0 
    2022-03-24 14:53:45.957564: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1199] 0:   N 
    2022-03-24 14:53:45.958626: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1325] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15061 MB memory) -> physical GPU (device: 0, name: Tesla V100-SXM2-16GB, pci bus id: 0000:8d:00.0, compute capability: 7.0)
    WARNING:tensorflow:From /hy-tmp/AnimeGANv2-master/AnimeGANv2.py:55: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
    
    npy file loaded -------  vgg19_weight/vgg19.npy
    
    ##### Information #####
    # gan type :  wgan-gp
    # light :  False
    # dataset :  Paprika
    # max dataset number :  6656
    # batch_size :  20
    # epoch :  500
    # init_epoch :  0
    # training image size [H, W] :  [256, 256]
    # g_adv_weight,d_adv_weight,con_weight,sty_weight,color_weight,tv_weight :  300.0 300.0 2.0 0.6 50.0 0.1
    # init_lr,g_lr,d_lr :  0.0002 8e-05 0.00016
    # training_rate G -- D: 1 : 1
    
    WARNING:tensorflow:From /hy-tmp/AnimeGANv2-master/AnimeGANv2.py:96: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.
    
    WARNING:tensorflow:
    The TensorFlow contrib module will not be included in TensorFlow 2.0.
    For more information, please see:
      * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
      * https://github.com/tensorflow/addons
      * https://github.com/tensorflow/io (for I/O related ops)
    If you depend on functionality not listed there, please file an issue.
    
    WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow_core/contrib/layers/python/layers/layers.py:1057: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
    Instructions for updating:
    Please use `layer.__call__` method instead.
    WARNING:tensorflow:From /hy-tmp/AnimeGANv2-master/net/generator.py:41: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead.
    
    WARNING:tensorflow:From /hy-tmp/AnimeGANv2-master/net/generator.py:58: The name tf.image.resize_images is deprecated. Please use tf.image.resize instead.
    
    WARNING:tensorflow:From /hy-tmp/AnimeGANv2-master/AnimeGANv2.py:122: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
    
    Traceback (most recent call last):
      File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/root/.vscode-server/extensions/ms-python.python-2022.2.1924087327/pythonFiles/lib/python/debugpy/__main__.py", line 45, in <module>
        cli.main()
      File "/root/.vscode-server/extensions/ms-python.python-2022.2.1924087327/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 444, in main
        run()
      File "/root/.vscode-server/extensions/ms-python.python-2022.2.1924087327/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 285, in run_file
        runpy.run_path(target_as_str, run_name=compat.force_str("__main__"))
      File "/usr/lib/python3.7/runpy.py", line 263, in run_path
        pkg_name=pkg_name, script_name=fname)
      File "/usr/lib/python3.7/runpy.py", line 96, in _run_module_code
        mod_name, mod_spec, pkg_name, script_name)
      File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "main.py", line 131, in <module>
        main()
      File "main.py", line 117, in main
        gan.build_model()
      File "/hy-tmp/AnimeGANv2-master/AnimeGANv2.py", line 156, in build_model
        GP = self.gradient_panalty(real=self.anime, fake=self.generated)
      File "/hy-tmp/AnimeGANv2-master/AnimeGANv2.py", line 125, in gradient_panalty
        logit, _= self.discriminator(interpolated, reuse=True, scope=scope)
      File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/framework/ops.py", line 547, in __iter__
        self._disallow_iteration()
      File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/framework/ops.py", line 543, in _disallow_iteration
        self._disallow_in_graph_mode("iterating over `tf.Tensor`")
      File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/framework/ops.py", line 523, in _disallow_in_graph_mode
        " this function with @tf.function.".format(task))
    tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
    

    I'm not familiar with TensorFlow 1.x; can anybody help fix this issue?
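
    The traceback points at the tuple-unpacking of the discriminator's return value inside gradient_panalty(): iterating over a single tf.Tensor in graph mode raises exactly this error. A hedged guess at the fix, shown as comments (assumption: the discriminator returns only the logit here):

        # In AnimeGANv2.py, gradient_panalty() -- suspected fix, not verified:
        #   logit, _ = self.discriminator(interpolated, reuse=True, scope=scope)  # raises
        #   logit = self.discriminator(interpolated, reuse=True, scope=scope)     # likely fix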
