@simonsan
Last active April 17, 2020 22:54
Upscaling Information and Workflow
From 2c6dbb25bb8bbd57c8e14fe3e9f6abfbd09a6545 Mon Sep 17 00:00:00 2001
From: heinezen <heinezen@hotmail.de>
Date: Fri, 17 Apr 2020 04:07:07 +0200
Subject: [PATCH] output single frames
---
openage/convert/texture.py | 24 ++++++++++++++----------
1 file changed, 14 insertions(+), 10 deletions(-)
diff --git a/openage/convert/texture.py b/openage/convert/texture.py
index 93d05c336..d69c87c11 100644
--- a/openage/convert/texture.py
+++ b/openage/convert/texture.py
@@ -144,8 +144,7 @@ def __init__(self, input_data, main_palette=None,
raise Exception("cannot create Texture "
"from unknown source type: %s" % (type(input_data)))
- self.image_data, (self.width, self.height), self.image_metadata\
- = merge_frames(frames)
+ self.image_data = frames
def _slp_to_subtextures(self, frame, main_palette, player_palette=None,
custom_cutter=None):
@@ -209,15 +208,20 @@ def save(self, targetdir, filename, meta_formats=None):
# without the dot
ext = ext[1:]
- # generate PNG file
- with targetdir[filename].open("wb") as imagefile:
- self.image_data.get_pil_image().save(imagefile, ext)
+ index = 0
+ for frame in self.image_data:
+ output_name = "%s_%s.%s" % (basename, str(index), ext)
+ # generate PNG file
+ with targetdir[output_name].open("wb") as imagefile:
+ frame.get_pil_image().save(imagefile, ext)
- if meta_formats:
- # generate formatted texture metadata
- formatter = data_formatter.DataFormatter()
- formatter.add_data(self.dump(basename))
- formatter.export(targetdir, meta_formats)
+ if meta_formats:
+ # generate formatted texture metadata
+ formatter = data_formatter.DataFormatter()
+ formatter.add_data(self.dump(basename))
+ formatter.export(targetdir, meta_formats)
+
+ index += 1
def dump(self, filename):
return [data_definition.DataDefinition(self,

OK, so let me describe my current workflow for others who want to replicate it:

  • I use ESRGAN (with CUDA, i.e. running on the graphics card) in combination with the Image Enhancing Utility (IEU, a GUI tool for Windows):

  • The models come from this database: https://upscale.wiki/wiki/Model_Database

  • My current workflow (updated correspondingly):

    • Put images you want to scale into the ESRGAN input folder
    • Open IEU -> Settings tab
      • Check that your folder structure is set up correctly (at the top)
      • For the moment, the only setting I changed is the max tile WxH (set it according to the max VRAM of your graphics card)
      • Image Preprocess -> Reduce Noise (Enhance)
    • IEU -> Basic tab
      • Output mode: Folder for each image
      • Profile Global (for now to figure out different settings)
      • Check the models you want to use
      • Click SPLIT-ESRGAN-MERGE

What will happen is:

  • The images will be separated from their alpha channels
  • Tiles will be created as the algorithm runs over them
  • The images will be put into an output folder named after their filename, and the upscaled images inside are named after the model that was used
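Given that "folder for each image" layout, you can locate any result programmatically. A small sketch of the path convention (the function name is my own, not IEU's):

```python
import os


def result_path(output_root, image_name, model_name, ext="png"):
    """IEU's per-image output mode: <output_root>/<image name>/<model name>.<ext>."""
    return os.path.join(output_root, image_name, "%s.%s" % (model_name, ext))
```

For example, the 4x_Fatality_01_265000_G result for an image called archer would sit at output/archer/4x_Fatality_01_265000_G.png.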

This is pretty cool for the moment, as you can separate the good models from the bad ones while looking through the folders. Furthermore, you can see directly which models you could combine, or which images you can use for interpolation.

Now comes the hard part. Look through the folders and check which images could be combined to create a better-looking version than either of the single images. If you find one perfect image (which rarely happens), you're lucky.

Open the IEU window again and click on the image interpolation tab:

  • I move the image with fewer details/less sharpness but overall better-looking surfaces etc. to the left (take a look at this, it will become important later) (4x_Fatality_01_265000_G.pth, 4x_deviantPixelHD_250000.pth, 4x_DigiPaint35000.pth are often a good choice here)

  • I move the sharper image/the image with greater detail to the right (mostly the output of RRDB_ESRGAN_x4_old_arch.pth, 4x_Manga109Attempt.pth, 4x_falcoon300.pth or 4x_4xBox.pth)

  • Then I set the slider to values starting at 25, then 35, 45, 55, 65, up to 70, and interpolate these images (click Interpolate on the right)

    • The slider decides with how much opacity the second image will overlay the first one (keep that in mind while playing around)
  • After every new interpolated image, I look at the output and check whether more work needs to be done

  • Sort the good images you find into a global GOOD folder and share them here with the settings (model name(s), scale, interpolation value, etc.) so people are able to recreate what you've done and maybe even improve on your work
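The slider behaviour described above is a plain opacity blend: at slider value s, each output pixel is (1 - s/100) * left + (s/100) * right. A pure-Python sketch over flat per-channel pixel values (names are my own, for illustration only):

```python
def interpolate(left_pixels, right_pixels, slider):
    """Blend two images pixel-wise; slider (0-100) is the opacity
    of the right image laid over the left one."""
    alpha = slider / 100.0
    return [round((1 - alpha) * left + alpha * right)
            for left, right in zip(left_pixels, right_pixels)]
```

At slider 0 you get the left image unchanged; at 100, the right one — which is why the ordering of the two images matters.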

Next step:

  • start working with animations and let the openage converter export single frames into a folder instead of spritesheets
    • Scroll down for the patch
    • Download it and apply it to the openage/convert/texture.py file