enhancr

enhancr is an elegant and easy-to-use GUI for AI-powered video frame interpolation and video upscaling, built with Node.js and Electron. It was created to make enhancing video footage with artificial intelligence accessible to anyone, and it is designed to deliver a polished experience powered by state-of-the-art technologies without feeling clunky and outdated like other alternatives.

gui-preview-image

It features blazing-fast TensorRT inference by NVIDIA, which can speed up AI processing significantly and comes pre-packaged, with no need to install Docker or WSL (Windows Subsystem for Linux). It also ships NCNN inference by Tencent, which is lightweight and runs on NVIDIA, AMD and even Apple Silicon, in contrast to the far heavier PyTorch inference, which only runs on NVIDIA GPUs.
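Both backends are typically driven through VapourSynth scripts. The sketch below is a rough illustration of what a RIFE interpolation call looks like through the vsmlrt Python wrapper; the source filter (L-SMASH-Works), model choice and parameter names are assumptions based on the public vsmlrt.py and may differ between versions, and it is not enhancr's actual pipeline.

```python
# Minimal sketch of TensorRT vs. NCNN inference through vs-mlrt (illustrative only;
# enhancr generates and manages scripts like this for you).
import vapoursynth as vs
from vsmlrt import RIFE, RIFEModel, Backend  # AmusementClub/vs-mlrt Python wrapper

core = vs.core

clip = core.lsmas.LWLibavSource("input.mkv")     # assumes L-SMASH-Works is installed
clip = core.resize.Bicubic(clip, format=vs.RGBS, # RIFE expects float RGB
                           matrix_in_s="709")

# Pick the backend: TensorRT on NVIDIA GPUs, NCNN/Vulkan everywhere else.
backend = Backend.TRT(fp16=True)                 # or: Backend.NCNN_VK(fp16=True)

# 2x interpolation; some RIFE builds expect dimensions padded to a multiple of 32.
clip = RIFE(clip, multi=2, model=RIFEModel.v4_6, backend=backend)

clip = core.resize.Bicubic(clip, format=vs.YUV420P8, matrix_s="709")
clip.set_output()
```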

Features

  • Encodes video on the fly and reads frames straight from the source video, without needing to extract frames or load them into memory (see the sketch after this list)
  • Queue for batch processing
  • Live Preview integrated in the UI, without impact on performance
  • Allows chaining of interpolation, upscaling & restoration
  • Offers the possibility to trim videos before processing
  • Loads custom ESRGAN models in ONNX & PTH format and converts them automatically
  • Built-in Scene Detection to skip interpolation on scene-change frames & mitigate artifacts
  • Color Themes for user customization
  • Discord Rich Presence, to show all your friends your progress, current speed & what you're currently enhancing
  • Realtime Player (assuming you have a powerful enough GPU) with full support for audio, subtitles, fonts, attachments, etc.
  • ... and much more
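The first feature above boils down to streaming frames from a VapourSynth script straight into an encoder instead of dumping image sequences to disk. Below is a minimal sketch of that pattern; the script name, encoder settings and the use of vspipe/ffmpeg are illustrative assumptions, not enhancr's exact command line.

```python
# Hedged sketch of the stream-don't-extract pattern: frames flow from a VapourSynth
# script through a pipe into ffmpeg, so nothing is written to disk in between.
import subprocess

# vspipe serializes the script's output as Y4M on stdout ("-c y4m" on current
# VapourSynth releases, "--y4m" on older ones).
vspipe = subprocess.Popen(
    ["vspipe", "-c", "y4m", "interpolate.vpy", "-"],
    stdout=subprocess.PIPE,
)

# ffmpeg reads the Y4M stream from stdin and encodes it on the fly.
ffmpeg = subprocess.Popen(
    ["ffmpeg", "-y", "-i", "pipe:", "-c:v", "libx264", "-crf", "16", "output.mkv"],
    stdin=vspipe.stdout,
)

vspipe.stdout.close()  # let vspipe receive SIGPIPE if ffmpeg exits early
ffmpeg.wait()
vspipe.wait()
```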

Installation

Release 0.9.9 features a free version 🎉 https://dl.enhancr.app/setup/enhancr-setup-free-0.9.9.exe

To ensure that you have the most recent version of the software and all necessary dependencies, we recommend downloading the installer from Patreon. Please note that builds and an embeddable Python environment for the Pro version are not provided through this repository.

installer

Built-in engines

Interpolation

RIFE (NCNN) - megvii-research/ECCV2022-RIFE - powered by styler00dollar/VapourSynth-RIFE-NCNN-Vulkan

RIFE (TensorRT) - megvii-research/ECCV2022-RIFE - powered by AmusementClub/vs-mlrt & styler00dollar/VSGAN-tensorrt-docker

GMFSS - Union (PyTorch/TensorRT) - 98mxr/GMFSS_Union - powered by HolyWu/vs-gmfss_union

GMFSS - Fortuna (PyTorch/TensorRT) - 98mxr/GMFSS_Fortuna - powered by HolyWu/vs-gmfss_fortuna

CAIN (NCNN) - myungsub/CAIN - powered by mafiosnik/vsynth-cain-NCNN-vulkan (unreleased)

CAIN (DirectML) - myungsub/CAIN - powered by AmusementClub/vs-mlrt

CAIN (TensorRT) - myungsub/CAIN - powered by HubertSotnowski/cain-TensorRT

Upscaling

ShuffleCUGAN (NCNN) - styler00dollar/VSGAN-tensorrt-docker - powered by AmusementClub/vs-mlrt

ShuffleCUGAN (TensorRT) - styler00dollar/VSGAN-tensorrt-docker - powered by AmusementClub/vs-mlrt

RealESRGAN (NCNN) - xinntao/Real-ESRGAN - powered by AmusementClub/vs-mlrt

RealESRGAN (DirectML) - xinntao/Real-ESRGAN - powered by AmusementClub/vs-mlrt

RealESRGAN (TensorRT) - xinntao/Real-ESRGAN - powered by AmusementClub/vs-mlrt

RealCUGAN (TensorRT) - bilibili/ailab/Real-CUGAN - powered by AmusementClub/vs-mlrt

SwinIR (TensorRT) - JingyunLiang/SwinIR - powered by mafiosnik777/SwinIR-TensorRT (unreleased)

Restoration

DPIR (DirectML) - cszn/DPIR - powered by AmusementClub/vs-mlrt

DPIR (TensorRT) - cszn/DPIR - powered by AmusementClub/vs-mlrt

SCUNet (TensorRT) - cszn/SCUNet - powered by mafiosnik777/SCUNet-TensorRT (unreleased)
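As noted in the features list, interpolation, upscaling and restoration can be chained. The sketch below shows one possible order (restore, then upscale, then interpolate) through the vsmlrt wrapper; the models, parameters and ordering are illustrative assumptions rather than enhancr's defaults.

```python
# Illustrative chaining of restoration, upscaling and interpolation via vs-mlrt.
# Model choices and the processing order are assumptions, not enhancr's defaults.
import vapoursynth as vs
from vsmlrt import DPIR, DPIRModel, RealESRGAN, RIFE, Backend

core = vs.core
backend = Backend.TRT(fp16=True)

clip = core.lsmas.LWLibavSource("input.mkv")
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")

clip = DPIR(clip, strength=5, model=DPIRModel.drunet_color, backend=backend)  # restoration
clip = RealESRGAN(clip, backend=backend)                                      # upscaling
clip = RIFE(clip, multi=2, backend=backend)                                   # interpolation

clip = core.resize.Bicubic(clip, format=vs.YUV420P10, matrix_s="709")
clip.set_output()
```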

System Requirements

Minimum:

  • Dual Core CPU with Hyperthreading enabled
  • Vulkan-capable graphics processor for inference with NCNN / DirectX 12-capable graphics processor for inference with DirectML
  • Windows 10

Recommended:

  • Quad Core Intel Kaby Lake/AMD Ryzen or newer with Hyperthreading enabled
  • 16 GB RAM
  • NVIDIA RTX 2000 Series (Turing) or newer for TensorRT
  • Windows 11

Sidenote: Starting with TensorRT 8.6, support for second-generation Kepler and Maxwell (900 Series and below) has been dropped. You will need at least a Pascal GPU (1000 Series or newer), CUDA 12.0 and a driver version >= 525.xx to run inference using TensorRT.
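If you want to check whether your driver meets that requirement, you can query it with nvidia-smi. The minimal sketch below simply compares the major driver version against 525; it assumes nvidia-smi is available on PATH.

```python
# Minimal sketch: check the installed NVIDIA driver against the >= 525.xx
# requirement mentioned above. Assumes nvidia-smi is available on PATH.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,driver_version", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.strip()

name, driver = [field.strip() for field in out.splitlines()[0].split(",")]
major = int(driver.split(".")[0])

print(f"{name}: driver {driver} ->",
      "meets the TensorRT 8.6 requirement" if major >= 525 else "driver too old")
```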

macOS and Linux Support

The GUI was created with cross-platform compatibility in mind and supports both operating systems. Our primary focus at the moment is ensuring a stable and fully functioning solution for Windows users, but support for Linux and macOS will be made available with the 1.0 update.

enhancr-macos

Support for Apple Silicon is planned as well, but I currently only have an Intel MacBook Pro available for testing. I'll get an Apple Silicon instance on Amazon AWS to implement this in time for the 1.0 release.

Benchmarks

Input size: 1920x1080 @ 2x

| Engine / model | RTX 2060S ¹ | RTX 3070 ² | RTX A4000 ³ | RTX 3090 Ti ⁴ | RTX 4090 ⁵ |
| --- | --- | --- | --- | --- | --- |
| RIFE / rife-v4.6 (NCNN) | 53.78 fps | 64.08 fps | 80.56 fps | 86.24 fps | 136.13 fps |
| RIFE / rife-v4.6 (TensorRT) | 70.34 fps | 94.63 fps | 86.47 fps | 122.68 fps | 170.91 fps |
| CAIN / cvp-v6 (NCNN) | 9.42 fps | 10.56 fps | 13.42 fps | 17.36 fps | 44.87 fps |
| CAIN / cvp-v6 (TensorRT) | 45.41 fps | 63.84 fps | 81.23 fps | 112.87 fps | 183.46 fps |
| GMFSS / Up (PyTorch) | - | - | 4.32 fps | - | 16.35 fps |
| GMFSS / Union (PyTorch) | - | - | 3.68 fps | - | 13.93 fps |
| GMFSS / Union (TensorRT) | - | - | 6.79 fps | - | - |
| RealESRGAN / animevideov3 (TensorRT) | 7.64 fps | 9.10 fps | 8.49 fps | 18.66 fps | 38.67 fps |
| RealCUGAN (TensorRT) | - | - | 5.96 fps | - | - |
| SwinIR (PyTorch) | - | - | 0.43 fps | - | - |
| DPIR / Denoise (TensorRT) | 4.38 fps | 6.45 fps | 5.39 fps | 11.64 fps | 27.41 fps |

¹ Ryzen 5 3600X - Gainward RTX 2060 Super @ Stock

² Ryzen 7 3800X - Gigabyte RTX 3070 Eagle OC @ Stock

³ Ryzen 5 3600X - PNY RTX A4000 @ Stock

⁴ i9 12900KF - ASUS RTX 3090 Ti Strix OC @ ~2220 MHz

⁵ Ryzen 9 5950X - ASUS RTX 4090 Strix OC @ ~3100 MHz with a tuned curve for maximum performance

Troubleshooting and FAQ (Frequently Asked Questions)

This section has moved to the wiki: https://github.com/mafiosnik777/enhancr/wiki

Check it out to learn more about getting the most out of enhancr or how to fix various problems.

Inferences

TensorRT is a highly optimized AI inference runtime for NVIDIA GPUs. It benchmarks candidate kernels to find the optimal ones for your specific GPU, which adds an extra step: an engine has to be built on the machine that will run the inference. The resulting performance, however, is typically much better than any PyTorch or NCNN implementation.
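The engine-build step described above is usually a one-time conversion of an ONNX model into a serialized TensorRT engine. Here is a minimal sketch of that step using NVIDIA's trtexec tool; the file names are placeholders, and the sketch only illustrates what the step involves.

```python
# Hedged sketch of the one-time engine-build step, using NVIDIA's trtexec tool.
# The file names are placeholders chosen for illustration.
import subprocess

subprocess.run(
    [
        "trtexec",
        "--onnx=model.onnx",          # the network exported to ONNX
        "--saveEngine=model.engine",  # serialized engine, tuned for this exact GPU
        "--fp16",                     # allow half-precision kernels where faster
    ],
    check=True,
)
# The resulting engine is only valid for the GPU (and TensorRT version) it was
# built on, which is why the build has to happen on the target machine.
```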

NCNN is a high-performance neural network inference computing framework optimized for mobile platforms. NCNN does not have any third party dependencies. It is cross-platform, and runs faster than all known open source frameworks on most major platforms. It supports NVIDIA, AMD, Intel Graphics and even Apple Silicon. NCNN is currently being used in many Tencent applications, such as QQ, Qzone, WeChat, Pitu and so on.

Supporting this project

I would be grateful if you could show your support for this project by contributing on Patreon or through a donation on PayPal. Your support will help to accelerate development and bring more updates to the project. Additionally, if you have the skills, you can also contribute by opening a pull request. Regardless of the form of support you choose to give, know that it is greatly appreciated.

Plans for the future

I am continuously working to improve the codebase, including addressing any inconsistencies that may have arisen due to time constraints. Regular updates will be released, including new features, bug fixes, and the incorporation of new technologies and models as they become available. Thank you for your understanding and support.

Credits

Our player depends on mpv and ModernX for the OSC.

Thanks to HubertSotnowski and styler00dollar for helping out with implementing CAIN.

Join the Discord

To interact with the community, share your results, or get help with any problems you run into, visit our Discord. Previews of upcoming versions will be showcased there as well.