AWS Batch Video Super-Resolution powered by the Intel® Library for Video Super Resolution

Introduction

Implementing super-resolution based on the enhanced RAISR algorithm with Intel AVX-512 requires specific EC2 instance types, such as c5.2xlarge, c6i.2xlarge, and c7i.2xlarge. Rather than managing the underlying infrastructure ourselves, including starting and stopping EC2 instances, we use AWS Batch to run compute jobs and automate the entire pipeline. AWS customers interested in the benefits of the enhanced RAISR algorithm for super-resolution can therefore keep focusing on their ABR transcoding pipeline and adapt their existing workflow to use AWS Batch as a preprocessing stage.

The process has three steps. The first is to create a compute environment in AWS Batch, where CPU requirements are defined, including the allowed EC2 instance types. The second is to define a job queue associated with that compute environment; each job submitted to this queue is executed on the previously defined EC2 instances. The third is to define a job. At this point, a Docker image must be registered in the Amazon Elastic Container Registry (ECR); building a custom Docker image is further detailed in the section Extend the solution. The image build installs the Intel® Library for Video Super Resolution, the open-source ffmpeg tool, and the AWS CLI to perform API calls against S3 buckets. Once the job is properly defined (image registered in ECR), jobs can be submitted to the queue.
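The three setup steps above can be sketched with the AWS CLI. The resource names, subnet, security-group IDs, and image URI below are illustrative placeholders, not values created by this project; the CloudFormation template described later performs the equivalent setup automatically. The script echoes each command (a dry run) rather than executing it:

```shell
# Dry run: print each AWS CLI command instead of executing it.
run() { echo "$@"; }

# Step 1: a managed compute environment restricted to the supported instance types.
# Subnet, security group, and role names are placeholders.
run aws batch create-compute-environment \
  --compute-environment-name vsr-compute-env \
  --type MANAGED \
  --compute-resources '{"type":"EC2","minvCpus":0,"maxvCpus":16,"instanceTypes":["c5.2xlarge","c6i.2xlarge","c7i.2xlarge"],"subnets":["subnet-xxxx"],"securityGroupIds":["sg-xxxx"],"instanceRole":"ecsInstanceRole"}'

# Step 2: a job queue bound to that compute environment.
run aws batch create-job-queue \
  --job-queue-name queue-vsr \
  --priority 1 \
  --compute-environment-order order=1,computeEnvironment=vsr-compute-env

# Step 3: a job definition pointing at the VSR image registered in ECR.
run aws batch register-job-definition \
  --job-definition-name vsr-job \
  --type container \
  --container-properties '{"image":"<account-id>.dkr.ecr.<region>.amazonaws.com/vsr:latest","vcpus":8,"memory":16384}'
```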

Disclaimer And Data Privacy Notice

When you deploy this solution, scripts download packages with different licenses from various sources. These sources are not controlled by the developer of this script. Additionally, the script can produce a non-free and non-redistributable binary. By deploying and using this solution, you acknowledge this.

Architecture

[Architecture diagram]

Deploy using cloudformation

The steps to deploy the proposed solution are described below:

  1. Download template.yml
  2. Go to CloudFormation from AWS Console to create a new stack using template.yml
  3. The template allows you to define the following parameters:
    • Memory: memory associated with the job definition. This value can be overridden when jobs are submitted
    • Subnet: AWS Batch deploys the supported EC2 instance types (c5.2xlarge, c6i.2xlarge, and c7i.2xlarge) into the selected customer subnet, which must have Internet access
    • VPCName: existing VPC with which the selected Subnet is associated
    • VSRImage: this field uses an existing public image, but customers can create their own image; instructions are in the section Extend the solution
    • VCPU: vCPUs associated with the job definition. This value can be overridden when jobs are submitted
  4. After deploying, verify that two S3 buckets have been created; their names start with vsr-input and vsr-output
  5. Upload an SD video file to the vsr-input-xxxx-{region-name} bucket
  6. Go to Batch in the AWS console and validate that a new queue (queue-vsr) and compute environment (VideoSuperResolution) have been created
  7. Under Jobs (left-side menu), click "Submit new job", selecting the proper job definition (vsr-jobDefiniton-xxxx) and queue (queue-vsr)
  8. On the next screen, click "Load from job definition" and modify the names of the input and output files
  9. Review and submit the job, then wait until its status transitions to Runnable and finally Succeeded
  10. Go to the output S3 bucket (vsr-output-xxxx-{region-name}) to validate that a super-resolution file has been created and uploaded to S3 automatically
  11. Compare side-by-side subjective visual quality using the open-source tool compare-video
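Steps 5 through 10 can also be performed from the AWS CLI instead of the console. The bucket suffix (xxxx), region, and file names below are placeholders taken from the steps above; replace them with the values your stack actually created. The script echoes each command (a dry run) rather than executing it:

```shell
# Placeholders: adjust region and the random bucket suffix to your deployment.
REGION="us-east-1"
INPUT_BUCKET="vsr-input-xxxx-${REGION}"
OUTPUT_BUCKET="vsr-output-xxxx-${REGION}"

# Dry run: print each command instead of executing it.
run() { echo "$@"; }

# Step 5: upload an SD source file to the input bucket.
run aws s3 cp input_sd.mp4 "s3://${INPUT_BUCKET}/input_sd.mp4"

# Steps 7-9: submit a job to queue-vsr using the stack's job definition.
run aws batch submit-job \
  --job-name vsr-demo \
  --job-queue queue-vsr \
  --job-definition vsr-jobDefiniton-xxxx

# Step 10: list the output bucket to confirm the upscaled file was uploaded.
run aws s3 ls "s3://${OUTPUT_BUCKET}/"
```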

Extend the solution

During deployment with the CloudFormation template, a parameter (VSRImage) is requested. You can use the default value or create your own Docker image using the Intel® Library for Video Super Resolution project as a baseline. You can also adjust the ffmpeg libraries (e.g., adding the x264, x265, or jpeg-xs libraries). This implementation also includes the AWS CLI with S3 read/write capabilities. All of these changes are detailed in Dockerfile.2204 (https://github.com/aws-samples/video-super-resolution-tool/blob/main/container/Dockerfile.2204).

Prerequisites

  • An Ubuntu 22.04 machine to build the Docker image
  • Docker already installed on the Ubuntu 22.04 machine
  • An AWS ECR repository already created. Instructions can be found here

Building a custom Docker image
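A build-and-push sequence might look like the following. It assumes the prerequisites above are met and uses the repository's Dockerfile.2204; the account ID, region, and the ECR repository name "vsr" are placeholders you must replace. The script echoes each command (a dry run) rather than executing it:

```shell
# Placeholders: substitute your own AWS account ID, region, and repository name.
ACCOUNT_ID="<account-id>"
REGION="<region>"
ECR_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/vsr"

# Dry run: print each command instead of executing it.
run() { echo "$@"; }

# Authenticate the local Docker client to ECR.
run "aws ecr get-login-password --region ${REGION} | docker login --username AWS --password-stdin ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"

# Build the image from the Dockerfile shipped in this repository.
run docker build -t vsr:latest -f container/Dockerfile.2204 .

# Tag and push the image so AWS Batch job definitions can reference it.
run docker tag vsr:latest "${ECR_URI}:latest"
run docker push "${ECR_URI}:latest"
```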

Cost

AWS Batch optimizes compute costs: you pay only for the resources you use. Spot Instances leverage unused EC2 capacity for significant savings over On-Demand Instances. Benchmark different instance types and sizes to find the optimal configuration for your workload.

Clean up

To prevent unwanted charges after evaluating this solution, delete the created resources:

  1. Delete all objects in the Amazon S3 buckets used for testing. You can remove these objects from the S3 console by selecting all objects and clicking "Delete"
  2. Delete the AWS CloudFormation stack from the AWS Console
  3. Verify in the AWS console that all resources have been removed. This ensures no resources are accidentally left running, which would lead to unexpected charges.
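The clean-up steps can be sketched with the AWS CLI as well. The bucket names and the stack name "vsr-stack" are placeholders; use the names from your deployment. Note that S3 buckets must be empty before CloudFormation can delete them. The script echoes each command (a dry run) rather than executing it:

```shell
# Placeholder: the name you gave the CloudFormation stack when deploying.
STACK="vsr-stack"

# Dry run: print each command instead of executing it.
run() { echo "$@"; }

# Step 1: empty both S3 buckets (required before the stack can delete them).
run aws s3 rm "s3://vsr-input-xxxx-us-east-1" --recursive
run aws s3 rm "s3://vsr-output-xxxx-us-east-1" --recursive

# Step 2: delete the stack and wait for deletion to finish.
run aws cloudformation delete-stack --stack-name "$STACK"
run aws cloudformation wait stack-delete-complete --stack-name "$STACK"
```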

References

  1. Intel® Library for Video Super Resolution
  2. ffmpeg
  3. Whitepaper short-version
  4. Whitepaper extended-version
  5. AWS batch with ffmpeg

Security

See CONTRIBUTING for more information.

License

This library is licensed under the MIT-0 License. See the LICENSE file.
