
WIP: Automatically set tiles according to number of CPUs. #1370

Open

tdaede wants to merge 1 commit into xiph:master from tdaede:auto_tiles
Conversation

tdaede
Collaborator

@tdaede tdaede commented Jun 20, 2019

Limit to 8 to keep quality loss acceptable.

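A minimal sketch of the heuristic this PR describes: derive the default tile count from the number of detected CPUs, capped at 8 to keep quality loss acceptable. The function name and the power-of-two rounding are illustrative assumptions, not rav1e's actual API.

```rust
// Hypothetical sketch of the auto-tiling heuristic (illustrative names,
// not rav1e's real implementation).
fn auto_tile_count(num_cpus: usize) -> usize {
    // Cap at 8 to keep quality loss acceptable, floor at 1.
    let capped = num_cpus.clamp(1, 8);
    // AV1 tile splits come in powers of two, so round down to one.
    let mut tiles = 1;
    while tiles * 2 <= capped {
        tiles *= 2;
    }
    tiles
}

fn main() {
    assert_eq!(auto_tile_count(1), 1);
    assert_eq!(auto_tile_count(6), 4);  // rounds down to a power of two
    assert_eq!(auto_tile_count(32), 8); // capped at 8
    println!("ok");
}
```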
@coveralls
Collaborator

Coverage Status

Coverage increased (+0.08%) to 82.782% when pulling 7caa2b0 on tdaede:auto_tiles into eae72f9 on xiph:master.

@rzumer
Collaborator

rzumer commented Jun 20, 2019

No resolution/speed guards?

@tdaede
Collaborator Author

tdaede commented Jun 20, 2019

Not yet, I'm going to do some AWCY runs as I have no idea what reasonable guards are tbh.

@tdaede tdaede changed the title Automatically set tiles according to number of CPUs. WIP: Automatically set tiles according to number of CPUs. Jun 20, 2019
@dwbuiten
Collaborator

👍 Waiting to see what AWCY shows. I can't imagine 8 tiles on a 240p stream is great.

@tdaede
Collaborator Author

tdaede commented Jun 20, 2019

Looking at AWCY results, a default constraint of tiles no smaller than 512x512 seems like a good middle ground.
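A minimal sketch of the guard proposed here, assuming the 512x512 floor is applied independently to tile columns and rows; the function names are hypothetical, not rav1e's actual API.

```rust
// Hypothetical sketch: clamp the requested tile count so no tile
// would end up smaller than 512x512 (illustrative names only).
fn max_tiles_for(width: usize, height: usize) -> usize {
    let cols = (width / 512).max(1);
    let rows = (height / 512).max(1);
    cols * rows
}

fn clamp_tiles(requested: usize, width: usize, height: usize) -> usize {
    requested.min(max_tiles_for(width, height)).max(1)
}

fn main() {
    // On a 240p stream the guard forces a single tile...
    assert_eq!(clamp_tiles(8, 426, 240), 1);
    // ...while 1080p still allows several.
    assert_eq!(clamp_tiles(8, 1920, 1080), 6);
    println!("ok");
}
```

Under this sketch, the 240p case raised earlier in the thread would collapse to a single tile automatically.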

@gnafuthegreat

I feel like a lot of people would expect the default behavior to not use tiles at all, and it might be better to have this as '--auto-tiles' or something. Since it's at least a 2-3% BD-rate hit (and more on a lot of content), it seems like something you'd want off by default.

@dwbuiten
Collaborator

Is it really that high?

@davidscotson

Seems fairly consistent across the test clips in that last link that the difference is greatest at lower bits-per-pixel and disappears as it rises.

The BD-rate reports break out averages by resolution (and you might be able to see resolution-dependent effects just by comparing the different file rows), but this effect seems the same regardless of resolution. How would it look if things were broken down by that metric instead?

@tdaede
Collaborator Author

tdaede commented Jun 22, 2019

It quickly becomes resolution dependent at greater tile counts: https://beta.arewecompressedyet.com/?job=4tiles-343461dcbb671b601da34b3477532fa1ef2edf83%402019-06-20T23%3A09%3A03.664Z&job=master-343461dcbb671b601da34b3477532fa1ef2edf83

I want to treat this sort of like a speed feature: e.g. rather than doubling encode time by going from 2 tiles to 1, there might be a better speed feature to enable instead.

@davidscotson

Possibly I'm misreading the graphs, but I still don't see an obvious dependency on resolution even for 8 tiles. It gets worse from 360p to 720p, then better again as resolution increases to 1080p on the test set for 2,4,8 tiles (compared with one tile or a smaller number of tiles) rather than consistently worse at lower resolutions.

A more consistent dependency is on quality. Whether you'd want this on by default seems to depend more on the VMAF/quality you're targeting than on the resolution of the video: at higher qualities the impact is greatly reduced, so judging by the average is perhaps misleading if you're targeting either a high or a low quality. (Presumably the first few bits used are heavily affected by the split in some way, but as more and more bits are spent to increase quality it affects them less, and that initial hit gets amortized across the higher bitrate as a mostly fixed cost.)

@EwoutH
Contributor

EwoutH commented Nov 10, 2019

@tdaede I would like to run some performance and bdr tests, could you resolve the conflicts and rebase this branch?

@vibhoothi
Collaborator

@tdaede can we have this in the 0.4.0 release? Is it a good idea?

@barrbrain barrbrain added the WorkInProgress Incomplete patchset label Dec 19, 2023