Postgres: use pg_try_advisory_lock instead of pg_advisory_lock #962
base: master
Conversation
left a comment in case we decide to go this route
database/postgres/postgres.go (Outdated)

```go
		break
	}

	time.Sleep(100 * time.Millisecond)
```
Use the backoff library with exponential backoff + jitter as the default. Note, this may cause other nodes/hosts to take longer to deploy due to the longer wait period, so the backoff should be configurable.
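For illustration, a minimal sketch of that approach using `github.com/cenkalti/backoff/v4` (the lock key, intervals, and function name here are assumptions for the sketch, not this PR's actual code):

```go
package pglock

import (
	"database/sql"
	"errors"
	"time"

	"github.com/cenkalti/backoff/v4"
)

// tryLock polls pg_try_advisory_lock with exponential backoff and jitter.
// All intervals below are illustrative defaults and should be configurable.
func tryLock(db *sql.DB, key int64) error {
	b := backoff.NewExponentialBackOff()
	b.InitialInterval = 50 * time.Millisecond // first retry delay
	b.MaxInterval = time.Second               // cap on any single delay
	b.MaxElapsedTime = 15 * time.Second       // give up entirely after this
	// RandomizationFactor defaults to 0.5, which supplies the jitter.

	return backoff.Retry(func() error {
		var acquired bool
		err := db.QueryRow("SELECT pg_try_advisory_lock($1)", key).Scan(&acquired)
		if err != nil {
			return backoff.Permanent(err) // query failed: stop retrying
		}
		if !acquired {
			return errors.New("advisory lock is held elsewhere") // retry
		}
		return nil
	}, b)
}
```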
@dhui thanks for the feedback, I've added a configurable exponential retry.

> this may cause other nodes/hosts to take longer to deploy due to the longer wait period

I've added defaults and kept them quite small to address the above. Unfortunately the tests keep failing because the device is running out of space.
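Roughly, the shape of the change (a sketch only; the config and function names here are assumed, apart from the retry intervals being configurable):

```go
package pglock

import (
	"database/sql"
	"time"
)

// LockConfig sketches the configurable retry settings; the actual
// field names in the PR may differ.
type LockConfig struct {
	InitialRetryInterval time.Duration // small default, e.g. 50ms
	MaxRetryInterval     time.Duration // settable via x-lock-retry-max-interval
}

// lock polls pg_try_advisory_lock, doubling the wait between attempts up
// to MaxRetryInterval. It still waits indefinitely for the lock, as
// pg_advisory_lock would, but each attempt returns immediately.
func lock(db *sql.DB, key int64, cfg LockConfig) error {
	interval := cfg.InitialRetryInterval
	for {
		var acquired bool
		if err := db.QueryRow("SELECT pg_try_advisory_lock($1)", key).Scan(&acquired); err != nil {
			return err
		}
		if acquired {
			return nil
		}
		time.Sleep(interval)
		interval *= 2 // exponential backoff
		if interval > cfg.MaxRetryInterval {
			interval = cfg.MaxRetryInterval
		}
	}
}
```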
I assume the tests were temporarily broken, so this might work after a rebase? I'm interested in this fix landing 😊
@eelco thanks for noting this. It looks like #1072 fixed the tests, and the pipeline here in the PR is now working and passing. Unfortunately this PR was stuck in sleep mode for quite a while due to the unstable pipeline; I hope we can merge it soon :)
Is there anything we can do to help get this PR unblocked and merged?
@simonjpartridge all tests are passing now, so just approvals are needed I guess :)
Is anyone among the recent participants in this PR willing to give it a review now that the pipeline is stable, and approve if it looks good, so we can get this merged and the issue closed? @dhui @eelco @simonjpartridge
Looks good overall; one comment that IMO should be actioned, around the setting of the max retry interval.
```go
if maxRetryInterval > DefaultLockInitialRetryInterval {
	lockConfig.MaxRetryInterval = maxRetryInterval
}
```
This if statement means a user can't set the max retry interval to less than the default. I can't see a good reason to limit it this way, so why not do this?
```diff
-if maxRetryInterval > DefaultLockInitialRetryInterval {
+if maxRetryInterval >= 0 {
 	lockConfig.MaxRetryInterval = maxRetryInterval
 }
```
Thanks for the suggestion. The retry strategy here is exponential backoff: if we end up retrying, each later retry should wait longer than the previous one. If we allowed the user to set `maxRetryInterval` to less than `InitialRetryInterval`, the retry interval would never increase, which defeats the idea of exponential backoff.
However, your comment raises a valid point: the user might not be aware of this initial retry interval and could try setting the max below it without knowing the value would be ignored. I've added a note about this to the readme in be32ada.
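A quick illustration of the clamping behaviour (the 50ms initial interval and doubling multiplier are assumed values, not necessarily the PR's defaults):

```go
package main

import (
	"fmt"
	"time"
)

// Prints the retry intervals for two caps: one above the initial interval
// (normal exponential growth) and one below it (every interval clamps to
// the cap, so the backoff degenerates into fixed-rate polling).
func main() {
	for _, max := range []time.Duration{time.Second, 10 * time.Millisecond} {
		interval := 50 * time.Millisecond // assumed initial retry interval
		fmt.Printf("max=%v:", max)
		for i := 0; i < 6; i++ {
			capped := interval
			if capped > max {
				capped = max
			}
			fmt.Printf(" %v", capped)
			interval *= 2
		}
		fmt.Println()
	}
}
```

This prints `max=1s: 50ms 100ms 200ms 400ms 800ms 1s` versus `max=10ms: 10ms 10ms 10ms 10ms 10ms 10ms`, which is why the cap is only honoured above the initial interval.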
```diff
@@ -628,6 +629,50 @@ func testParallelSchema(t *testing.T) {
 	})
 }
 
+func testPostgresConcurrentMigrations(t *testing.T) {
```
Ran these tests against master and can confirm they fail 🎉. This change makes them pass.
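The diff above only shows the new test's signature; hypothetically, its body could look something like this (the worker count, migrations path, and database URL are placeholders, not the PR's actual test):

```go
package postgres

import (
	"sync"
	"testing"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/postgres"
	_ "github.com/golang-migrate/migrate/v4/source/file"
)

const databaseURL = "postgres://localhost:5432/testdb?sslmode=disable" // placeholder

func testPostgresConcurrentMigrations(t *testing.T) {
	const workers = 5
	var wg sync.WaitGroup
	errs := make(chan error, workers)
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// every worker races for the advisory lock; with
			// pg_try_advisory_lock plus retries, the losers wait
			// and retry instead of failing as on master
			m, err := migrate.New("file://examples/migrations", databaseURL)
			if err != nil {
				errs <- err
				return
			}
			if err := m.Up(); err != nil && err != migrate.ErrNoChange {
				errs <- err
			}
		}()
	}
	wg.Wait()
	close(errs)
	for err := range errs {
		t.Error(err)
	}
}
```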
Looks like there have been some updates to the pipeline, as the tests now run without issues. Maybe we could finally get this one merged? 😁 Pinging reviewers @dhui @eelco @simonjpartridge
@AkuSilvenius I would approve, but I don't have write access in this repo, so it would not matter…
Nice work! Unfortunately I don't think I have write access either. @dhui is the reviewer we need.
Issue
#960

Changes
- Use `pg_try_advisory_lock` instead of `pg_advisory_lock` in Postgres `Lock` to fix the expected behavior of waiting indefinitely to acquire the lock
- Add `x-lock-retry-max-interval` to configure the max interval used for the exponential backoff when retrying `Lock`
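A hedged usage example: the `x-lock-retry-max-interval` parameter name comes from this PR, but the exact value format (shown here as a Go duration string) and the rest of the URL are assumptions:

```go
package main

import (
	"log"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/postgres"
	_ "github.com/golang-migrate/migrate/v4/source/file"
)

func main() {
	// x-lock-retry-max-interval caps the exponential backoff between
	// pg_try_advisory_lock attempts; "2s" as a duration string is an
	// assumed format, see the readme note added in this PR.
	m, err := migrate.New(
		"file://migrations",
		"postgres://user:pass@localhost:5432/db?sslmode=disable&x-lock-retry-max-interval=2s",
	)
	if err != nil {
		log.Fatal(err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		log.Fatal(err)
	}
}
```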