
Postgres: use pg_try_advisory_lock instead of pg_advisory_lock #962

Open · AkuSilvenius wants to merge 19 commits into golang-migrate:master from AkuSilvenius:960_add_try_advisory_lock
Conversation

@AkuSilvenius commented Jul 26, 2023

Issue

#960

Changes

  • Use pg_try_advisory_lock instead of pg_advisory_lock in the Postgres Lock to correctly implement the expected behavior of waiting indefinitely to acquire the lock
  • Add configurable exponential backoff for retrying when the lock is not available
  • Add config parameter x-lock-retry-max-interval to configure the maximum interval used for the exponential backoff when retrying Lock (see the sketch below this list)
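
For illustration, a minimal sketch of the try-lock-plus-backoff idea (a sketch under assumptions, not the PR's actual implementation; the function and parameter names here are made up):

```go
// Minimal sketch (not the PR's code) of acquiring a Postgres advisory lock
// with pg_try_advisory_lock and retrying with exponential backoff. The names
// tryLockWithBackoff, initialInterval and maxInterval are illustrative.
package lock

import (
	"context"
	"database/sql"
	"time"
)

func tryLockWithBackoff(ctx context.Context, db *sql.DB, key int64, initialInterval, maxInterval time.Duration) error {
	interval := initialInterval
	for {
		var acquired bool
		// Unlike pg_advisory_lock, pg_try_advisory_lock returns immediately
		// instead of blocking inside the server.
		if err := db.QueryRowContext(ctx, "SELECT pg_try_advisory_lock($1)", key).Scan(&acquired); err != nil {
			return err
		}
		if acquired {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
		// Exponential backoff, capped at maxInterval
		// (the part the PR exposes via x-lock-retry-max-interval).
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```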

coveralls commented Jan 23, 2024

Coverage Status: 56.463% (+0.1%) from 56.335% when pulling 21469a7 on AkuSilvenius:960_add_try_advisory_lock into 0c456c4 on golang-migrate:master.

@AkuSilvenius AkuSilvenius marked this pull request as ready for review January 23, 2024 10:39
@dhui (Member) left a comment

left a comment in case we decide to go this route

break
}

time.Sleep(100 * time.Millisecond)
@dhui (Member) commented:

Use the backoff library with an exponential backoff + jitter as the default. Note, this may cause other nodes/hosts to take longer to deploy due to the longer wait period so the backoff should be configurable.
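
A sketch of what this suggestion could look like, assuming the library meant is github.com/cenkalti/backoff/v4 (an assumption, and not the PR's actual code); jitter comes from the exponential backoff's randomization factor:

```go
// Sketch of the suggested approach: retry pg_try_advisory_lock via an
// exponential backoff with jitter. The library choice (cenkalti/backoff/v4),
// names, and defaults here are assumptions, not the PR's implementation.
package lock

import (
	"context"
	"database/sql"
	"errors"
	"time"

	"github.com/cenkalti/backoff/v4"
)

var errLockNotAcquired = errors.New("postgres advisory lock not acquired")

func lockWithLibraryBackoff(ctx context.Context, db *sql.DB, key int64, maxInterval time.Duration) error {
	b := backoff.NewExponentialBackOff()
	b.InitialInterval = 50 * time.Millisecond
	b.MaxInterval = maxInterval // the configurable part the comment asks for
	b.MaxElapsedTime = 0        // keep retrying until the context is cancelled

	operation := func() error {
		var acquired bool
		if err := db.QueryRowContext(ctx, "SELECT pg_try_advisory_lock($1)", key).Scan(&acquired); err != nil {
			return backoff.Permanent(err) // query errors are not retried
		}
		if !acquired {
			return errLockNotAcquired // retryable: lock is held elsewhere
		}
		return nil
	}
	// RandomizationFactor (0.5 by default) provides the jitter mentioned above.
	return backoff.Retry(operation, backoff.WithContext(b, ctx))
}
```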

@AkuSilvenius (Author) replied:

@dhui thanks for the feedback, I've added a configurable exponential retry.

> this may cause other nodes/hosts to take longer to deploy due to the longer wait period

I've added defaults and kept them quite small to address the above.

@AkuSilvenius (Author) commented:

Unfortunately the tests keep failing because the device is running out of space.

@eelco commented Apr 18, 2024

I assume the tests were temporarily broken, so this might work after a rebase? I’m interested in this fix landing 😊

@AkuSilvenius (Author) commented:

> I assume the tests were temporarily broken, so this might work after a rebase? I’m interested in this fix landing 😊

@eelco thanks for noting - looks like #1072 fixed the tests, and the pipeline here in the PR is now working and passing. Unfortunately this PR got stuck in sleep mode for quite a while due to the unstable pipeline; I hope we can merge this soon :)

@simonjpartridge commented:
> I assume the tests were temporarily broken, so this might work after a rebase? I’m interested in this fix landing 😊
>
> @eelco thanks for noting, looks like #1072 fixed the tests and pipeline here in the PR is working and passing now - unfortunately this PR got stuck in sleep mode for quite a while due to the unstable pipeline, I hope we can merge this soon :)

Is there anything we can do to help get this PR unblocked and merged?

@AkuSilvenius (Author) replied:

@simonjpartridge all tests are passing now, so just approvals are needed I guess :)

@AkuSilvenius (Author) commented:

Would any of the recent participants in this PR be willing to give it a review now that the pipeline is stable, and approve if it looks good, so we can get this one merged and the issue closed? @dhui @eelco @simonjpartridge

@simonjpartridge left a comment

Looks good overall; one comment that should be actioned IMO, around the setting of the max retry interval.

Comment on lines +230 to 232
if maxRetryInterval > DefaultLockInitialRetryInterval {
	lockConfig.MaxRetryInterval = maxRetryInterval
}


This if statement means a user can't set the max retry interval to less than the default. I can't see a good reason to limit it in this way so why not do this?

Suggested change

-	if maxRetryInterval > DefaultLockInitialRetryInterval {
-		lockConfig.MaxRetryInterval = maxRetryInterval
-	}
+	if maxRetryInterval >= 0 {
+		lockConfig.MaxRetryInterval = maxRetryInterval
+	}

@AkuSilvenius (Author) replied on Jul 8, 2024:

Thanks for the suggestion. The retry strategy here is exponential backoff, meaning that if we end up retrying, later retries should wait longer than earlier ones. If we allowed the user to set maxRetryInterval to less than InitialRetryInterval, the retry interval would never increase, which goes against the idea of exponential backoff.
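
For illustration (the concrete numbers here are assumed, not taken from the PR): with an initial interval of 100ms and a multiplier of 2, successive retries would wait roughly 100ms, 200ms, 400ms, 800ms, and so on, with each wait capped at MaxRetryInterval. A cap below the initial interval would pin every wait to that cap, so the sequence could never grow.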

@AkuSilvenius (Author) added:

However, your comment raises a valid point: the user might not be aware of this initial retry interval and could try setting the max to less than that without knowing it would be ignored. I've added a note about this to the README in be32ada.

@@ -628,6 +629,50 @@ func testParallelSchema(t *testing.T) {
})
}

func testPostgresConcurrentMigrations(t *testing.T) {

Ran these tests against master and can confirm they fail 🎉. This change makes the tests pass
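
For context, a rough sketch of the kind of concurrency such a test exercises (illustrative only; the PR's actual testPostgresConcurrentMigrations is not reproduced here, and the "file://migrations" source path and worker count are assumptions): several goroutines run migrations against the same database, and with pg_try_advisory_lock plus retries all of them should finish without lock errors.

```go
// Illustrative sketch only - not the PR's test code. It drives the public
// golang-migrate API from several goroutines against one database URL.
package concurrency

import (
	"errors"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/postgres"
	_ "github.com/golang-migrate/migrate/v4/source/file"
	"golang.org/x/sync/errgroup"
)

func runConcurrentMigrations(databaseURL string, workers int) error {
	var g errgroup.Group
	for i := 0; i < workers; i++ {
		g.Go(func() error {
			m, err := migrate.New("file://migrations", databaseURL)
			if err != nil {
				return err
			}
			defer m.Close()
			// All but one worker contend for the advisory lock; with
			// try-lock + backoff they wait and then see ErrNoChange.
			if err := m.Up(); err != nil && !errors.Is(err, migrate.ErrNoChange) {
				return err
			}
			return nil
		})
	}
	return g.Wait()
}
```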

@AkuSilvenius (Author) commented:

Looks like there have been some updates to the pipeline, as the tests now run without issues. Maybe we could finally get this one merged? 😁 Pinging reviewers @dhui @eelco @simonjpartridge

@eelco commented Dec 3, 2024

@AkuSilvenius I would approve but I don’t have write access in this repo, so it would not matter…

@simonjpartridge left a comment

Nice work! Unfortunately I don't think I have write access either. @dhui is the reviewer we need.
