Fix helm linting error for coredns.serviceType #615

Merged · 1 commit into master · Sep 13, 2021

Conversation

@somaritane (Contributor) commented Sep 10, 2021

This PR fixes the helm linting error for the coredns.serviceType value when the k8gb chart dependency (coredns) has not yet been downloaded:

[ERROR] templates/: template: k8gb/templates/operator.yaml:116:18: executing "k8gb/templates/operator.yaml" at <eq "LoadBalancer" .Values.coredns.serviceType>: error calling eq: incompatible types for comparison

It also fixes kube-linter security context errors in the service-delete-hook helm template
Signed-off-by: Timofey Ilinykh [email protected]
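
For context, a minimal sketch of the kind of nil-safe guard that avoids this comparison error in operator.yaml (the exact change in this PR may differ; the guarded body below is a placeholder):

{{/* Sketch: default the value to an empty string so `eq` never compares
     a string against nil when the coredns subchart values are absent. */}}
{{- if eq "LoadBalancer" (default "" .Values.coredns.serviceType) }}
  # ... LoadBalancer-specific settings ...
{{- end }}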

@kuritka previously approved these changes Sep 10, 2021
@k0da (Collaborator) commented Sep 10, 2021

This is weird, serviceType has been available within coredns for ages: coredns/helm@764e58c

@somaritane (Contributor, Author)

> This is weird, serviceType has been available within coredns for ages: coredns/helm@764e58c

This happens when the coredns dependency chart has not yet been downloaded via helm dependency update.
It also happens when the k8gb chart is being installed with https://fleet.rancher.io/.
So I thought it would be safe to make helm lint work even before the dependency chart is downloaded.
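
For reference, this is roughly how the failure can be reproduced on a fresh checkout (assuming the chart lives at chart/k8gb, as in the lint output later in this thread):

❯ helm lint chart/k8gb            # fails while charts/ does not yet contain coredns
❯ helm dependency update chart/k8gb
❯ helm lint chart/k8gb            # passes once the coredns dependency is downloaded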

@somaritane (Contributor, Author)

Had to fix kube-linter security context errors in the service-delete-hook helm template as well.
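
For illustration, a minimal sketch of a container-level securityContext that satisfies kube-linter's default checks such as run-as-non-root (the exact fields changed in the hook template may differ):

securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true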

@ytsarev (Member) left a comment

Looks good. Do we know why kube-linter started to detect it only recently? The hook has been in master for a while.

@somaritane (Contributor, Author)

> Do we know why kube-linter started to detect it only recently? The hook has been in master for a while.

I was wondering about the same thing.
The kube-linter action monitors only the chart directory.
https://github.com/k8gb-io/k8gb/blob/master/.github/workflows/kube-linter.yaml#L28
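
For illustration, the relevant workflow step looks roughly like this, assuming the stackrox/kube-linter-action is used with its directory input (the actual workflow may differ):

- name: Scan chart with kube-linter
  uses: stackrox/kube-linter-action@v1
  with:
    directory: chart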

The kube-linter check passed without errors on #614, but with the following warning:

Warning: no valid objects found.

https://github.com/k8gb-io/k8gb/runs/3553924913?check_suite_focus=true#step:3:20

This warning is usually displayed when you point kube-linter at a directory without helm templates or kube manifests:

❯ kube-linter lint docs 
Warning: no valid objects found.

If the directory is valid and there are no lint issues, the output looks like this:

❯ kube-linter lint chart 
KubeLinter 0.2.3

No lint errors found!

Instead, until now we only had the warning above in the action logs.

@ytsarev (Member) commented Sep 13, 2021

That is interesting, I've reproduced the kube-linter issue in master:

> kube-linter lint -v chart/k8gb/
Warning: failed to load object from chart/k8gb/: failed to render: template: k8gb/templates/operator.yaml:116:18: executing "k8gb/templates/operator.yaml" at <eq "LoadBalancer" .Values.coredns.serviceType>: error calling eq: incompatible types for comparison
Warning: no valid objects found.
> echo $?
0

@ytsarev (Member) commented Sep 13, 2021

Also confirming that this PR will solve terrascan scanning issues, great catch @somaritane

@ytsarev (Member) commented Sep 13, 2021

I've sent a PR upstream: stackrox/kube-linter#209

@somaritane merged commit 0fe6267 into master Sep 13, 2021
@somaritane deleted the fix-helm-linting branch Sep 13, 2021 12:15