
Upgrading from keycloak 20.0.1-20.0.2+ breaks app logout #16586

Closed
2 tasks done
ssill2 opened this issue Jan 23, 2023 · 10 comments
Labels: area/oidc (Indicates an issue on OIDC area), kind/bug (Categorizes a PR related to a bug)
Milestone: 21.0.0

Comments

ssill2 commented Jan 23, 2023

Before reporting an issue

  • I have searched existing issues
  • I have reproduced the issue with the latest release

Area

oidc

Describe the bug

I've been using 20.0.0 for a while now in my wildfly 26 based application using wildfly's oidc client. Today, in my local minikube environment, I upgraded my keycloak image to be based on 20.0.3, then built and deployed. Everything worked fine except for logout. When I click the logout button in the UI, I get the following. Under the covers, the app uses the httpservlet logout method.
[screenshot: 502 error page returned by the ingress]
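
(For context, a minimal sketch of what a Servlet-API logout handler like the one described typically looks like; this is not the reporter's code, and the servlet path and redirect target are hypothetical placeholders. WildFly 26 is Jakarta EE 8, hence the javax.* imports.)

// Sketch only: logout via HttpServletRequest#logout(), as described above.
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

@WebServlet("/logout")
public class LogoutServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Clear the container-managed authentication for this request.
        req.logout();
        // Drop the local HTTP session, if any.
        HttpSession session = req.getSession(false);
        if (session != null) {
            session.invalidate();
        }
        // Send the user back to the app root, which should trigger a fresh login.
        resp.sendRedirect(req.getContextPath() + "/");
    }
}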

To make sure I wasn't crazy, I went back to 20.0.0, and it worked fine. To isolate where this broke, I changed my keycloak docker build to be based on 20.0.1. That worked. Next I tried 20.0.2, and the problem returned. So something to do with logout changed from 20.0.1 to 20.0.2.
I started to diff between the 20.0.1 and 20.0.2 tags, and there are definitely changes relating to logout.

I just finished testing with the following options turned on, in a bid to work around the issue (how they were passed is sketched below this list). I have not had to use these before.

  • --spi-login-protocol-openid-connect-legacy-logout-redirect-uri=true
  • --spi-login-protocol-openid-connect-suppress-logout-confirmation-screen=true

These didn't change any behavior for me.
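
(As a rough sketch of how such SPI options are passed to the Quarkus-based Keycloak distribution; the path below assumes the official container image and is not the reporter's exact setup.)

# Sketch, assuming the official Keycloak container image layout;
# other start options (hostname, db, proxy, etc.) omitted.
/opt/keycloak/bin/kc.sh start \
  --spi-login-protocol-openid-connect-legacy-logout-redirect-uri=true \
  --spi-login-protocol-openid-connect-suppress-logout-confirmation-screen=true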

When I get the 502 shown above, this is the URL in the browser:
http://auth.local/realms/ISPSS/protocol/openid-connect/auth?response_type=code&client_id=dev-guardian&redirect_uri=http%3A%2F%2Fguardian.local%2Fguardian&state=216ff5e2-edb8-4469-b675-29ca2912c50d&scope=openid

Once this starts happening, it requires that I close the browser completely and open it again before I can log in to the application again. I've been reviewing the migration guide, and I thought perhaps it might be related to the id_token_hint and post_logout_redirect_uri parameters.
I have an upgrade from wildfly 26 to 27 for my app slated to start soon, so I might see whether upgrading to wildfly 27 (and its presumably newer oidc client) fixes the issue.
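
(For context on those parameters: since Keycloak 18, RP-initiated logout is expected to hit the realm's logout endpoint roughly as sketched below. The realm and redirect values reuse the URL above; the token value is a placeholder.)

http://auth.local/realms/ISPSS/protocol/openid-connect/logout
    ?id_token_hint=<ID token from the login response>
    &post_logout_redirect_uri=http%3A%2F%2Fguardian.local%2Fguardian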

I was surprised that a small point release change broke this. I was more expecting something like 20.x to 21.x to do that.

Version

20.0.3

Expected behavior

Logging out should return the user to the login page they would see when logging in for the first time.

Actual behavior

I get a 502 page served up by the kubernetes (minikube) ingress instead of the login page. Once this happens, it requires closing the browser to clear the session before I can log in again.

How to Reproduce?

Deploy a simple JEE web app in wildfly 26.1.3 that is configured to use OIDC.
When I first set things up, I was on keycloak 17 and wildfly 26. I've been upgrading keycloak since then.

This guide is what I used to configure keycloak auth in my app initially.
https://wildfly-security.github.io/wildfly-elytron/blog/securing-wildfly-apps-openid-connect/
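
(The gist of that guide, as a sketch only: an oidc.json deployment descriptor plus an OIDC login-config. The realm and client-id values below are reused from the URL shown earlier in this report for illustration, not taken from the reporter's actual config.)

WEB-INF/oidc.json (illustrative values):
{
  "provider-url": "http://auth.local/realms/ISPSS",
  "client-id": "dev-guardian",
  "public-client": true
}

plus a web.xml login-config with <auth-method>OIDC</auth-method>.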

Anything else?

No response

@ssill2 ssill2 added kind/bug Categorizes a PR related to a bug status/triage labels Jan 23, 2023
@ghost ghost added the area/oidc Indicates an issue on OIDC area label Jan 23, 2023

ssill2 commented Jan 23, 2023

Also, there are no errors in the keycloak or wildfly container logs indicating a problem.


Persi commented Jan 25, 2023

We had the same issue. It seems to be a proxy buffer problem in nginx: the headers sent by Keycloak exceed the default buffer size, which causes nginx to fail the request with a 502.

After adding the following settings to our server block in nginx config, the keycloak requests worked as before:

proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
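
(For anyone applying this to a standalone nginx reverse proxy, a minimal sketch of where those directives sit; the server name and upstream address are placeholders, not from this thread.)

server {
    listen 80;
    server_name auth.example.com;        # placeholder

    location / {
        proxy_pass http://keycloak:8080; # placeholder upstream
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Enlarged buffers so Keycloak's long response headers fit
        # (the nginx defaults are one memory page, typically 4k or 8k).
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}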


ssill2 commented Jan 25, 2023

Interesting. I always use minikube as a way to test before I deploy to GKE. I guess I'll have to deploy in my staging env on GKE to see if anything needs to be done there. I'll give this a try locally by patching the nginx-ingress-controller with the settings. Thanks!


ssill2 commented Jan 25, 2023

How did you apply the settings?

I tried adding your settings to my existing annotation in the ingress, which I had added to turn on sticky sessions for vaadin.

nginx.ingress.kubernetes.io/configuration-snippet:
proxy_set_header X-Forwarded-For $proxy_protocol_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;

It doesn't seem to fix the 502 with 20.0.2 or 20.0.3.

Thanks,
Stephen


Persi commented Jan 25, 2023

Hi @ssill2,

We applied this configuration directly in the nginx configuration file, as in our case it was a plain local installation with no kubernetes ingress.

For a Kubernetes ingress, my guess would be that, as described here, using the Ingress annotations nginx.org/proxy-buffers and nginx.org/proxy-buffer-size, or the mentioned config map keys proxy-buffers and proxy-buffer-size, should do the job.

It seems like the ingress does not support configuration of proxy_busy_buffers_size, but I guess it could work either way.
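
(A sketch of the config map variant, assuming an nginx ingress controller that reads these keys from its ConfigMap; the ConfigMap name and namespace below are placeholders that depend on how the controller was installed.)

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config       # placeholder: your controller's ConfigMap name
  namespace: nginx-ingress # placeholder: your controller's namespace
data:
  proxy-buffers: "4 256k"
  proxy-buffer-size: "128k"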

Greetings,
Marcus


ssill2 commented Jan 25, 2023

@Persi

I actually tried the specific annotations first, before adding them to the snippet annotation, and it didn't seem to solve the issue either. I found exactly the same thing: there is no specific annotation for the busy buffer size. I'll give it another try, though. I may just have to upgrade our staging keycloak, which is in GCP/GKE and uses the google ingress, not the nginx-ingress that minikube does.

I'll let you know how it turns out.

Thanks for the response!
Stephen



ssill2 commented Jan 25, 2023

So I tried the following annotation configuration in my development ingress in minikube. The behavior didn't change.

  annotations:
    kubernetes.io/ingress.allow-http: "true"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.org/proxy-buffers: "4 256k"
    nginx.org/proxy-buffer-size: "128k"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Forwarded-For $proxy_protocol_addr;
      proxy_set_header X-Forwarded-Proto  $scheme;
      proxy_set_header Host $host;
      proxy_busy_buffers_size 256k;

I was really hopeful that your settings would just fix the problem lol.

The problem with everything being in kubernetes is that I don't have tools like tcpdump to look at what's actually happening in the http conversations between my app's pod and the keycloak pod. But I find it fascinating that .1 -> .2 made all the difference.


Persi commented Jan 25, 2023

Ah, you probably have to use the standard kubernetes Ingress annotations: instead of nginx.org/proxy-buffer-size try nginx.ingress.kubernetes.io/proxy-buffer-size, and instead of nginx.org/proxy-buffers try nginx.ingress.kubernetes.io/proxy-buffers-number.


ssill2 commented Jan 25, 2023

ah cool, I'll give that a try here in a bit :) Thanks!


ssill2 commented Jan 25, 2023

Hey! I can confirm this fixed it!

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dev-apps-ingress
  namespace: {{ .Values.namespace }}
  annotations:
    kubernetes.io/ingress.allow-http: "true"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/proxy-buffers: "4 256k"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Forwarded-For $proxy_protocol_addr;
      proxy_set_header X-Forwarded-Proto  $scheme;
      proxy_set_header Host $host;
      proxy_busy_buffers_size 256k;

Changing those annotations to be nginx.ingress.kubernetes.io/* made them take effect. The 502 problem went away!

Thanks Marcus! 😄

@ssill2 ssill2 closed this as completed Jan 25, 2023
@ghost ghost removed the status/triage label Jan 25, 2023
@stianst stianst added this to the 21.0.0 milestone Feb 21, 2023