Identity Aware Proxy (IAP)
Programmatic Authentication
See the documentation. You need to include a proper JWT in the Authorization
header as a bearer token.
Unfortunately, `gcloud auth print-identity-token --audiences=${CLIENT_ID}` does not appear to work if you want to authenticate as an end user rather than a service account.
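For the service-account case, a minimal sketch of passing the identity token as a bearer token; the endpoint URL is a placeholder for your IAP-protected app:

```bash
# Minimal sketch, assuming gcloud is authenticated as a service account.
# CLIENT_ID is the OAuth client ID of the IAP-protected resource; the URL
# below is a placeholder.
TOKEN=$(gcloud auth print-identity-token --audiences="${CLIENT_ID}")
curl -H "Authorization: Bearer ${TOKEN}" "https://your-iap-protected-app.example.com/"
```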
Troubleshooting
Ensure the protocol from the load balancer to the backend is correct
The protocol should be HTTP or HTTPS, depending on what the server expects.
You can specify the protocol associated with a port using an annotation on the service, e.g.
```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    # app-protocols is a mapping from the port name to the protocol.
    # We need to set the protocol to HTTPS because that's what argocd is using.
    # This causes traffic from the loadbalancer to the backend service to be encrypted.
    service.alpha.kubernetes.io/app-protocols: '{"https":"HTTPS"}'
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8080
```
- The `app-protocols` annotation tells GCP that the port named `https` uses the HTTPS protocol; this causes the traffic from the load balancer to that service to be encrypted using HTTPS.
Backend HealthChecks Unhealthy
To figure out why healthchecks are failing, turn on logging of healthcheck probes.
Refer to the HealthCheck Logging Documentation
The relevant log name, together with a filter for probe results from a specific endpoint, is:

```
logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fhealthchecks"
jsonPayload.healthCheckProbeResult.ipAddress="IP_ADDRESS"
```
Here are some important points:
- You can turn on logging using gcloud or the UI by updating the healthcheck (see the sketch after this list)
- You cannot use BackendConfig to turn on HealthCheck logging
- BackendConfig supports HTTP logging of requests, which is not the same thing
- You might need to restart the pod in order to see healthcheck log entries, since entries are only emitted on health state transitions
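A sketch of the first point, enabling logging via gcloud; HEALTH_CHECK_NAME is a placeholder, and you should pick the subcommand matching your health check's protocol:

```bash
# Enable logging on an existing global HTTP health check.
gcloud compute health-checks update http HEALTH_CHECK_NAME \
  --global --enable-logging
```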
Ensure the backend healthcheck is configured correctly
Refer to the BackendConfig custom health checking configuration
Ensure the target protocol matches the health checking protocol (e.g. HTTP/HTTPS/gRPC)
When using a NEG, make sure the health check port matches the container port on the pod, not the port in the service
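For illustration, a sketch of a BackendConfig with a custom health check, assuming an HTTP server exposing /healthz on pod port 8080 (the name, path, and port are placeholders):

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: server-backendconfig
spec:
  healthCheck:
    type: HTTP            # must match the protocol the backend actually serves
    requestPath: /healthz # placeholder health endpoint
    # With NEGs the health check targets the pod directly, so use the
    # container port, not the service port.
    port: 8080
```

The Service then points at it with the `cloud.google.com/backend-config` annotation.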
The `cloud.google.com/neg` annotation on the Service specifies the exposed port (docs). When I set `exposed_ports` to the target port instead of the service's port, I got an error:

```
Warning ProcessServiceFailed 7m32s (x22 over 24m) neg-controller error processing service "chat/server": port 7860 specified in "cloud.google.com/neg" doesn't exist in the service
```
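For reference, a sketch of the annotation; the key in `exposed_ports` must be a port defined on the Service (443 here matches the Service example above):

```yaml
metadata:
  annotations:
    # Keys of exposed_ports are Service ports, not pod targetPorts.
    cloud.google.com/neg: '{"exposed_ports": {"443": {}}}'
```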
Debugging NEGs
The K8s service should get the annotation `cloud.google.com/neg-status` added to it. If you look at the NEG in the Cloud console, the IP addresses should correspond to Pod IP addresses.
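For example, to dump that annotation for the `chat/server` service from the earlier error:

```bash
# Print the NEG status annotation the controller added to the service.
kubectl get service server -n chat \
  -o jsonpath='{.metadata.annotations.cloud\.google\.com/neg-status}'
```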
Refer to the Troubleshooting section of the standalone NEG documentation. Even though it is for "Standalone NEGs" and not NEGs created through Gateway/Ingress resources, it still has useful information.
The `svcneg` K8s resource should be created for the NEG and provides useful status information.
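For example (namespace from the earlier error; the NEG name is a placeholder):

```bash
# List the ServiceNetworkEndpointGroup resources and inspect one.
kubectl get svcneg -n chat
kubectl describe svcneg NEG_NAME -n chat
```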
Make Sure the Server Is Binding the Right Interface (All Network Interfaces)
A problem we’ve seen in the past is a server binding only to the localhost network interface (127.0.0.1). As a result, when you deploy it inside a pod, it ends up not being reachable from other pods and services on the network; it needs to bind to all interfaces (0.0.0.0).
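One way to check from inside the pod, assuming the image ships `ss` and reusing the hypothetical `chat/server` deployment from earlier:

```bash
# A listener on 127.0.0.1:<port> is reachable only from inside the pod;
# it should be listening on 0.0.0.0 (all interfaces) instead.
kubectl exec -n chat deploy/server -- ss -lntp
```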