Hi folks,
In public cloud environments or Kubernetes environments, PTR records
are difficult or impossible for administrators to set. We increasingly
have to tell users to set "rdns = fallback" or "rdns = false".

I'm wondering what the original purpose of Kerberos' rdns feature was.
Why would a client want or need to do hostname canonicalization?

I'm also wondering if we will ever be able to default MIT Kerberos'
rdns setting to "fallback" or "false" in a future version. IMHO this
would make it easier to deploy Kerberos applications in modern hosting
environments.

- Ken
On Tue, 2020-05-26 at 15:09 -0600, Ken Dreyer wrote:
> Hi folks,
>
> In public cloud environments or Kubernetes environments, PTR records
> are difficult or impossible for administrators to set. We increasingly
> have to tell users to set "rdns = fallback" or "rdns = false".
>
> I'm wondering what the original purpose of Kerberos' rdns feature was.
> Why would a client want or need to do hostname canonicalization?
>
> I'm also wondering if we will ever be able to default MIT Kerberos'
> rdns setting to "fallback" or "false" in a future version. IMHO this
> would make it easier to deploy Kerberos applications in modern hosting
> environments.

FWIW, in RHEL and Fedora we have set rdns = false by default since
2013, and we are now also setting dns_canonicalize_hostname to
fallback by default.

Simo.

--
Simo Sorce
RHEL Crypto Team
Red Hat, Inc
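For reference, a minimal sketch of those Fedora-style defaults in
krb5.conf; this is illustrative only, so check your distribution's
shipped file for the authoritative layout:

    [libdefaults]
        # Never consult reverse DNS when canonicalizing a service
        # hostname.
        rdns = false
        # Try the hostname as given first; fall back to forward DNS
        # canonicalization only if that request fails (krb5 >= 1.18).
        dns_canonicalize_hostname = fallback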
On 5/26/20 5:09 PM, Ken Dreyer wrote:
> In public cloud environments or Kubernetes environments, PTR records
> are difficult or impossible for administrators to set. We increasingly
> have to tell users to set "rdns = fallback" or "rdns = false".

Note that dns_canonicalize_hostname and rdns are separate settings.
dns_canonicalize_hostname supports "fallback", but rdns only supports
true or false (and only takes effect when DNS canonicalization
happens).

If a PTR record is not set at all, the library should by default use
the forward canonicalization result. The problem happens when there is
a PTR record, but it has a non-useful value.

> I'm wondering what the original purpose of Kerberos' rdns feature was.
> Why would a client want or need to do hostname canonicalization?

Forward DNS canonicalization was a convenience for CNAMEs and
non-qualified hostnames. We now have a qualify_shortname feature to
address single-label names by adding a domain suffix, but it only
appeared in 1.18.

The additional reverse canonicalization step was, to the best of my
hazy understanding, aimed specifically at a historical element of the
Athena computing environment at MIT, most likely a pool of dialups
which load-balanced via A record. We know that the rdns=true default
is an inconvenience for many environments.

> I'm also wondering if we will ever be able to default MIT Kerberos'
> rdns setting to "fallback" or "false" in a future version. IMHO this
> would make it easier to deploy Kerberos applications in modern hosting
> environments.

I floated the idea of changing the rdns default to false some years
ago, and got the sense that it would be traumatic for a number of
existing deployments. Client library upgrades are generally not
deliberate acts (they arrive with OS updates), so warning people via
release notes doesn't really help.

Changing the default for dns_canonicalize_hostname to "fallback" would
be less likely to be traumatic. Fedora is starting to put
dns_canonicalize_hostname=fallback in its shipped default krb5.conf.
(It has put rdns=false in that file since 2013.) If there isn't much
fallout, we might change the library default in 1.19. That change
wouldn't help when the forward-but-not-reverse canonicalization result
is best, but it does help if the originally entered name or its
shortname-qualified version is correct.

We had a design floating around for a protocol extension where the KDC
could set a "please don't canonicalize" ticket flag. Unfortunately no
progress has been made on this idea.
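To illustrate the two lookup steps with invented names (the forward
step that dns_canonicalize_hostname controls, then the extra reverse
step that rdns adds):

    $ dig +short A www.example.com    # forward: follow CNAME to A record
    server1.example.com.
    192.0.2.10
    $ dig +short -x 192.0.2.10        # reverse: the rdns step
    pool3.dialup.example.com.

With rdns=true the client builds its service principal from
pool3.dialup.example.com, even though the administrator most likely
keyed the service as server1.example.com.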
On 5/26/2020 5:09 PM, Ken Dreyer wrote:
> Hi folks,
>
> In public cloud environments or Kubernetes environments, PTR records
> are difficult or impossible for administrators to set. We increasingly
> have to tell users to set "rdns = fallback" or "rdns = false".

As described in RFC 4120 Section 1.3
(https://tools.ietf.org/html/rfc4120#section-1.3), Kerberos
implementations "MUST NOT use insecure DNS queries to canonicalize the
hostname components of the service principal names." That said, MIT
and Heimdal have canonicalized hostnames using insecure DNS since the
beginning of time, and changing the defaults is sure to break
authentication for some unknown number of sites.

> I'm wondering what the original purpose of Kerberos' rdns feature was.
> Why would a client want or need to do hostname canonicalization?

There are two reasons that scream at me:

1. Before the introduction of Kerberos referrals by Microsoft (and
   their later standardization and adoption by MIT, Heimdal, ...),
   clients required the PTR name in order to determine the true
   "domain" for host domain-to-realm mapping. With Kerberos referrals
   it is best if the Kerberos client sends the initial service ticket
   request to a KDC in the client principal's realm and allows the KDC
   to refer the client to the first cross-realm hop if required. There
   are still too many systems carrying client-side domain_realm
   mapping data that would break if "rdns" were turned off.

2. Before the existence of DNS SRV records, CNAME records were the
   only method of offering a service on multiple hosts. However, it's
   a poor idea to share the same key across all of the hosts, so the
   DNS PTR record is used to identify the name of the host that was
   actually contacted. Even with the existence of SRV records, too few
   application protocols use them. And even for services hosted on a
   single system, CNAME records are convenient because they permit
   migration of a service from an old machine to a new one.

Again, disabling "rdns" by default will break an unknown number of
application clients.

> I'm also wondering if we will ever be able to default MIT Kerberos'
> rdns setting to "fallback" or "false" in a future version. IMHO this
> would make it easier to deploy Kerberos applications in modern hosting
> environments.

I'm unaware of any OS distribution that ships Kerberos without
providing some default equivalent of /etc/krb5.conf. Those
distributions can of course add whatever default settings they want,
with appropriate documentation. If a distribution ships a default
krb5.conf with "rdns = false", then an end user who replaces that file
with their organization's krb5.conf will not be broken. If the
hard-coded default is changed instead, then installing the
organization's krb5.conf might not work as intended.

Jeffrey Altman
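As a hypothetical sketch of the client-side mapping data the first
point above describes (domain and realm names invented): the rules are
keyed by canonical hostname, so a client that skips canonicalization
may present a name they do not match.

    [domain_realm]
        # Longest-match rules mapping hostnames/domains to realms.
        .cs.example.edu = CS.EXAMPLE.EDU
        .example.edu    = EXAMPLE.EDU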
On Tue, May 26, 2020 at 3:56 PM Greg Hudson <[hidden email]> wrote:
> On 5/26/20 5:09 PM, Ken Dreyer wrote:
> > In public cloud environments or Kubernetes environments, PTR records
> > are difficult or impossible for administrators to set. We increasingly
> > have to tell users to set "rdns = fallback" or "rdns = false".
>
> Note that dns_canonicalize_hostname and rdns are separate settings.
> dns_canonicalize_hostname supports "fallback", but rdns only supports
> true or false (and only takes effect when DNS canonicalization happens).

My bad, you're right. I meant dns_canonicalize_hostname=fallback.

I've found some public cloud providers with very weird PTR records for
the IP addresses that they hand out. These records are worse than
NXDOMAIN, and I was confused to see them in my logs.

- Ken
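For instance, something like the following (the address and naming
scheme are invented, loosely modeled on common cloud-provider
defaults):

    $ dig +short -x 203.0.113.25
    instance-203-0-113-25.cloud.example.net.

A client with rdns enabled would then request a ticket for
host/instance-203-0-113-25.cloud.example.net@REALM, a principal that
no service administrator is likely to have keyed.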
On Tue, May 26, 2020 at 3:58 PM Jeffrey Altman
<[hidden email]> wrote:
>
> 2. Before the existence of DNS SRV records, CNAME records were the
>    only method of offering a service on multiple hosts. However, it's
>    a poor idea to share the same key across all of the hosts.

I'm curious about this. What makes it a poor idea?

It seems like a very convenient way to scale a service up and down
dynamically when you share a key among all instances.

> Again, disabling "rdns" by default will break an unknown number
> of application clients.

Sure. My point is that it breaks the other way for modern
architectures where PTR records will never be under an application
developer's control. With Kubernetes a service can appear to clients
to move IPs very quickly. I'm not defending Kubernetes or anything
here; I'm wildly speculating that maybe breaking with the past is a
good idea as more applications and developers move in this direction.

- Ken
On 5/26/2020 6:31 PM, Ken Dreyer wrote:
> On Tue, May 26, 2020 at 3:58 PM Jeffrey Altman
> <[hidden email]> wrote:
>>
>> 2. Before the existence of DNS SRV records, CNAME records were the
>>    only method of offering a service on multiple hosts. However, it's
>>    a poor idea to share the same key across all of the hosts.
>
> I'm curious about this. What makes it a poor idea?
>
> It seems like a very convenient way to scale a service up and down
> dynamically when you share a key among all instances.

Because if you hack into one of the hosts you now have the key for all
of the hosts. The holder of the key can forge tickets for any user.
Since the key isn't unique, the entire distributed service has to be
shut down to address the vulnerability. It is also much harder to
trace where the key was stolen from.

There are scalable approaches to deriving unique keys for Kubernetes,
but they aren't pertinent to this thread.

>> Again, disabling "rdns" by default will break an unknown number
>> of application clients.
>
> Sure. My point is that it breaks the other way for modern
> architectures where PTR records will never be under an application
> developer's control. With Kubernetes a service can appear to clients
> to move IPs very quickly. I'm not defending Kubernetes or anything
> here; I'm wildly speculating that maybe breaking with the past is a
> good idea as more applications and developers move in this direction.

My point is that Kubernetes is new, and new deployments can add the
appropriate keys to their default configurations, as Red Hat already
does on Fedora and Enterprise Linux. If you change the hard-coded
default, then existing deployed installations that rely on that
default will silently break. And since the breakage is on the client
side, which is altered without the knowledge of the service
administrators, the administrators cannot fix it.
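The conventional per-host alternative, sketched here with hypothetical
principal names, gives each backend its own randomly-generated key so
that a stolen keytab compromises only one host:

    # One randomized key per backend host.
    kadmin -q "addprinc -randkey HTTP/node1.example.com"
    kadmin -q "addprinc -randkey HTTP/node2.example.com"
    # Extract node1's key into a keytab deployed only on node1.
    kadmin -q "ktadd -k /etc/httpd/node1.keytab HTTP/node1.example.com"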
On Tue, May 26 2020 at 18:59:23 -0400, Jeffrey Altman scribbled
in "Re: rdns, past and future":

> On 5/26/2020 6:31 PM, Ken Dreyer wrote:
> > On Tue, May 26, 2020 at 3:58 PM Jeffrey Altman
> > <[hidden email]> wrote:
> >>
> >> 2. Before the existence of DNS SRV records, CNAME records were the
> >>    only method of offering a service on multiple hosts. However,
> >>    it's a poor idea to share the same key across all of the hosts.
> >
> > I'm curious about this. What makes it a poor idea?
> >
> > It seems like a very convenient way to scale a service up and down
> > dynamically when you share a key among all instances.
>
> Because if you hack into one of the hosts you now have the key for
> all of the hosts. The holder of the key can forge tickets for any
> user. Since the key isn't unique, the entire distributed service has
> to be shut down to address the vulnerability. It is also much harder
> to trace where the key was stolen from.

Also, as another simpler example, it can make key management more
involved rather than more convenient. Moving and sharing sensitive
material around is awkward, while running `ktadd` on a new cluster
member is trivial -- but with a shared key across all cluster members,
that trivial `ktadd` breaks every member except the newest (as `ktadd`
does an implicit randkey). I've seen too many fresh sysadmins break
things that way...

Cheers.

Dameon.

--
><> ><> ><> ><> ><> ><> ooOoo <>< <>< <>< <>< <>< <><
Dr. Dameon Wagner, Unix Platform Services
IT Services, University of Oxford
><> ><> ><> ><> ><> ><> ooOoo <>< <>< <>< <>< <>< <><
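A sketch of that pitfall and the usual workaround, with invented names
(note that -norandkey is only available through kadmin.local):

    # Breaks a shared-key cluster: ktadd generates a new random key,
    # invalidating the keytabs already deployed on every other member.
    kadmin -q "ktadd -k /etc/krb5.keytab HTTP/service.example.com"

    # Extracts the current key without changing it.
    kadmin.local -q "ktadd -norandkey -k /etc/krb5.keytab HTTP/service.example.com"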
On Tue, May 26, 2020 at 4:59 PM Jeffrey Altman
<[hidden email]> wrote:
>
> On 5/26/2020 6:31 PM, Ken Dreyer wrote:
> > On Tue, May 26, 2020 at 3:58 PM Jeffrey Altman
> > <[hidden email]> wrote:
> >>
> >> 2. Before the existence of DNS SRV records, CNAME records were the
> >>    only method of offering a service on multiple hosts. However,
> >>    it's a poor idea to share the same key across all of the hosts.
> >
> > I'm curious about this. What makes it a poor idea?
> >
> > It seems like a very convenient way to scale a service up and down
> > dynamically when you share a key among all instances.
>
> Because if you hack into one of the hosts you now have the key for all
> of the hosts. The holder of the key can forge tickets for any user.

This is true only if the administrator has enabled constrained
delegation for that key (e.g. ok_to_auth_as_delegate), right? Is there
some other scenario I'm missing?

> Since the key isn't unique, the entire distributed service has to be
> shut down to address the vulnerability.

OK, that makes sense. I was thinking of a homogeneous environment
where each app server runs the exact same versions of code, so an
attacker's entry through a vulnerability on one system means that all
systems almost certainly have the same vulnerability.

> It is also much harder to trace where the key was stolen from.

Yeah, that's fair.

- Ken
On Wed, 2020-05-27 at 11:59 -0600, Ken Dreyer wrote:
> On Tue, May 26, 2020 at 4:59 PM Jeffrey Altman
> <[hidden email]> wrote:
> > On 5/26/2020 6:31 PM, Ken Dreyer wrote:
> > > On Tue, May 26, 2020 at 3:58 PM Jeffrey Altman
> > > <[hidden email]> wrote:
> > > > 2. Before the existence of DNS SRV records, CNAME records were the
> > > >    only method of offering a service on multiple hosts. However,
> > > >    it's a poor idea to share the same key across all of the hosts.
> > >
> > > I'm curious about this. What makes it a poor idea?
> > >
> > > It seems like a very convenient way to scale a service up and down
> > > dynamically when you share a key among all instances.
> >
> > Because if you hack into one of the hosts you now have the key for all
> > of the hosts. The holder of the key can forge tickets for any user.
>
> This is true only if the administrator has enabled constrained
> delegation for that key (e.g. ok_to_auth_as_delegate), right? Is there
> some other scenario I'm missing?

If you own a service key, you can forge a ticket from any user to
yourself without any issue. That by itself is not a problem, as there
is no point in breaking into yourself. But if multiple services use
the same key, then stealing the common key lets you forge tickets from
any user to any of those services. That is the bad part, because at
that point you can jump from one system to another.

That said, in the Kubernetes case the multiple "services" are *not*
actually distinct services; they are generally a single service
implemented by multiple containers for scaling purposes. For all
intents and purposes, sharing the key in a Kubernetes environment is
not as disastrous: once you break into one container you have already
broken into that service layer as a whole, since you usually get
access to other shared keys (database credentials, for example).

> > Since the key isn't unique, the entire distributed service has to be
> > shut down to address the vulnerability.
>
> OK, that makes sense. I was thinking of a homogeneous environment
> where each app server runs the exact same versions of code, so an
> attacker's entry through a vulnerability on one system means that all
> systems almost certainly have the same vulnerability.

Exactly. In the special case where sharing happens within the confines
of the same security domain, essentially for scaling reasons, you can
definitely make the choice to share keys.

> > It is also much harder to trace where the key was stolen from.
>
> Yeah, that's fair.

In Kubernetes you usually have better telemetry than on classic
systems, and it is normally kept remote from the stateless container,
which makes it much harder for an attacker to alter it to cover their
tracks. So perhaps this concern can also be better managed there, if
you care to.

Simo.

--
Simo Sorce
RHEL Crypto Team
Red Hat, Inc