KDC query client performance

KDC query client performance

Greg Hudson
We've been looking into some cases where MIT krb5 imposes unreasonable
performance penalties on scenarios where krb5 doesn't even wind up
getting used.  For instance, in one scenario, turning on ssh's
GSSAPIKeyExchange feature caused 96 DNS requests and 12 KDC requests
to conclude that there was no krb5 support for a target host on a
local network, for a delay of about four seconds.

As a first step, I've restructured the locate/sendto code so that we
don't resolve hostnames until we need them.  (I haven't yet extended
the KDC location module to be able to take advantage of this support.)
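
For illustration, the shape of that change is roughly the following -- a sketch only, not the actual locate/sendto code, and the struct and function names are made up:

    #include <string.h>
    #include <sys/socket.h>
    #include <netdb.h>

    /* Hypothetical server-list entry; not the real libkrb5 types. */
    struct server_entry {
        char hostname[256];
        struct addrinfo *addrs;     /* NULL until the entry is actually tried */
    };

    /* Resolve a hostname only when we are about to contact that server,
     * so entries we never reach cost no DNS traffic at all. */
    static struct addrinfo *server_addrs(struct server_entry *srv)
    {
        struct addrinfo hints;

        if (srv->addrs != NULL)
            return srv->addrs;              /* resolved on an earlier try */

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;        /* IPv4 or IPv6 */
        hints.ai_socktype = SOCK_DGRAM;
        if (getaddrinfo(srv->hostname, "88", &hints, &srv->addrs) != 0)
            srv->addrs = NULL;              /* lookup failed; caller skips it */
        return srv->addrs;
    }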

Some other steps we'd like to consider:

1. Turn off the realm walk on the client by default.  This is the
logic where the client assumes that (a) cross-realm key sharing is
most likely to be arranged along the domain hierarchy of realms, and
(b) the local KDC is only smart enough to return a cross-tgt for the
realm we ask for, not for an intermediate realm.  The second
assumption is no longer likely to be true; for quite a long time now,
KDCs have been smart enough to perform the realm walk internally and
respond with a TGT referral.  The down side of the realm walk is that
we commonly make three or more KDC queries to determine that a guessed
target realm doesn't exist within the local realm's federation.

It would actually be nice to eliminate this support entirely, as it's
a big source of complexity in the TGS request code.  But a more
conservative first step is to turn it off and allow it to be turned
back on.

2. Speeding up the client retry loop, so that it doesn't take as long
to time out when you're behind a firewall which black-holes port 88.
Currently we wait one second per UDP address per pass (and per TCP
address on the first pass), and also wait 2s/4s/8s/16s (or 30s in
total) at the end of each pass.

In order to be nice to KDC load, I think it's still prudent to wait
one second per server address on the first pass.  After that we're
mostly trying to be nice to the network, and networks have gotten much
faster.  So I think once we reach the end of the first pass, we ought
to speed everything up by a factor of ten--that is, wait only 100ms
between UDP queries on the second and later passes, and wait
200ms/400ms/800ms/1600ms at the end of passes.
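
For illustration, the proposed schedule works out like this (a sketch with a hypothetical three KDC addresses, ignoring the TCP attempts on the first pass; not actual sendto code):

    #include <stdio.h>

    /* Per-address delay: keep 1s on the first pass to be gentle on KDC
     * load, then drop to 100ms on later passes. */
    static int per_address_delay_ms(int pass)
    {
        return pass == 0 ? 1000 : 100;
    }

    /* End-of-pass backoff: 200ms/400ms/800ms/1600ms instead of today's
     * 2s/4s/8s/16s. */
    static int end_of_pass_delay_ms(int pass)
    {
        return 200 << pass;
    }

    int main(void)
    {
        int pass, naddrs = 3, total = 0;

        for (pass = 0; pass < 4; pass++) {
            int ms = naddrs * per_address_delay_ms(pass)
                + end_of_pass_delay_ms(pass);
            printf("pass %d: %d ms\n", pass, ms);
            total += ms;
        }
        printf("worst case: about %d ms\n", total);
        return 0;
    }

With three unreachable addresses that comes to roughly seven seconds in the worst case, versus well over thirty seconds with the current timings.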

3. Eliminate the second default UDP port (750) when parsing profile
kdc entries.  When a KDC is inaccessible, this causes extra delays,
and also extra DNS requests due to the way the code is structured.  We
have always restricted the second default port to UDP over IPv4,
likely because it was intended as a krb4 transition measure.

Unfortunately, this change is likely to break a handful of deployments
which happen to serve KDC requests only on port 750 and win because
they only need it to work over IPv4 UDP (and don't have any Heimdal
clients, or configure their Heimdal clients to use port 750
explicitly).  I'm not sure if it's worth not breaking these
environments at the cost of extra delays in more common cases.
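
For the record, a deployment that really does depend on port 750 could keep it working by listing the port explicitly in its kdc entries instead of relying on the implicit second default -- something like this hypothetical stanza:

    [realms]
        EXAMPLE.COM = {
            kdc = kdc1.example.com
            kdc = kdc1.example.com:750
        }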

Re: KDC query client performance

Henry B. Hotz

On Feb 14, 2011, at 9:27 AM, [hidden email] wrote:

> Message: 1
> Date: Sun, 13 Feb 2011 16:52:24 -0500 (EST)
> From: [hidden email]
> Subject: KDC query client performance
> To: [hidden email]
> Message-ID: <[hidden email]>
>
> We've been looking into some cases where MIT krb5 imposes unreasonable
> performance penalties on scenarios where krb5 doesn't even wind up
> getting used.  For instance, in one scenario, turning on ssh's
> GSSAPIKeyExchange feature caused 96 DNS requests and 12 KDC requests
> to conclude that there was no krb5 support for a target host on a
> local network, for a delay of about four seconds.
>
> As a first step, I've restructured the locate/sendto code so that we
> don't resolve hostnames until we need them.  (I haven't yet extended
> the KDC location module to be able to take advantage of this support.)
>
> Some other steps we'd like to consider:
>
> 1. Turn off the realm walk on the client by default.  This is the
> logic where the client assumes that (a) cross-realm key sharing is
> most likely to be arranged along the domain hierarchy of realms, and
> (b) the local KDC is only smart enough to return a cross-tgt for the
> realm we ask for, not for an intermediate realm.  The second
> assumption is no longer likely to be true; for quite a long time now,
> KDCs have been smart enough to perform the realm walk internally and
> respond with a TGT referral.  The down side of the realm walk is that
> we commonly make three or more KDC queries to determine that a guessed
> target realm doesn't exist within the local realm's federation.
>
> It would actually be nice to eliminate this support entirely, as it's
> a big source of complexity in the TGS request code.  But a more
> conservative first step is to turn it off and allow it to be turned
> back on.

Agree with the eventual goal.  Maybe it's just me, but I'm not yet comfortable with depending on referrals instead of the traditional realm/domain-walk.  Wouldn't want it turned off by default until Solaris 10 clients support referral tickets by default (which I haven't checked).

> 2. Speeding up the client retry loop, so that it doesn't take as long
> to time out when you're behind a firewall which black-holes port 88.
> Currently we wait one second per UDP address per pass (and per TCP
> address on the first pass), and also wait 2s/4s/8s/16s (or 30s in
> total) at the end of each pass.
>
> In order to be nice to KDC load, I think it's still prudent to wait
> one second per server address on the first pass.  After that we're
> mostly trying to be nice to the network, and networks have gotten much
> faster.  So I think once we reach the end of the first pass, we ought
> to speed everything up by a factor of ten--that is, wait only 100ms
> between UDP queries on the second and later passes, and wait
> 200ms/400ms/800ms/1600ms at the end of passes.

Personally, I'd rather you just eliminate the last pass (or two?).  I think what's important is that you try all the possibilities, you try them more than once, and you don't shorten the 1-second response-time requirement.  Beyond that it's kind of a matter of opinion.

> 3. Eliminate the second default UDP port (750) when parsing profile
> kdc entries.  When a KDC is inaccessible, this causes extra delays,
> and also extra DNS requests due to the way the code is structured.  We
> have always restricted the second default port to UDP over IPv4,
> likely because it was intended as a krb4 transition measure.
>
> Unfortunately, this change is likely to break a handful of deployments
> which happen to serve KDC requests only on port 750 and win because
> they only need it to work over IPv4 UDP (and don't have any Heimdal
> clients, or configure their Heimdal clients to use port 750
> explicitly).  I'm not sure if it's worth not breaking these
> environments at the cost of extra delays in more common cases.


I hope I don't wind up regretting this, given our AFS stuff, but I think this is a good idea.  Port 750 should go the way of Kerb 4.

The only time I've run into a related problem was with a firewall that allowed outbound TCP (to my KDC), but not UDP.  That caused some weird failures, since TCP wasn't always tried; the user (another NASA center) fixed it by fixing the firewall.  It had nothing to do with port numbers per se, but it did affect the retry logic.  Sorry, I don't remember the client platform/version.

------------------------------------------------------
The opinions expressed in this message are mine,
not those of Caltech, JPL, NASA, or the US Government.
[hidden email], or [hidden email]





Re: KDC query client performance

Roland C. Dowdeswell
In reply to this post by Greg Hudson
On Sun, Feb 13, 2011 at 04:52:24PM -0500, [hidden email] wrote:
>

> 2. Speeding up the client retry loop, so that it doesn't take as long
> to time out when you're behind a firewall which black-holes port 88.
> Currently we wait one second per UDP address per pass (and per TCP
> address on the first pass), and also wait 2s/4s/8s/16s (or 30s in
> total) at the end of each pass.
>
> In order to be nice to KDC load, I think it's still prudent to wait
> one second per server address on the first pass.  After that we're
> mostly trying to be nice to the network, and networks have gotten much
> faster.  So I think once we reach the end of the first pass, we ought
> to speed everything up by a factor of ten--that is, wait only 100ms
> between UDP queries on the second and later passes, and wait
> 200ms/400ms/800ms/1600ms at the end of passes.

It might be an idea to make the inter-KDC delay configurable.  In
certain environments, one can be pretty sure of the maximum delay one
is going to experience and tune this appropriately.  Although, it
might be one too many knobs.

To deal with firewalls that block UDP, perhaps the right answer might
not be to serially try all of the KDCs over UDP and then fail over to
TCP, but rather to interleave the requests in some fashion.  This
would reduce the delay in discovering UDP issues.
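
As a sketch of the ordering difference only (server names made up; not real transport code):

    #include <stdio.h>

    static const char *kdcs[] = { "kdc1", "kdc2", "kdc3" };
    static const int nkdcs = 3;

    int main(void)
    {
        int i;

        /* Roughly today: all of the KDCs over UDP, then fail over to TCP,
         * so a UDP black hole costs a timeout per server before any TCP
         * attempt is made. */
        printf("serial:      ");
        for (i = 0; i < nkdcs; i++)
            printf("udp:%s ", kdcs[i]);
        for (i = 0; i < nkdcs; i++)
            printf("tcp:%s ", kdcs[i]);
        printf("\n");

        /* Interleaved: alternate transports per server, so the working
         * transport is found after the first server at the latest. */
        printf("interleaved: ");
        for (i = 0; i < nkdcs; i++)
            printf("udp:%s tcp:%s ", kdcs[i], kdcs[i]);
        printf("\n");
        return 0;
    }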

Also, it might be a better idea in the longer term to write a little
daemon that runs as root, listens on a UNIX domain socket and
accepts requests from the krb5 libs to have conversations with
various KDCs.  The advantage of this would be that this daemon
could keep track of which KDCs are up and perhaps even keep track
of which ones answer the quickest (and are therefore likely the
closest), etc.

--
    Roland Dowdeswell                      http://Imrryr.ORG/~elric/

Re: KDC query client performance

Greg Hudson
In reply to this post by Henry B. Hotz
On Mon, 2011-02-14 at 13:14 -0500, Henry B. Hotz wrote:
> Agree with the eventual goal.  Maybe it's just me, but I'm not yet
> comfortable with depending on referrals instead of the traditional
> realm/domain-walk.  Wouldn't want it turned off by default until
> Solaris 10 clients support referral tickets by default (which I
> haven't checked).

We wouldn't be relying on service principal referrals.  We'd be relying
on the behavior where if you ask a KDC for krbtgt/OTHERREALM@LOCALREALM,
the KDC performs the realm walk internally (or uses its own capaths
configuration) and responds with an intermediate TGT.  That KDC logic
has been implemented since at least MIT krb5 1.1 and probably every
release of Heimdal and Active Directory.

I'm not sure how Solaris 10 client behavior would have an impact on this
anyway, since we're not talking about changing KDC logic.

To elaborate, let's say you're in the realm MY.HOME.REALM.COM and you
try to "ssh dialup.far.off.org", where far.off.org has never heard of
Kerberos and certainly has no mapping in your client's domain_realm
profile.  Currently we will try referrals first (so we'll query
host/dialup.far.off.org@LOCALREALM with referrals).

When that comes up empty we'll guess that the machine might be in the
FAR.OFF.ORG realm and query for krbtgt/[hidden email].
That part's great and we'll keep doing it.  What we don't want is for
the client to keep trying intermediate realm candidates:

  krbtgt/[hidden email]
  krbtgt/[hidden email]
  krbtgt/[hidden email]
  krbtgt/[hidden email]
  krbtgt/[hidden email]

These queries are all pointless; we have high confidence that the KDC
already searched for those principals when the client made its initial
krbtgt query.
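
As a sketch of where those candidates come from (illustration only, not the actual realm-walk code; the realm in the second component varies with which TGT the client already holds, so it is elided here):

    #include <stdio.h>
    #include <string.h>

    /*
     * Print the intermediate realms a hierarchy-walking client might try
     * between MY.HOME.REALM.COM and FAR.OFF.ORG: the parents of the local
     * realm on the way up, then back down toward the target realm.  The
     * real walk also honors [capaths] configuration.
     */
    int main(void)
    {
        const char *local = "MY.HOME.REALM.COM";
        const char *target = "FAR.OFF.ORG";
        const char *down[16];
        int n = 0, i;
        const char *p;

        for (p = strchr(local, '.'); p != NULL; p = strchr(p + 1, '.'))
            printf("krbtgt/%s@...\n", p + 1);       /* walking up */

        for (p = strchr(target, '.'); p != NULL; p = strchr(p + 1, '.'))
            down[n++] = p + 1;
        for (i = n - 1; i >= 0; i--)
            printf("krbtgt/%s@...\n", down[i]);     /* walking back down */

        return 0;
    }

In this example that's five extra KDC queries for a host that was never going to have a Kerberos realm in the first place.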



Re: KDC query client performance

Simo Sorce
In reply to this post by Roland C. Dowdeswell
On Mon, 14 Feb 2011 18:35:14 +0000
"Roland C. Dowdeswell" <[hidden email]> wrote:

> Also, it might be a better idea in the longer term to write a little
> daemon that runs as root, listens on a UNIX domain socket and
> accepts requests from the krb5 libs to have conversations with
> various KDCs.  The advantage of this would be that this daemon
> could keep track of which KDCs are up and perhaps even keep track
> of which ones answer the quickest (and are therefore likely the
> closest), etc.

You can do this separately by creating a locator plugin.
That's what we do with the SSSD project at least, so that the sssd
daemon does the discovery and just tells the krb5 libs what IP address
to use for the KDC.

Simo.

--
Simo Sorce * Red Hat, Inc * New York

Re: KDC query client performance

Henry B. Hotz
In reply to this post by Greg Hudson
Thanks for the clarification, especially of the client/KDC overlap.  That reduces my discomfort considerably.

On Feb 14, 2011, at 10:36 AM, Greg Hudson wrote:

> On Mon, 2011-02-14 at 13:14 -0500, Henry B. Hotz wrote:
>> Agree with the eventual goal.  Maybe it's just me, but I'm not yet
>> comfortable with depending on referrals instead of the traditional
>> realm/domain-walk.  Wouldn't want it turned off by default until
>> Solaris 10 clients support referral tickets by default (which I
>> haven't checked).
>
> We wouldn't be relying on service principal referrals.  We'd be relying
> on the behavior where if you ask a KDC for krbtgt/OTHERREALM@LOCALREALM,
> the KDC performs the realm walk internally (or uses its own capaths
> configuration) and responds with an intermediate TGT.  That KDC logic
> has been implemented since at least MIT krb5 1.1 and probably every
> release of Heimdal and Active Directory.
>
> I'm not sure how Solaris 10 client behavior would have an impact on this
> anyway, since we're not talking about changing KDC logic.
>
> To elaborate, let's say you're in the realm MY.HOME.REALM.COM and you
> try to "ssh dialup.far.off.org", where far.off.org has never heard of
> Kerberos and certainly has no mapping in your client's domain_realm
> profile.  Currently we will try referrals first (so we'll query
> host/dialup.far.off.org@LOCALREALM with referrals).
>
> When that comes up empty we'll guess that the machine might be in the
> FAR.OFF.ORG realm and query for krbtgt/[hidden email].
> That part's great and we'll keep doing it.  What we don't want is for
> the client to keep trying intermediate realm candidates:
>
>  krbtgt/[hidden email]
>  krbtgt/[hidden email]
>  krbtgt/[hidden email]
>  krbtgt/[hidden email]
>  krbtgt/[hidden email]
>
> These queries are all pointless; we have high confidence that the KDC
> already searched for those principals when the client made its initial
> krbtgt query.

------------------------------------------------------
The opinions expressed in this message are mine,
not those of Caltech, JPL, NASA, or the US Government.
[hidden email], or [hidden email]





Re: KDC query client performance

hartmans
In reply to this post by Simo Sorce
>>>>> "Simo" == Simo Sorce <[hidden email]> writes:

    Simo> On Mon, 14 Feb 2011 18:35:14 +0000
    Simo> "Roland C. Dowdeswell" <[hidden email]> wrote:

> Also, it might be a better idea in the longer term to write a little
    >> daemon that runs as root, listens on a UNIX domain socket and
    >> accepts requests from the krb5 libs to have conversations with
    >> various KDCs.  The advantage of this would be that this daemon
    >> could keep track of which KDCs are up and perhaps even keep track
    >> of which ones answer the quickest (and are therefore likely the
    >> closest), etc.

    Simo> You can do this separately by creating a locator plugin.
    Simo> That's what we do with the SSSD project at least, so that the
    Simo> sssd daemon does the discovery and just tells the krb5 libs
    Simo> what is the ip address to use for the KDC.

Yes, but I think that this is important enough to Kerberos performance
that someone should really do this separately from SSSD.  If you're
going to use SSSD, or some full infrastructure, you'll use their KDC
locator.  However, you really want this service.  All the time. Even if
you just want a Kerberos client.

Re: KDC query client performance

Roland C. Dowdeswell
On Mon, Feb 14, 2011 at 07:34:51PM -0500, Sam Hartman wrote:
>

> >>>>> "Simo" == Simo Sorce <[hidden email]> writes:
>
>     Simo> On Mon, 14 Feb 2011 18:35:14 +0000
>     Simo> "Roland C. Dowdeswell" <[hidden email]> wrote:
>
> > Also, it might be a better idea in the longer term to write a little
>     >> daemon that runs as root, listens on a UNIX domain socket and
>     >> accepts requests from the krb5 libs to have conversations with
>     >> various KDCs.  The advantage of this would be that this daemon
>     >> could keep track of which KDCs are up and perhaps even keep track
>     >> of which ones answer the quickest (and are therefore likely the
>     >> closest), etc.
>
>     Simo> You can do this separately by creating a locator plugin.
>     Simo> That's what we do with the SSSD project at least, so that the
>     Simo> sssd daemon does the discovery and just tells the krb5 libs
>     Simo> what is the ip address to use for the KDC.
>
> Yes, but I think that this is important enough to Kerberos performance
> that someone should really do this separately from SSSD.  If you're
> going to use SSSD, or some full infrastructure, you'll use their KDC
> locator.  However, you really want this service.  All the time. Even if
> you just want a Kerberos client.

I was considering writing something quite cheezy for this at work
that supported UDP only.  It would just be a UDP relay, more or less,
with a little logic about who is responding and how fast, and no
analysis of the contents of either the outgoing packets or the
replies.

This would have a couple of advantages:

        1.  you wouldn't need to understand the contents of the UDP
            packets at all; all you would need to do is time the
            responses, keep track of which KDCs were responding, and
            maintain a bit of an algorithm to keep that fresh, and

        2.  it would require no client-side support at all, just put

                kdc = 127.0.0.1:88

            in the realm section of /etc/krb5.conf and it would
            work.  This was important when I was considering it
            because I had to ensure that many different Kerberos
            libraries and versions would work properly, the main
            culprit being the Java JGSS libraries, whose failover
            behavior really wants a little help of this sort.
            Multiple realms could be supported by running UDP relays
            on multiple ports.  One could also put the regular KDCs
            after the ``kdc = localhost'' line, which would ensure
            that if the proxy crashed, the clients would behave
            almost identically to how they would have if it hadn't
            been written.

Effectively, it would be equivalent to dynamically putting the most
responsive KDC at the head of the KDC list at all times.
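
A bare-bones version of that relay might look something like the following (a sketch under many simplifying assumptions: a single hard-coded upstream KDC, one request in flight at a time, and no response-time bookkeeping yet; the upstream address is made up):

    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <poll.h>

    int main(void)
    {
        struct sockaddr_in local = { 0 }, kdc = { 0 }, client;
        socklen_t clen;
        unsigned char buf[4096];
        ssize_t len;
        int lsock, usock;

        /* Listen where the krb5 libs expect a KDC ("kdc = 127.0.0.1:88");
         * binding to port 88 requires root. */
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        local.sin_port = htons(88);
        lsock = socket(AF_INET, SOCK_DGRAM, 0);
        if (lsock < 0 || bind(lsock, (struct sockaddr *)&local,
                              sizeof(local)) < 0) {
            perror("bind 127.0.0.1:88");
            return 1;
        }

        /* The upstream KDC; a real relay would keep a list and reorder it
         * by observed response time. */
        kdc.sin_family = AF_INET;
        kdc.sin_port = htons(88);
        inet_pton(AF_INET, "192.0.2.1", &kdc.sin_addr);
        usock = socket(AF_INET, SOCK_DGRAM, 0);

        for (;;) {
            struct pollfd pfd = { usock, POLLIN, 0 };

            clen = sizeof(client);
            len = recvfrom(lsock, buf, sizeof(buf), 0,
                           (struct sockaddr *)&client, &clen);
            if (len < 0)
                continue;

            /* Forward the request verbatim; no need to parse it. */
            sendto(usock, buf, len, 0, (struct sockaddr *)&kdc, sizeof(kdc));

            /* Wait up to a second for the reply; timing this response is
             * where the "which KDC answers fastest" bookkeeping would go. */
            if (poll(&pfd, 1, 1000) > 0) {
                len = recv(usock, buf, sizeof(buf), 0);
                if (len > 0)
                    sendto(lsock, buf, len, 0,
                           (struct sockaddr *)&client, clen);
            }
        }
    }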

That said, we didn't really have a performance issue, so it wasn't
considered to be a priority.  It was interesting to consider for a
while, though.

Doing it the cheezy way described above provides the advantage that
any Kerberos client library can use it; however, it leaves a lot of
the performance enhancements that could be achieved on the table.  If
one took the approach of a UNIX domain socket over which the clients
have a more interesting conversation, letting the proxy know what
their intent is, then all sorts of things such as the xrealm graph
could be cached to avoid repeating the traversals mentioned at the
beginning of this thread.  Although at that point this would be more
like a new ccache type that operates over a local socket than a
simple KDC proxy.

There is a large advantage to be had from keeping a little bit of
state on the client hosts which can be shared amongst the various
client applications, especially if said state is [safely] shared
between users.

--
    Roland Dowdeswell                      http://Imrryr.ORG/~elric/

Re: KDC query client performance

Simo Sorce
In reply to this post by hartmans
On Mon, 14 Feb 2011 19:34:51 -0500
Sam Hartman <[hidden email]> wrote:

> >>>>> "Simo" == Simo Sorce <[hidden email]> writes:
>
>     Simo> On Mon, 14 Feb 2011 18:35:14 +0000
>     Simo> "Roland C. Dowdeswell" <[hidden email]> wrote:
>
> > Also, it might be a better idea in the longer term to write a little
>     >> daemon that runs as root, listens on a UNIX domain socket and
>     >> accepts requests from the krb5 libs to have conversations with
>     >> various KDCs.  The advantage of this would be that this daemon
>     >> could keep track of which KDCs are up and perhaps even keep
>     >> track of which ones answer the quickest (and are therefore
>     >> likely the closest), etc.
>
>     Simo> You can do this separately by creating a locator plugin.
>     Simo> That's what we do with the SSSD project at least, so that
>     Simo> the sssd daemon does the discovery and just tells the krb5
>     Simo> libs what is the ip address to use for the KDC.
>
> Yes, but I think that this is important enough to Kerberos performance
> that someone should really do this separately from SSSD.  If you're
> going to use SSSD, or some full infrastructure, you'll use their KDC
> locator.  However, you really want this service.  All the time. Even
> if you just want a Kerberos client.

Then it may be best to define a socket-based communication protocol so
that only one daemon at a time can do it (for consistency) and others
can provide the service without plugins piling up on one another.

Simo.

--
Simo Sorce * Red Hat, Inc * New York

Re: KDC query client performance

hartmans
That would be fine.
I think we can defer working on how to do that until someone other than
SSSD indicates they plan to implement it.

Re: KDC query client performance

hartmans
In reply to this post by Roland C. Dowdeswell
I agree with Simo: this probably wants to be a KDC location plugin, not
something in the socket transport path.
Hmm, except how do you actually track whether the KDC is up?

Well, we've wanted pluggable KDC transports for a while. :-)

Re: KDC query client performance

Nico-74
In reply to this post by Greg Hudson
On Sun, Feb 13, 2011 at 04:52:24PM -0500, [hidden email] wrote:

> 2. Speeding up the client retry loop, so that it doesn't take as long
> to time out when you're behind a firewall which black-holes port 88.
> Currently we wait one second per UDP address per pass (and per TCP
> address on the first pass), and also wait 2s/4s/8s/16s (or 30s in
> total) at the end of each pass.
>
> In order to be nice to KDC load, I think it's still prudent to wait
> one second per server address on the first pass.  After that we're
> mostly trying to be nice to the network, and networks have gotten much
> faster.  So I think once we reach the end of the first pass, we ought
> to speed everything up by a factor of ten--that is, wait only 100ms
> between UDP queries on the second and later passes, and wait
> 200ms/400ms/800ms/1600ms at the end of passes.

It'd be nice if the client could do a lightweight ping of multiple TGSes
in parallel...

For example, send a well-formed TGS-REQ to the first KDC and also send
less well-formed KDC-REQs to two other KDCs -- the latter should cause
the KDC to respond with a KRB-ERROR without wasting any compute
resources on crypto.  For example, the 'from' time in the not-well-
formed requests could be very far in the past.  By the time the first
request times out, the client will also know whether any of the other
KDCs are alive and thus not likely to time out.  If TGSes generally
validate the
PA-TGS before validating the KDC-REQ-BODY then either use an AS-REQ or
find a way to malform (if I may verbify the adjective) the PA-TGS so as
to produce a KRB-ERROR quickly.
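
A rough sketch of the parallel-probe mechanics (sockets and timing only: the payload here is a placeholder rather than a carefully malformed KDC-REQ, the addresses are made up, and real code would bound the total wait):

    #include <stdio.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <poll.h>

    #define NKDC 3

    int main(void)
    {
        const char *kdcs[NKDC] = { "192.0.2.1", "192.0.2.2", "192.0.2.3" };
        unsigned char probe[] = "placeholder for a malformed KDC-REQ";
        unsigned char buf[4096];
        struct pollfd pfds[NKDC];
        struct timeval start, now;
        int answered = 0, i;

        gettimeofday(&start, NULL);
        for (i = 0; i < NKDC; i++) {
            struct sockaddr_in sin = { 0 };

            sin.sin_family = AF_INET;
            sin.sin_port = htons(88);
            inet_pton(AF_INET, kdcs[i], &sin.sin_addr);
            pfds[i].fd = socket(AF_INET, SOCK_DGRAM, 0);
            pfds[i].events = POLLIN;
            /* Fire all probes at once; any KRB-ERROR that comes back tells
             * us the KDC is alive and how quickly it answers. */
            sendto(pfds[i].fd, probe, sizeof(probe), 0,
                   (struct sockaddr *)&sin, sizeof(sin));
        }

        /* Collect whatever comes back within a second or so. */
        while (answered < NKDC && poll(pfds, NKDC, 1000) > 0) {
            gettimeofday(&now, NULL);
            for (i = 0; i < NKDC; i++) {
                if (!(pfds[i].revents & POLLIN))
                    continue;
                recv(pfds[i].fd, buf, sizeof(buf), 0);
                printf("%s answered after %ld ms\n", kdcs[i],
                       (long)((now.tv_sec - start.tv_sec) * 1000 +
                              (now.tv_usec - start.tv_usec) / 1000));
                pfds[i].events = 0;         /* stop watching this one */
                answered++;
            }
        }
        return 0;
    }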

And, as Roland suggests, maybe there should be a client-local daemon
that pings KDCs in a similar fashion so as to maintain a locally cached
list of live KDCs (sorted by round-trip time in fastest-to-slowest
order).

> 3. Eliminate the second default UDP port (750) when parsing profile
> kdc entries.  When a KDC is inaccessible, this causes extra delays,
> and also extra DNS requests due to the way the code is structured.  We
> have always restricted the second default port to UDP over IPv4,
> likely because it was intended as a krb4 transition measure.
>
> Unfortunately, this change is likely to break a handful of deployments
> which happen to serve KDC requests only on port 750 and win because
> they only need it to work over IPv4 UDP (and don't have any Heimdal
> clients, or configure their Heimdal clients to use port 750
> explicitly).  I'm not sure if it's worth not breaking these
> environments at the cost of extra delays in more common cases.

I'm OK with this as long as people have enough warning about port 750
and/or there's a way to re-enable operation on port 750.

Nico
--

Re: KDC query client performance

Greg Hudson
On Tue, 2011-02-15 at 14:39 -0500, Nico wrote:
> I'm OK with this as long as people have enough warning about port 750
> and/or there's a way to re-enable operation on port 750.

Reenabling operation on port 750 is easy; you just specify hostname:750
in the profile string.  I'm not sure how to give any warning.



Re: KDC query client performance

Nico-74
In reply to this post by hartmans
On Mon, Feb 14, 2011 at 10:39:27PM -0500, Sam Hartman wrote:
> Hmm, except how do you actually track if the KDC is up?

You send bogus KDC requests and expect a KRB-ERROR back.  The bogus
request should be such that the KDC will not spend any compute resources
on crypto and will send a KRB-ERROR back.  My guess is that the simplest
such bogosity would be to send a really old from/till time in the
KDC-REQ-BODY.

Nico
--

Re: KDC query client performance

hartmans
In reply to this post by Greg Hudson
>>>>> "Greg" == Greg Hudson <[hidden email]> writes:

    Greg> On Tue, 2011-02-15 at 14:39 -0500, Nico wrote:
    >> I'm OK with this as long as people have enough warning about port
    >> 750 and/or there's a way to re-enable operation on port 750.

    Greg> Reenabling operation on port 750 is easy; you just specify
    Greg> hostname:750 in the profile string.  I'm not sure how to give
    Greg> any warning.


If we want to give warning, probably we should send mail to
kerberos-announce

Re: KDC query client performance

Nico Williams
In reply to this post by Greg Hudson
Ways to warn: post on the list, put a warning in the release notes of the
next release, ...

Nico
--
On Feb 15, 2011 2:49 PM, "Greg Hudson" <[hidden email]> wrote:
> On Tue, 2011-02-15 at 14:39 -0500, Nico wrote:
>> I'm OK with this as long as people have enough warning about port 750
>> and/or there's a way to re-enable operation on port 750.
>
> Reenabling operation on port 750 is easy; you just specify hostname:750
> in the profile string. I'm not sure how to give any warning.
>
>

Re: KDC query client performance

Greg Hudson
On Tue, 2011-02-15 at 15:08 -0500, Nico Williams wrote:
> Ways to warn: post on the list, put a warning in the release notes of
> the next release, ...

Ah, yes, of course.  I was imagining something the software would do
when it wound up using the port 750 fallback, which would be tricky
since krb5_sendto_kdc can't exactly pop up an alert window.



Re: KDC query client performance

Henry B. Hotz
In reply to this post by Greg Hudson

On Feb 16, 2011, at 8:48 AM, [hidden email] wrote:

> It'd be nice if the client could do a lightweight ping of multiple TGSes
> in parallel...


. . . that didn't open the KDC up to a ping-pong attack.  I'm sure there's some value of "less well-formed" which satisfies this, but I wonder if the issue isn't complex enough to need standards action rather than merely an implementation choice.
------------------------------------------------------
The opinions expressed in this message are mine,
not those of Caltech, JPL, NASA, or the US Government.
[hidden email], or [hidden email]



