kadmin: failing dump/load

kadmin: failing dump/load

Patrik Lundin-2
Hello,

I have noticed a problem with "kadmin -l dump" and "kadmin -l load":
when a principal has a long list of previous passwords, it will
eventually fail to be restored:

===
root@kdc-lab-master1:~# kadmin -l get testprinc
            Principal: [hidden email]
    Principal expires: never
     Password expires: never
 Last password change: 2017-11-06 12:07:27 UTC
      Max ticket life: 1 day
   Max renewable life: 1 week
                 Kvno: 1
                Mkvno: unknown
Last successful login: never
    Last failed login: never
   Failed login count: 0
        Last modified: 2017-11-06 12:07:27 UTC
             Modifier: kadmin/[hidden email]
           Attributes:
             Keytypes: aes256-cts-hmac-sha1-96(pw-salt)[1], des3-cbc-sha1(pw-salt)[1], arcfour-hmac-md5(pw-salt)[1]
          PK-INIT ACL:
              Aliases:
===

A dump and load works without complaints:
===
root@kdc-lab-master1:~# kadmin -l dump hdb-backup
root@kdc-lab-master1:~# kadmin -l load hdb-backup
===

After changing the password once, everything is still fine:
===
root@kdc-lab-master1:~# kadmin -l cpw -p test --keepold testprinc
root@kdc-lab-master1:~# kadmin -l get testprinc
            Principal: [hidden email]
    Principal expires: never
     Password expires: never
 Last password change: 2017-11-06 16:25:57 UTC
      Max ticket life: 1 day
   Max renewable life: 1 week
                 Kvno: 2
                Mkvno: unknown
Last successful login: never
    Last failed login: never
   Failed login count: 0
        Last modified: 2017-11-06 16:25:57 UTC
             Modifier: kadmin/[hidden email]
           Attributes:
             Keytypes: aes256-cts-hmac-sha1-96(pw-salt)[2], des3-cbc-sha1(pw-salt)[2], arcfour-hmac-md5(pw-salt)[2], aes256-cts-hmac-sha1-96(pw-salt)[1], des3-cbc-sha1(pw-salt)[1], arcfour-hmac-md5(pw-salt)[1]
          PK-INIT ACL:
              Aliases:
===

A dump and load still works:
===
root@kdc-lab-master1:~# kadmin -l dump hdb-backup
root@kdc-lab-master1:~# kadmin -l load hdb-backup
===

However, after doing the password change a few more times, the restore
will break. For me the magic number seems to be kvno 12. Here is an example
just before it breaks:
===
root@kdc-lab-master1:~# kadmin -l cpw -p test --keepold testprinc
[...]
root@kdc-lab-master1:~# kadmin -l get testprinc
            Principal: [hidden email]
    Principal expires: never
     Password expires: never
 Last password change: 2017-11-06 16:28:39 UTC
      Max ticket life: 1 day
   Max renewable life: 1 week
                 Kvno: 11
                Mkvno: unknown
Last successful login: never
    Last failed login: never
   Failed login count: 0
        Last modified: 2017-11-06 16:28:39 UTC
             Modifier: kadmin/[hidden email]
           Attributes:
             Keytypes: aes256-cts-hmac-sha1-96(pw-salt)[11], des3-cbc-sha1(pw-salt)[11], arcfour-hmac-md5(pw-salt)[11], aes256-cts-hmac-sha1-96(pw-salt)[1], des3-cbc-sha1(pw-salt)[1], arcfour-hmac-md5(pw-salt)[1], aes256-cts-hmac-sha1-96(pw-salt)[2], des3-cbc-sha1(pw-salt)[2], arcfour-hmac-md5(pw-salt)[2], aes256-cts-hmac-sha1-96(pw-salt)[3], des3-cbc-sha1(pw-salt)[3], arcfour-hmac-md5(pw-salt)[3], aes256-cts-hmac-sha1-96(pw-salt)[4], des3-cbc-sha1(pw-salt)[4], arcfour-hmac-md5(pw-salt)[4], aes256-cts-hmac-sha1-96(pw-salt)[5], des3-cbc-sha1(pw-salt)[5], arcfour-hmac-md5(pw-salt)[5], aes256-cts-hmac-sha1-96(pw-salt)[6], des3-cbc-sha1(pw-salt)[6], arcfour-hmac-md5(pw-salt)[6], aes256-cts-hmac-sha1-96(pw-salt)[7], des3-cbc-sha1(pw-salt)[7], arcfour-hmac-md5(pw-salt)[7], aes256-cts-hmac-sha1-96(pw-salt)[8], des3-cbc-sha1(pw-salt)[8], arcfour-hmac-md5(pw-salt)[8], aes256-cts-hmac-sha1-96(pw-salt)[9], des3-cbc-sha1(pw-salt)[9], arcfour-hmac-md5(pw-salt)[9], aes256-cts-hmac-sha1-96(pw-salt)[10], des3-cbc-sha1(pw-salt)[10], arcfour-hmac-m
          PK-INIT ACL:
              Aliases:
===

... and a dump/load works:
===
root@kdc-lab-master1:~# kadmin -l dump hdb-backup
root@kdc-lab-master1:~# kadmin -l load hdb-backup
===

However, doing one additional increment will push it past the breaking point:
===
root@kdc-lab-master1:~# kadmin -l cpw -p test --keepold testprinc
root@kdc-lab-master1:~# kadmin -l get testprinc
            Principal: [hidden email]
    Principal expires: never
     Password expires: never
 Last password change: 2017-11-06 16:32:35 UTC
      Max ticket life: 1 day
   Max renewable life: 1 week
                 Kvno: 12
                Mkvno: unknown
Last successful login: never
    Last failed login: never
   Failed login count: 0
        Last modified: 2017-11-06 16:32:35 UTC
             Modifier: kadmin/[hidden email]
           Attributes:
             Keytypes: aes256-cts-hmac-sha1-96(pw-salt)[12], des3-cbc-sha1(pw-salt)[12], arcfour-hmac-md5(pw-salt)[12], aes256-cts-hmac-sha1-96(pw-salt)[1], des3-cbc-sha1(pw-salt)[1], arcfour-hmac-md5(pw-salt)[1], aes256-cts-hmac-sha1-96(pw-salt)[2], des3-cbc-sha1(pw-salt)[2], arcfour-hmac-md5(pw-salt)[2], aes256-cts-hmac-sha1-96(pw-salt)[3], des3-cbc-sha1(pw-salt)[3], arcfour-hmac-md5(pw-salt)[3], aes256-cts-hmac-sha1-96(pw-salt)[4], des3-cbc-sha1(pw-salt)[4], arcfour-hmac-md5(pw-salt)[4], aes256-cts-hmac-sha1-96(pw-salt)[5], des3-cbc-sha1(pw-salt)[5], arcfour-hmac-md5(pw-salt)[5], aes256-cts-hmac-sha1-96(pw-salt)[6], des3-cbc-sha1(pw-salt)[6], arcfour-hmac-md5(pw-salt)[6], aes256-cts-hmac-sha1-96(pw-salt)[7], des3-cbc-sha1(pw-salt)[7], arcfour-hmac-md5(pw-salt)[7], aes256-cts-hmac-sha1-96(pw-salt)[8], des3-cbc-sha1(pw-salt)[8], arcfour-hmac-md5(pw-salt)[8], aes256-cts-hmac-sha1-96(pw-salt)[9], des3-cbc-sha1(pw-salt)[9], arcfour-hmac-md5(pw-salt)[9], aes256-cts-hmac-sha1-96(pw-salt)[10], des3-cbc-sha1(pw-salt)[10], arcfour-hmac-m
          PK-INIT ACL:
              Aliases:
===

While the principal can still be displayed with kadmin (and authenticated to with kinit), the dump and load will fail:
===
root@kdc-lab-master1:~# kadmin -l load hdb-backup
hdb-backup:2:error parsing extension (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX)
hdb-backup:3:error parsing keys ()
===

... and the principal is missing from the restored database:
===
root@kdc-lab-master1:~# kadmin -l get testprinc
kadmin: get testprinc: Principal does not exist
===

Line 2 above matches the testprinc principal (and the XXXXX part matches
part of the string in the dump file). I am not sure why the problem
seems to leak over to line 3; I have actually seen it cause errors on
up to 3 follow-up lines.
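One guess (my own, not something I have confirmed in the Heimdal source) is a fixed-size line buffer in the loader: a record longer than the buffer comes back in pieces, and each leftover piece is then parsed as a further bogus record, so the errors spill onto the following lines. A toy model, not the actual Heimdal parser:

```python
def read_records_fixed_buffer(dump, bufsize):
    """Toy model of a loader that reads 'lines' through a fixed-size
    buffer: a record longer than bufsize is split, and its tail shows
    up as extra bogus records on the following apparent lines."""
    records, buf = [], []
    for ch in dump:
        if ch == "\n" or len(buf) == bufsize:
            # Buffer full or end of line: emit what we have as a "record".
            records.append("".join(buf))
            buf = []
            if ch != "\n":
                buf.append(ch)
        else:
            buf.append(ch)
    if buf:
        records.append("".join(buf))
    return records

dump = "short-record\n" + "X" * 100 + "\nanother-record\n"
# With a 64-byte buffer, the 100-byte record splits into two bogus
# records, so parse errors appear on more than one apparent line.
print(read_records_fixed_buffer(dump, 64))
```

Under this model, one over-long entry in the dump would produce exactly the pattern above: an error on its own line plus errors on one or more follow-up "lines" made from its tail.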

You might wonder "why would anyone ever use --keepold with kadmin like
this" and while you will probably not do that, the bigger problem is
that kpasswdd does this unconditionally (I asked about that on an
unrelated note a while back but it went unanswered:
http://www.h5l.org/pipermail/heimdal-discuss/2016-October/000047.html).

This means backups of the database will become useless when users change
their password via kpasswdd one too many times.

I have not yet done a deep dive into this, but I can see three places
where the problem might live: the internal heimdal.db format, the
kadmin dump command, or the kadmin load command. The fact that I am
able to kinit with the account (prior to the dump/load where it
disappears) makes me want to zero in on the kadmin code.

--
Patrik Lundin

Re: kadmin: failing dump/load

Patrik Lundin-2
On 2017-11-06 17:55:05, Patrik Lundin wrote:
>
> While it can still be displayed with kadmin (and authenticated to with kinit) the dump and load will fail:
> ===
> root@kdc-lab-master1:~# kadmin -l load hdb-backup
> hdb-backup:2:error parsing extension (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX)
> hdb-backup:3:error parsing keys ()
> ===
>

Just a quick follow-up:
To be clear, the above example was missing the preceding "kadmin -l
dump hdb-backup". I should also point out that the dump operation gives no
indication that something went wrong, so either the problem lies with the
load code, or the problem is not properly detected during the dump.

This of course means there is no heads-up that you might be generating
backup files where certain principals cannot be restored. The problem
will not show itself until you actually perform a load.

--
Patrik Lundin

Re: kadmin: failing dump/load

Viktor Dukhovni-2


> On Nov 6, 2017, at 12:20 PM, Patrik Lundin <[hidden email]> wrote:
>
> On 2017-11-06 17:55:05, Patrik Lundin wrote:
>>
>> While it can still be displayed with kadmin (and authenticated to with kinit) the dump and load will fail:
>> ===
>> root@kdc-lab-master1:~# kadmin -l load hdb-backup
>> hdb-backup:2:error parsing extension (XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX)
>> hdb-backup:3:error parsing keys ()
>> ===
>>
>
> Just a quick follow-up:
> To be clear, the above example was missing the preceeding "kadmin -l
> dump hdb-backup". I should also point out that the dump operation gives no
> indiciation that something went wrong, so either that indicates that the
> problem lies with the load code, or it means the problem is not properly
> detected.
>
> This of course means there is no heads-up that you might be generating
> backup files where certain principals can not be restored. The problem
> will not show itself until you are actually performing a load.

See:

https://github.com/heimdal/heimdal/commit/d2130e3312089a3142e89b316d2900fa21855726

I'd also like to recommend the "prune-key-history" setting...

--
        Viktor.


Re: kadmin: failing dump/load

Patrik Lundin-2
On 2017-11-06 13:08:25, Viktor Dukhovni wrote:
>
> See:
>
> https://github.com/heimdal/heimdal/commit/d2130e3312089a3142e89b316d2900fa21855726
>

Interesting! I notice that patch does not apply to 7.4.0 as-is. Is there
a new release planned soon, or am I better off spending time getting it
to apply to the current release?

>
> I'd also like to recommend the "prune-key-history" setting...
>

I have completely missed that functionality, thanks for pointing it out.
After testing it for a bit I have a few questions:

The documentation and commit messages only mention that keys are pruned
when you add new keys, but I noticed that doing a "get" with kadmin will
also prune keys (because it is called in kadm5_s_get_principal()). Since
I have traditionally seen "get" as a read-only operation, maybe this
should be mentioned? It means you cannot inspect the database (short of
dumping it with "kadmin -l dump") without possibly altering it, which
might not be expected (though I do see the helpful side of being able to
easily prune keys on demand).

Additionally, I see that the prune code basically figures out the most
recent keyset timestamp that is older than the maximum ticket lifetime
and then removes all keysets with a timestamp older than that one. This
means that the most recent too-old keyset is kept, but since it was
calculated to be older than max-life, shouldn't it be removed as well?
As it stands, at least one keyset too old to be useful will always
remain (more, if several keysets share the same timestamp).

Would it make sense to alter the code to also remove that most recent
(but too old) keyset? The only functional difference of the diff below
is the "<" to "<=" change at the bottom, but it felt like it would
warrant a more fitting variable name as well:

===
diff --git a/lib/hdb/keys.c b/lib/hdb/keys.c
index 96c221e..44de789 100644
--- a/lib/hdb/keys.c
+++ b/lib/hdb/keys.c
@@ -237,7 +237,7 @@ hdb_prune_keys(krb5_context context, hdb_entry *entry)
                              "kadmin", "prune-key-history", NULL)) {
         hdb_keyset *elem;
         time_t ceiling = time(NULL) - *entry->max_life;
-        time_t keep_time = 0;
+        time_t prune_time = 0;
         size_t i;
 
         /*
@@ -247,15 +247,15 @@ hdb_prune_keys(krb5_context context, hdb_entry *entry)
         for (i = 0; i < nelem; ++i) {
             elem = &keys->val[i];
             if (elem->set_time && *elem->set_time < ceiling
-                && (keep_time == 0 || *elem->set_time > keep_time))
-                keep_time = *elem->set_time;
+                && (prune_time == 0 || *elem->set_time > prune_time))
+                prune_time = *elem->set_time;
         }
 
         /* Drop obsolete entries */
-        if (keep_time) {
+        if (prune_time) {
             for (i = 0; i < nelem; /* see below */) {
                 elem = &keys->val[i];
-                if (elem->set_time && *elem->set_time < keep_time) {
+                if (elem->set_time && *elem->set_time <= prune_time) {
                     remove_HDB_Ext_KeySet(keys, i);
                     /*
                      * Removing the i'th element shifts the tail down, continue
===

I can open a PR at github if this makes sense to you. I have been able to test
it successfully against 7.4.0 at least.

--
Patrik Lundin

Re: kadmin: failing dump/load

Patrik Lundin-2
On 2017-11-07 10:55:08, Patrik Lundin wrote:
>
> Would it make sense to alter the code to also remove that most recent
> (but too old) keyset? The only functional difference of the diff below
> is the "<" to "<=" change at the bottom, but it felt like it would
> warrant a more fitting variable name as well:
>

I now realize that this patch alone is not good enough.

Doing some more tests, I noticed that if I run with this patch and do a
"cpw" of an existing principal with --keepold, it will instantly prune
the newly archived keyset if the "Last password change" field was
older than the maximum ticket lifetime of the keys being rotated.

The reason for this instant pruning is that
hdb_add_current_keys_to_history() called as a part of "cpw" will use the
timestamp recorded in "Last password change" as the timestamp written to
the archived keyset, and then call hdb_prune_keys() which will see this
timestamp as too old and directly discard it.

This makes me wonder if the timestamp in an archived keyset should
reflect the "Last password change" field of the key it represents, or if
the timestamp should instead reflect the point in time that it got
archived. If the latter one makes sense that would probably make the
earlier mentioned patch work as expected since it would then reflect the
last time it could have been used to issue tickets.
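To illustrate the difference between the two semantics, here is a toy model (my own simplification, not the actual hdb_prune_keys() code) of the pruning ceiling:

```python
DAY = 24 * 3600

def instantly_prunable(archive_stamp, now, max_life):
    """Simplified pruning ceiling: a keyset whose timestamp is older
    than (now - max_life) is a pruning candidate the moment it is
    archived."""
    return archive_stamp < now - max_life

now = 1_000_000 * DAY
last_pw_change = now - 30 * DAY   # password last changed a month ago
max_life = 1 * DAY                # max ticket life of 1 day

# Stamping the archived keyset with "Last password change": it already
# looks 30 days old, so it is a pruning candidate immediately.
print(instantly_prunable(last_pw_change, now, max_life))  # True

# Stamping it with the time of archival instead: it survives for a
# full max_life after being rotated out.
print(instantly_prunable(now, now, max_life))  # False
```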

An updated patch that seems to work better:
===
diff --git a/lib/hdb/keys.c b/lib/hdb/keys.c
index 96c221e..81468ff 100644
--- a/lib/hdb/keys.c
+++ b/lib/hdb/keys.c
@@ -237,7 +237,7 @@ hdb_prune_keys(krb5_context context, hdb_entry *entry)
                              "kadmin", "prune-key-history", NULL)) {
         hdb_keyset *elem;
         time_t ceiling = time(NULL) - *entry->max_life;
-        time_t keep_time = 0;
+        time_t prune_time = 0;
         size_t i;
 
         /*
@@ -247,15 +247,15 @@ hdb_prune_keys(krb5_context context, hdb_entry *entry)
         for (i = 0; i < nelem; ++i) {
             elem = &keys->val[i];
             if (elem->set_time && *elem->set_time < ceiling
-                && (keep_time == 0 || *elem->set_time > keep_time))
-                keep_time = *elem->set_time;
+                && (prune_time == 0 || *elem->set_time > prune_time))
+                prune_time = *elem->set_time;
         }
 
         /* Drop obsolete entries */
-        if (keep_time) {
+        if (prune_time) {
             for (i = 0; i < nelem; /* see below */) {
                 elem = &keys->val[i];
-                if (elem->set_time && *elem->set_time < keep_time) {
+                if (elem->set_time && *elem->set_time <= prune_time) {
                     remove_HDB_Ext_KeySet(keys, i);
                     /*
                      * Removing the i'th element shifts the tail down, continue
@@ -309,9 +309,7 @@ hdb_add_current_keys_to_history(krb5_context context, hdb_entry *entry)
     /*
      * Copy in newest old keyset
      */
-    ret = hdb_entry_get_pw_change_time(entry, &newtime);
-    if (ret)
-        goto out;
+    time(&newtime);
 
     memset(&newkeyset, 0, sizeof(newkeyset));
     newkeyset.keys = entry->keys;
===

This of course changes the semantics of what an archived keyset
contains, and maybe there are other parts of the ecosystem that would
get confused by this.

--
Patrik Lundin

Re: kadmin: failing dump/load

Roland C. Dowdeswell-2
In reply to this post by Patrik Lundin-2
On Tue, Nov 07, 2017 at 10:55:08AM +0100, Patrik Lundin wrote:
>

> Additionally, I see that the prune code basically figures out the most
> recent keyset timestamp that is older than the ticket maximum lifetime and
> then removes all keysets with a timestamp older than that one. This
> means that this too old but most recent keyset will be kept, but since
> it was calculated to be older than max-lifetime, shouldnt it be removed
> as well? As it stands now there will always remain at least one keyset
> (depending on if there are more than one keyset with the same timestamp)
> too old to be useful.
>
> Would it make sense to alter the code to also remove that most recent
> (but too old) keyset? The only functional difference of the diff below
> is the "<" to "<=" change at the bottom, but it felt like it would
> warrant a more fitting variable name as well:

It isn't too old, however.  What we are trying to achieve here is
that we keep all of the keys which might have outstanding tickets in
someone's ccache.

An example is illustrative:

        KDC has kvno 1 it was set ten weeks ago.
        You rotate the service keys.
        The KDC now has kvno 2 and it was issued seconds ago.
        You rotate the service keys.
        The KDC now has kvno 3 and it was issued seconds ago.

Should the KDC now clean up kvno 1 because it was set ten weeks ago?
No, because it was valid up until the time that kvno 2 was set which is
a few seconds ago.  And there may be outstanding service tickets for
kvno 1 that live for another 10 hours.  And so, you must keep kvno 1
until kvno 2 is at least as old as the max ticket lifetime.
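The retention rule described above can be modelled as: keep every keyset set within the last max-life seconds, plus the single newest keyset older than that ceiling, since it was the current key until the next one was set and may still back live tickets. A rough sketch (my own helper, not the Heimdal code):

```python
def keysets_to_keep(set_times, now, max_life):
    """Return, sorted, the set_times of keysets to retain.

    Keep every keyset set within the last max_life seconds, plus the
    newest keyset older than that ceiling: it was the current key until
    the next one was set, so tickets issued under it may still be
    outstanding.
    """
    ceiling = now - max_life
    # Newest keyset strictly older than the ceiling, if any.
    older = [t for t in set_times if t < ceiling]
    keep_boundary = max(older) if older else None
    return sorted(t for t in set_times if t >= ceiling or t == keep_boundary)

# The example above: kvno 1 set ten weeks ago, kvnos 2 and 3 seconds ago.
WEEK = 7 * 24 * 3600
now = 100 * WEEK
times = [now - 10 * WEEK, now - 5, now - 2]   # kvno 1, 2, 3
# With a 1-day max ticket life, kvno 1 is still kept: it was the current
# key until seconds ago, so day-long tickets under it may be outstanding.
print(keysets_to_keep(times, now, max_life=24 * 3600))
```

Only once a *newer* keyset has itself aged past max-life does the older one actually get dropped.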

--
    Roland C. Dowdeswell

Re: kadmin: failing dump/load

Henry B Hotz
In reply to this post by Patrik Lundin-2
+1

> On Nov 7, 2017, at 1:55 AM, Patrik Lundin <[hidden email]> wrote:
>
> This means that you can not inspect the database
> (short of dumping it with kadmin -l dump) without possibly altering it
> which might not be expected (though I do see the helpful side of being
> able to easily prune keys on demand).

Personal email.  [hidden email]




Re: kadmin: failing dump/load

Viktor Dukhovni-2


> On Nov 12, 2017, at 9:08 AM, Henry B (Hank) Hotz, CISSP <[hidden email]> wrote:
>
>> This means that you can not inspect the database
>> (short of dumping it with kadmin -l dump) without possibly altering it
>> which might not be expected (though I do see the helpful side of being
>> able to easily prune keys on demand).

The "get" functions DO NOT have side-effects of truncating key history,
that only happens on "set".  The "get" functions may return a proper
subset of the stored keys, but the stored entry remains unchanged.

--
        Viktor.


Re: kadmin: failing dump/load

Patrik Lundin-2
In reply to this post by Roland C. Dowdeswell-2
On 2017-11-07 11:21:45, Roland C. Dowdeswell wrote:

> On Tue, Nov 07, 2017 at 10:55:08AM +0100, Patrik Lundin wrote:
> >
>
> > Additionally, I see that the prune code basically figures out the most
> > recent keyset timestamp that is older than the ticket maximum lifetime and
> > then removes all keysets with a timestamp older than that one. This
> > means that this too old but most recent keyset will be kept, but since
> > it was calculated to be older than max-lifetime, shouldnt it be removed
> > as well? As it stands now there will always remain at least one keyset
> > (depending on if there are more than one keyset with the same timestamp)
> > too old to be useful.
> >
> > Would it make sense to alter the code to also remove that most recent
> > (but too old) keyset? The only functional difference of the diff below
> > is the "<" to "<=" change at the bottom, but it felt like it would
> > warrant a more fitting variable name as well:
>
> It isn't too old, however.  What we are trying to achieve here is
> that we keep all of the keys which might have outstanding tickets in
> someone's ccache.
>

Ah, I was confused by the "get" calls censoring the returned entries.
I was under the impression that hdb_prune_keys() actually modified the
database; I understand now that it only acts as a filter, and that the
actual modification of the database is performed separately when a new
key is set.

It makes sense to leave at least one trailing keyset if they are only trimmed in
conjunction with the principal getting a new key.

Sorry for the confusion on the list!

--
Patrik Lundin

Re: kadmin: failing dump/load

Patrik Lundin-2
In reply to this post by Viktor Dukhovni-2
On 2017-11-12 17:17:50, Viktor Dukhovni wrote:
>
> The "get" functions DO NOT have side-effects of truncating key history,
> that only happens on "set".  The "get" functions may return a proper
> subset of the stored keys, but the stored entry remains unchanged.
>

Thanks for setting the record straight, and sorry for the confusion on
my part.

Maybe it is just me, but I was surprised that "get" does not reflect the
actual contents of the database.

--
Patrik Lundin

Re: kadmin: failing dump/load

Viktor Dukhovni-2


> On Nov 13, 2017, at 12:50 PM, Patrik Lundin <[hidden email]> wrote:
>
> Maby it is just me but I was surprised to have "get" not reflect the
> actual contents in the database.

IIRC that was needed to reliably get pruning to work, since some
kadmin clients add keys by doing get + add + set.  We also did not
want users with "get" privileges on the key values to get old keys
that can only be used to compromise captured traffic, and are not
needed for live keytab files.  Perhaps I am not recalling the
details quite right, but it made sense at the time...

--
        Viktor.


Re: kadmin: failing dump/load

Patrik Lundin-2
In reply to this post by Patrik Lundin-2
On 2017-11-07 10:55:08, Patrik Lundin wrote:

> On 2017-11-06 13:08:25, Viktor Dukhovni wrote:
> >
> > See:
> >
> > https://github.com/heimdal/heimdal/commit/d2130e3312089a3142e89b316d2900fa21855726
> >
>
> Interesting! I notice that patch does not apply to 7.4.0 as-is, is there
> a new release planned soon or am I better off spending time getting it
> to apply to the current release?
>

Just a quick follow-up regarding this: I went ahead and investigated how
tricky it would be to apply that fix to 7.4.0, and it turned out to be
fairly straightforward.

The main complication was that hdb_set_sync() was introduced prior to
the fix; including it as well, I ended up with this set of patches:
===
e3cc7dfb94826aa709a617fb8e59ab07db584f92: Set def page size for HDB SQLite3 backend to 8KB
d6979fcc40e00fd6ddd393c724cd728349b86de8: Allow LDAP modifications for entry creation
5bcbe2125b18160f6ad348b15f8036ffedc15770: Add hdb_set_sync() method
7d5f8bb051ca84592d1196bf5d5522da5a50f9d6: Disable sync during kadmin load
d2130e3312089a3142e89b316d2900fa21855726: Handle long lines in dump files
===

Looking around, it seems it might be suitable to also add
305dc816525f461f9bfe640d87f671f53f0e0fc6 (Disable sync during iprop
receive_everything()), but I am trying to keep the customization to
a minimum.

Applying them in the listed order meant there was no patch fuzz, and I just now
managed to test a load of a large database dump that was previously failing. It
now finished without complaints, and a follow-up dump produced a file with a
sha256 hash matching the original backup file. Thanks a bunch for fixing the
problem.

--
Patrik Lundin