Overview

WAL-G

This documentation is also available at wal-g.readthedocs.io

WAL-G is an archival restoration tool for PostgreSQL, MySQL/MariaDB, and MS SQL Server (beta for MongoDB and Redis).

WAL-G is the successor of WAL-E with a number of key differences. WAL-G uses LZ4, LZMA, or Brotli compression, multiple processors, and non-exclusive base backups for Postgres. More information on the original design and implementation of WAL-G can be found on the Citus Data blog post "Introducing WAL-G by Citus: Faster Disaster Recovery for Postgres".

Installation

A precompiled binary of the latest WAL-G release for Linux (amd64) can be obtained under the Releases tab.

To extract and install the binary, use:

tar -zxvf wal-g.linux-amd64.tar.gz
mv wal-g /usr/local/bin/

For systems without a precompiled binary, please consult the Development section for more information.

Configuration

Storage

To configure where WAL-G stores backups, please consult the Storages section.

Compression

  • WALG_COMPRESSION_METHOD

To configure the compression method used for backups. Possible options are: lz4, lzma, brotli. The default method is lz4. LZ4 is the fastest method, but its compression ratio is the worst of the three. LZMA is much slower, but it compresses backups about 6 times better than LZ4. Brotli is a good trade-off between speed and compression ratio, compressing about 3 times better than LZ4.
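For example, the compression method can be selected with an environment variable before running a backup (the data directory path below is a placeholder for your own setup):

```shell
# Select Brotli compression for subsequent backups (default: lz4).
export WALG_COMPRESSION_METHOD=brotli

# Placeholder data directory; substitute your own cluster's path.
wal-g backup-push /var/lib/postgresql/data
```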

Encryption

  • YC_CSE_KMS_KEY_ID

To configure Yandex Cloud KMS key for client-side encryption and decryption. By default, no encryption is used.

  • YC_SERVICE_ACCOUNT_KEY_FILE

To configure the name of a file containing the private key of a Yandex Cloud service account. If not set, a token from the metadata service (http://169.254.169.254) will be used to make API calls to Yandex Cloud KMS.
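A minimal client-side encryption setup with Yandex Cloud KMS might look like this; both values are placeholders for your own key ID and key file:

```shell
# Encrypt backups client-side with a Yandex Cloud KMS key (placeholder value).
export YC_CSE_KMS_KEY_ID="<your-kms-key-id>"
# Optional: authenticate with a service account key file instead of
# relying on the metadata service (placeholder path).
export YC_SERVICE_ACCOUNT_KEY_FILE=/etc/wal-g/sa-key.json
```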

  • WALG_LIBSODIUM_KEY

To configure encryption and decryption with libsodium. WAL-G uses an algorithm that only requires a secret key.

  • WALG_LIBSODIUM_KEY_PATH

Similar to WALG_LIBSODIUM_KEY, but the value is the path to the key on the file system. Whitespace characters will be trimmed from the file content.

  • WALG_GPG_KEY_ID (alternative form WALE_GPG_KEY_ID) ⚠️ DEPRECATED

To configure the GPG key for encryption and decryption. By default, no encryption is used. The public keyring is cached in the file "/.walg_key_cache".

  • WALG_PGP_KEY

To configure encryption and decryption with the OpenPGP standard. You can join a multiline key into one line using \n symbols (mostly used with daemontools and envdir). Set the private key value when you need to execute the wal-fetch or backup-fetch command. Set the public key value when you need to execute the wal-push or backup-push command. Keep in mind that the private key also contains the public key.
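One way to join a multiline armored key into a single line with literal \n separators (e.g., for envdir) is sketched below; the key file name is a placeholder:

```shell
# Join a multiline PGP key into one line, replacing real newlines with
# literal "\n" sequences, as expected when WALG_PGP_KEY is set via
# envdir/daemontools. "private-key.asc" is a placeholder file name.
WALG_PGP_KEY="$(awk 'BEGIN { ORS = "\\n" } { print }' private-key.asc)"
export WALG_PGP_KEY
```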

  • WALG_PGP_KEY_PATH

Similar to WALG_PGP_KEY, but the value is the path to the key on the file system.

  • WALG_PGP_KEY_PASSPHRASE

If your private key is encrypted with a passphrase, you should set the passphrase for decryption.

Database-specific options

More options are available for each supported database. See the Databases section.

Usage

WAL-G currently supports these commands for all types of databases:

backup-list

Lists the names and creation times of available backups.

--pretty flag prints the list as a table

--json flag prints the list in JSON format, pretty-printed if combined with --pretty

--detail flag prints extra backup details, pretty-printed if combined with --pretty, JSON-encoded if combined with --json
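Combining the flags described above, typical invocations look like this (assuming wal-g is installed and storage is configured):

```shell
# Human-readable table of available backups.
wal-g backup-list --pretty

# Machine-readable listing with extra backup details.
wal-g backup-list --detail --json
```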

delete

Used to delete backups and the WALs preceding them. By default, delete performs a dry run. To execute the deletion, add the --confirm flag at the end of the command. Backups marked as permanent will not be deleted.
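For example, a retention run is only a dry run until --confirm is appended:

```shell
# Dry run: prints what would be deleted, removes nothing.
wal-g delete retain FULL 5

# The same command with --confirm actually performs the deletion.
wal-g delete retain FULL 5 --confirm
```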

delete can operate in four modes: retain, before, everything, and target.

retain [FULL|FIND_FULL] %number% [--after %name|time%]

If FULL is specified, keep %number% full backups and everything in the middle. If the --after flag is used, keep the %number% most recent backups and the backups made after %name|time% (inclusive).

before [FIND_FULL] %name%

If FIND_FULL is specified, WAL-G will calculate the minimum backup needed to keep all deltas alive. If FIND_FULL is not specified and the call would produce orphaned deltas, the call will fail with a list of those deltas.

everything [FORCE]

target [FIND_FULL] %name% | --target-user-data %data% will delete the backup specified by name or user data.

(Postgres only) By default, if a delta backup is provided as the target, WAL-G will also delete all dependent delta backups. If FIND_FULL is specified, WAL-G will delete all backups with the same base backup as the target.

Examples

everything all backups will be deleted (if there are no permanent backups)

everything FORCE all backups, including permanent ones, will be deleted

retain 5 will fail if the 5th backup is a delta

retain FULL 5 will keep 5 full backups and all of their deltas

retain FIND_FULL 5 will find the full backup needed for the 5th backup and keep everything after it

retain 5 --after 2019-12-12T12:12:12 will keep the 5 most recent backups and the backups made after 2019-12-12 12:12:12

before base_000010000123123123 will fail if base_000010000123123123 is a delta

before FIND_FULL base_000010000123123123 will keep everything after the base of base_000010000123123123

target base_0000000100000000000000C9 will delete the base backup and all dependent delta backups

target --target-user-data "{ \"x\": [3], \"y\": 4 }" will delete the backup specified by user data

target base_0000000100000000000000C9_D_0000000100000000000000C4 will delete the delta backup and all dependent delta backups

target FIND_FULL base_0000000100000000000000C9_D_0000000100000000000000C4 will delete the delta backup and all delta backups with the same base backup

More commands are available for each database engine. See the Databases section.

Databases

PostgreSQL

Information about installation, configuration, and usage

MySQL/MariaDB

Information about installation, configuration, and usage

SQLServer

Information about installation, configuration, and usage

Mongo [Beta]

Information about installation, configuration, and usage

FoundationDB [Work in progress]

Information about installation, configuration, and usage

Redis [Beta]

Information about installation, configuration, and usage

Development

Installing

Installation steps are specific to your database type; see the corresponding Databases section.

Testing

WAL-G relies heavily on unit tests. These tests do not require S3 configuration, as the upload/download parts are tested using mocked objects. Unit tests can be run using

make unittest

For more information on testing, please consult the test, testtools, and unittest sections of the Makefile.

WAL-G will perform a round-trip compression/decompression test that generates a directory for data (e.g., data...), compressed files (e.g., compressed), and extracted files (e.g., extracted). These directories will only get cleaned up if the files in the original data directory match the files in the extracted one.
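The round-trip check can be illustrated with a minimal shell sketch; gzip stands in for WAL-G's compressors, and the directory names mirror the ones mentioned above:

```shell
set -e
workdir="$(mktemp -d)"
cd "$workdir"

# Create sample data, then compress and extract it.
mkdir data compressed extracted
printf 'hello wal-g\n' > data/sample.txt
gzip -c data/sample.txt > compressed/sample.txt.gz
gzip -dc compressed/sample.txt.gz > extracted/sample.txt

# Clean up only if the extracted files match the originals,
# just as the test suite only cleans up on a successful round trip.
if diff -r data extracted > /dev/null; then
    cd / && rm -rf "$workdir"
    echo "round trip OK"
fi
```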

Test coverage can be obtained using:

make coverage

This command generates a coverage.out file and opens an HTML representation of the coverage.

Development on Windows

Information about installation and usage

Authors

See also the list of contributors who participated in this project.

License

This project is licensed under the Apache License, Version 2.0, but the lzo support is licensed under GPL 3.0+. Please refer to the LICENSE.md file for more details.

Acknowledgments

WAL-G would not have happened without the support of Citus Data.

WAL-G came into existence as a result of the collaboration between a summer engineering intern at Citus, Katie Li, and Daniel Farina, the original author of WAL-E, who currently serves as a principal engineer on the Citus Cloud team. Citus Data also has an open-source extension to Postgres that distributes database queries horizontally to deliver scale and performance.

WAL-G development is supported by Yandex Cloud

Chat

We have a Slack group and a Telegram chat to discuss WAL-G usage and development. To join the PostgreSQL Slack, use the invite app.

Issues
  • wal-g hangs on push for gcs

    Hello, I have an issue with WAL-G v0.2.9 hanging on pushes to GCS sometimes.

    Today, I hopped on my box when disk alerts started going off. I took a look at my wal directory, and I saw a ton of transaction logs that shouldn't be there.

    I took a look at ps, and saw that wal-g had been running for over 24 hours on a single wal-push operation

    postgres  3253  0.0  0.0  19700  3208 ?        S    May07   0:00 /bin/bash /usr/local/bin/wal-g-wrapper.sh wal-push pg_wal/000000090000CF82000000D2
    postgres  3254  0.1  0.0 3328960 34344 ?       Sl   May07   1:04 /usr/local/bin/wal-g wal-push pg_wal/000000090000CF82000000D2
    

    To recover, I killed the process. WAL-G fired up again, ran the same command, but this time was successful. This has happened multiple times on multiple servers.

    opened by WillPlatnick 54
  • Out of memory error on deployment with millions of files in PGDATA

    WAL-G version: 0.2.15 release binary

    PostgreSQL version: 9.5

    Operating system/version: CentOS 7.8

    Hi,

I'm evaluating WAL-G as a replacement for my company's current backup system, WAL-E. We have a schema-level style of sharding our tenants, where each of our database servers hosts 4000 customer schemas spread over 16 databases. This has worked great for scalability, but presents a challenge for many backup systems, since this layout results in a lot of files in the PostgreSQL data directory.

    Output of ls -1RL /var/lib/pgsql/9.5/data/ | wc -l: 6602805

    So over 6 million files, 77 GB in total. Around 5 years ago we deployed WAL-E, since it was one of the few backup systems that could handle so many files without a problem. However, since WAL-E is no longer maintained, we're looking for alternatives. When testing WAL-G, I'm running out of memory when performing a full backup. It seems to be some kind of Go-internal memory error, since the memory of the server is not fully utilized. See the graphs:

    (screenshot: memory-usage graphs, 2020-08-24)

    Server specs:

    • 12 GB of RAM
    • shared_buffers set to 3 GB
    • 1800 hugepages (~3.6 GB, to fit shared_buffers)
    • vm.overcommit = 2 (not allowed to overcommit)
    • 7.8 GB regular memory limit
    • CentOS 7
    • PostgreSQL 9.5
    • wal-g 0.2.15

    Chunk of the error log:

    INFO: 2020/08/24 11:49:19.354150 Calling pg_stop_backup()
    fatal error: runtime: out of memory
    
    runtime stack:
    runtime.throw(0x193bfd8, 0x16)
            /home/travis/.gimme/versions/go1.13.9.linux.amd64/src/runtime/panic.go:774 +0x72
    runtime.sysMap(0xc110000000, 0x10000000, 0x271a7f8)
            /home/travis/.gimme/versions/go1.13.9.linux.amd64/src/runtime/mem_linux.go:169 +0xc5
    runtime.(*mheap).sysAlloc(0x27017c0, 0xfbe0000, 0x150b986, 0x288007575f7)
            /home/travis/.gimme/versions/go1.13.9.linux.amd64/src/runtime/malloc.go:701 +0x1cd
    runtime.(*mheap).grow(0x27017c0, 0x7df0, 0xffffffff)
            /home/travis/.gimme/versions/go1.13.9.linux.amd64/src/runtime/mheap.go:1255 +0xa3
    runtime.(*mheap).allocSpanLocked(0x27017c0, 0x7df0, 0x271a808, 0x0)
            /home/travis/.gimme/versions/go1.13.9.linux.amd64/src/runtime/mheap.go:1170 +0x266
    runtime.(*mheap).alloc_m(0x27017c0, 0x7df0, 0x7f20f7020100, 0x0)
            /home/travis/.gimme/versions/go1.13.9.linux.amd64/src/runtime/mheap.go:1022 +0xc2
    runtime.(*mheap).alloc.func1()
            /home/travis/.gimme/versions/go1.13.9.linux.amd64/src/runtime/mheap.go:1093 +0x4c
    runtime.(*mheap).alloc(0x27017c0, 0x7df0, 0x7f2109010100, 0x7f20f7027ff8)
            /home/travis/.gimme/versions/go1.13.9.linux.amd64/src/runtime/mheap.go:1092 +0x8a
    runtime.largeAlloc(0xfbdfd80, 0x9e0001, 0x7f20f7027ff8)
            /home/travis/.gimme/versions/go1.13.9.linux.amd64/src/runtime/malloc.go:1138 +0x97
    runtime.mallocgc.func1()
            /home/travis/.gimme/versions/go1.13.9.linux.amd64/src/runtime/malloc.go:1033 +0x46
    runtime.systemstack(0x0)
            /home/travis/.gimme/versions/go1.13.9.linux.amd64/src/runtime/asm_amd64.s:370 +0x66
    runtime.mstart()
            /home/travis/.gimme/versions/go1.13.9.linux.amd64/src/runtime/proc.go:1146
    

    Attached the full log output from performing the backup, including the error: walg_log.txt

    The target datastore is a local MinIO S3 instance. I tried WALG_UPLOAD_DISK_CONCURRENCY set to 1 and 4, with the same result. I see no kernel-level OOM-killer logs in syslog; it appears to fail internally.

    opened by MannerMan 42
  • archive file has wrong size

    Hi, we are currently testing wal-g with a small database that writes an entry every minute. When we try to restore the DB, we sometimes get this error:

    2018-02-10 05:48:59.478 EST [25962] FATAL:  archive file "000000010000000000000028" has wrong size: 8388608 instead of 16777216
    

    If we start over (rm -rf the data dir, backup-fetch, then recovery), we sometimes manage to fully restore the db, sometimes we get another similar error:

    2018-02-10 06:01:29.671 EST [27419] FATAL:  archive file "000000010000000000000027" has wrong size: 8388608 instead of 16777216
    

    The relevant part of the postgresql.conf:

    wal_level = logical
    archive_mode = on
    archive_command = 'envdir /etc/wal-g.d/env /usr/local/bin/wal-g wal-push %p'
    archive_timeout = 60
    

    The recovery.conf file when we try to restore the db on a secondary cluster:

    restore_command = 'envdir /etc/wal-g.d/env /usr/local/bin/wal-g wal-fetch "%f" "%p"'
    

    We are not using GPG and we only declare basic environment variables:

    ls -l /etc/wal-g.d/env/
    total 12
    -rwxr-x--- 1 root postgres 13 Feb  9 04:32 AWS_REGION
    -rwxr-x--- 1 root postgres 16 Feb  9 04:53 PGHOST
    -rwxr-x--- 1 root postgres 39 Feb  9 04:32 WALE_S3_PREFIX
    
    opened by bartdag 37
  • backup-fetch fails with "Interpret: copy failed"

    Two servers failed to boot up and restore their databases last night. These two servers were booting in different AWS Regions, restoring from the same S3 backup. They both failed on exactly the same file, which indicates that we pushed a corrupt backup in some way.

    The backup-push from the DB master had been run by hand, and there were no errors in the log output.

    Master Server Details:

    • Ubuntu 14.04
    • Postgres 9.6.6
    • WAL-G 0.1.7

    DB Size: ~1.8TB

    Server 1 Failure

    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: /base/16400/187655160
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: /base/16400/187655162
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: /base/16400/187655164
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: 2018/04/05 10:56:19 unexpected EOF
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: Interpret: copy failed
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: github.com/wal-g/wal-g.(*FileTarInterpreter).Interpret
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: 	/home/travis/gopath/src/github.com/wal-g/wal-g/tar.go:86
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: github.com/wal-g/wal-g.extractOne
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: 	/home/travis/gopath/src/github.com/wal-g/wal-g/extract.go:51
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: github.com/wal-g/wal-g.ExtractAll.func2.3
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: 	/home/travis/gopath/src/github.com/wal-g/wal-g/extract.go:156
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: runtime.goexit
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: 	/home/travis/.gimme/versions/go1.8.7.linux.amd64/src/runtime/asm_amd64.s:2197
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: extractOne: Interpret failed
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: github.com/wal-g/wal-g.extractOne
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: 	/home/travis/gopath/src/github.com/wal-g/wal-g/extract.go:53
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: github.com/wal-g/wal-g.ExtractAll.func2.3
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: 	/home/travis/gopath/src/github.com/wal-g/wal-g/extract.go:156
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: runtime.goexit
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: 	/home/travis/.gimme/versions/go1.8.7.linux.amd64/src/runtime/asm_amd64.s:2197
    Info: Class[Wal_g::Db_restore]: Unscheduling all events on Class[Wal_g::Db_restore]
    

    Server 2 Failure

    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: /base/16400/187655162
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: /base/16400/187655164
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: 2018/04/05 06:56:25 unexpected EOF
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: Interpret: copy failed
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: github.com/wal-g/wal-g.(*FileTarInterpreter).Interpret
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: 	/home/travis/gopath/src/github.com/wal-g/wal-g/tar.go:86
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: github.com/wal-g/wal-g.extractOne
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: 	/home/travis/gopath/src/github.com/wal-g/wal-g/extract.go:51
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: github.com/wal-g/wal-g.ExtractAll.func2.3
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: 	/home/travis/gopath/src/github.com/wal-g/wal-g/extract.go:156
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: runtime.goexit
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: 	/home/travis/.gimme/versions/go1.8.7.linux.amd64/src/runtime/asm_amd64.s:2197
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: extractOne: Interpret failed
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: github.com/wal-g/wal-g.extractOne
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: 	/home/travis/gopath/src/github.com/wal-g/wal-g/extract.go:53
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: github.com/wal-g/wal-g.ExtractAll.func2.3
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: 	/home/travis/gopath/src/github.com/wal-g/wal-g/extract.go:156
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: runtime.goexit
    Notice: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: 	/home/travis/.gimme/versions/go1.8.7.linux.amd64/src/runtime/asm_amd64.s:2197
    Info: Class[Wal_g::Db_restore]: Unscheduling all events on Class[Wal_g::Db_restore]
    STDERR> Error: /Stage[main]/Wal_g::Db_restore/Exec[wal_g::db_restore]/returns: change from 'notrun' to ['0'] failed: '/usr/local/bin/wal-g-restore.sh' returned 1 instead of one of [0]
    
    opened by diranged 36
  • [ERROR] Failed to fetch backup: Expect pg_control archive, but not found

    Hello, I am trying to use wal-g for backup and restore (wal-g backup-push => wal-g backup-fetch) via an S3-compatible service. The backup completed without any errors, but the restore gets stuck on: ERROR: 2020/12/16 10:26:33.016402 Failed to fetch backup: Expect pg_control archive, but not found. But:

    [email protected]:~$ s3cmd ls s3://postgres-database-backups-wal-g/basebackups_005/base_00000001000106090000003D/tar_partitions/ | grep control
    2020-12-15 21:16       506   s3://postgres-database-backups-wal-g/basebackups_005/base_00000001000106090000003D/tar_partitions/pg_control.tar.lz4
    

    Log with DEBUG=DEVEL: https://pastebin.com/SyC78CA2

    ENV: wal-g 0.2.19; OS: Linux (tried kernels 5.4.0 and 5.9.14); S3: own S3-compatible service; DB: PostgreSQL 12

    P.S. it works fine with the 0.1.x versions (I tried wal-g 0.1.17).

    opened by nicexe2e4 26
  • Can't restore due to missing archive, but archive never existed in first place

    We are using WAL-G on 4 different database clusters. We're having issues restoring the database from one cluster after an upgrade to PG10. The archive it's looking for doesn't exist in S3.

    The WAL log it's looking for is 000000010000D6E5000000C1 and it's true, it doesn't exist.

    In fact, that whole series doesn't exist. I have no files that start with 000000010000D6 but I do have a ton with 000000010000D7

    I'm wondering if something with the timeline is getting messed up during backups or during restore somehow, but I'm not sure where to look first.

    2018-09-22 13:57:51.587 GMT [2278] LOG:  starting point-in-time recovery to 2018-09-22 07:00:00+00
    BUCKET: bucketname
    SERVER: 
    2018/09/22 08:57:52 Archive '%s' does not exist 000000010000D6E5000000C1
    github.com/wal-g/wal-g.DownloadAndDecompressWALFile
    	/home/travis/gopath/src/github.com/wal-g/wal-g/commands.go:590
    github.com/wal-g/wal-g.HandleWALFetch
    	/home/travis/gopath/src/github.com/wal-g/wal-g/commands.go:515
    main.main
    	/home/travis/gopath/src/github.com/wal-g/wal-g/cmd/wal-g/main.go:131
    runtime.main
    	/home/travis/.gimme/versions/go1.10.4.linux.amd64/src/runtime/proc.go:198
    runtime.goexit
    	/home/travis/.gimme/versions/go1.10.4.linux.amd64/src/runtime/asm_amd64.s:2361
    2018-09-22 13:57:52.014 GMT [2278] LOG:  invalid checkpoint record
    2018-09-22 13:57:52.014 GMT [2278] FATAL:  could not locate required checkpoint record
    2018-09-22 13:57:52.014 GMT [2278] HINT:  If you are not restoring from a backup, try removing the file "/var/lib/postgresql/10/main/backup_label".
    opened by WillPlatnick 23
  • wal-g delete fails on WAL-E backups

    Database name

    PostgreSQL 12

    Issue description

    Describe your problem

    wal-g delete retain N fails if any backups created by WAL-E are found

    Please provide steps to reproduce

    wal-e backup-push /var/lib/postgresql/12/main
    wal-e backup-push /var/lib/postgresql/12/main
    wal-e backup-push /var/lib/postgresql/12/main
    wal-e backup-push /var/lib/postgresql/12/main
    wal-g delete retain 3
    

    Please add config and wal-g stdout/stderr logs for debug purpose

    {
      "PGHOST": "/var/run/postgresql",
      "PGUSER": "postgres",
      "WALG_COMPRESSION_METHOD": "brotli",
      "WALG_DOWNLOAD_CONCURRENCY": 2,
      "WALG_FILE_PREFIX": "/mnt/pg_backup",
      "WALG_PREFETCH_DIR": "/var/lib/postgresql/prefetch",
      "WALG_PREVENT_WAL_OVERWRITE": true,
      "WALG_UPLOAD_CONCURRENCY": 1
    }
    
    If you can, provide logs
    INFO: 2022/06/10 16:31:16.361203 retrieving permanent objects
    ERROR: 2022/06/10 16:31:16.374858 failed to fetch backup meta for backup base_000000020000016B0000005E_00000040 with error failed to unmarshal metadata: object '/mnt/pg_backup/basebackups_005/base_000000020000016B0000005E_00000040/metadata.json' not found in storage
    github.com/wal-g/wal-g/pkg/storages/storage.NewObjectNotFoundError
            /home/runner/work/wal-g/wal-g/pkg/storages/storage/errors.go:15
    github.com/wal-g/wal-g/pkg/storages/fs.(*Folder).ReadObject
            /home/runner/work/wal-g/wal-g/pkg/storages/fs/folder.go:100
    github.com/wal-g/wal-g/internal.(*StorageReaderMaker).Reader
            /home/runner/work/wal-g/wal-g/internal/storage_reader_maker.go:31
    github.com/wal-g/wal-g/internal.FetchDto
            /home/runner/work/wal-g/wal-g/internal/backup.go:86
    github.com/wal-g/wal-g/internal.(*Backup).FetchMetadata
            /home/runner/work/wal-g/wal-g/internal/backup.go:72
    github.com/wal-g/wal-g/internal/databases/postgres.(*Backup).FetchMeta
            /home/runner/work/wal-g/wal-g/internal/databases/postgres/backup.go:177
    github.com/wal-g/wal-g/internal/databases/postgres.GetPermanentBackupsAndWals
            /home/runner/work/wal-g/wal-g/internal/databases/postgres/delete_util.go:23
    github.com/wal-g/wal-g/cmd/pg.runDeleteRetain
            /home/runner/work/wal-g/wal-g/cmd/pg/delete.go:79
    github.com/spf13/cobra.(*Command).execute
            /home/runner/work/wal-g/wal-g/vendor/github.com/spf13/cobra/command.go:860
    github.com/spf13/cobra.(*Command).ExecuteC
            /home/runner/work/wal-g/wal-g/vendor/github.com/spf13/cobra/command.go:974
    github.com/spf13/cobra.(*Command).Execute
            /home/runner/work/wal-g/wal-g/vendor/github.com/spf13/cobra/command.go:902
    github.com/wal-g/wal-g/cmd/pg.Execute
            /home/runner/work/wal-g/wal-g/cmd/pg/pg.go:45
    main.main
            /home/runner/work/wal-g/wal-g/main/pg/main.go:8
    runtime.main
            /opt/hostedtoolcache/go/1.17.10/x64/src/runtime/proc.go:255
    runtime.goexit
            /opt/hostedtoolcache/go/1.17.10/x64/src/runtime/asm_amd64.s:1581, ignoring...
    ERROR: 2022/06/10 16:31:16.376487 failed to fetch backup meta for backup base_000000020000016B0000007A_00000040 with error failed to unmarshal metadata: object '/mnt/pg_backup/basebackups_005/base_000000020000016B0000007A_00000040/metadata.json' not found in storage
    github.com/wal-g/wal-g/pkg/storages/storage.NewObjectNotFoundError
            /home/runner/work/wal-g/wal-g/pkg/storages/storage/errors.go:15
    github.com/wal-g/wal-g/pkg/storages/fs.(*Folder).ReadObject
            /home/runner/work/wal-g/wal-g/pkg/storages/fs/folder.go:100
    github.com/wal-g/wal-g/internal.(*StorageReaderMaker).Reader
            /home/runner/work/wal-g/wal-g/internal/storage_reader_maker.go:31
    github.com/wal-g/wal-g/internal.FetchDto
            /home/runner/work/wal-g/wal-g/internal/backup.go:86
    github.com/wal-g/wal-g/internal.(*Backup).FetchMetadata
            /home/runner/work/wal-g/wal-g/internal/backup.go:72
    github.com/wal-g/wal-g/internal/databases/postgres.(*Backup).FetchMeta
            /home/runner/work/wal-g/wal-g/internal/databases/postgres/backup.go:177
    github.com/wal-g/wal-g/internal/databases/postgres.GetPermanentBackupsAndWals
            /home/runner/work/wal-g/wal-g/internal/databases/postgres/delete_util.go:23
    github.com/wal-g/wal-g/cmd/pg.runDeleteRetain
            /home/runner/work/wal-g/wal-g/cmd/pg/delete.go:79
    github.com/spf13/cobra.(*Command).execute
            /home/runner/work/wal-g/wal-g/vendor/github.com/spf13/cobra/command.go:860
    github.com/spf13/cobra.(*Command).ExecuteC
            /home/runner/work/wal-g/wal-g/vendor/github.com/spf13/cobra/command.go:974
    github.com/spf13/cobra.(*Command).Execute
            /home/runner/work/wal-g/wal-g/vendor/github.com/spf13/cobra/command.go:902
    github.com/wal-g/wal-g/cmd/pg.Execute
            /home/runner/work/wal-g/wal-g/cmd/pg/pg.go:45
    main.main
            /home/runner/work/wal-g/wal-g/main/pg/main.go:8
    runtime.main
            /opt/hostedtoolcache/go/1.17.10/x64/src/runtime/proc.go:255
    runtime.goexit
            /opt/hostedtoolcache/go/1.17.10/x64/src/runtime/asm_amd64.s:1581, ignoring...
    ERROR: 2022/06/10 16:31:16.377954 failed to fetch backup meta for backup base_000000020000016B00000080_00000040 with error failed to unmarshal metadata: object '/mnt/pg_backup/basebackups_005/base_000000020000016B00000080_00000040/metadata.json' not found in storage
    github.com/wal-g/wal-g/pkg/storages/storage.NewObjectNotFoundError
            /home/runner/work/wal-g/wal-g/pkg/storages/storage/errors.go:15
    github.com/wal-g/wal-g/pkg/storages/fs.(*Folder).ReadObject
            /home/runner/work/wal-g/wal-g/pkg/storages/fs/folder.go:100
    github.com/wal-g/wal-g/internal.(*StorageReaderMaker).Reader
            /home/runner/work/wal-g/wal-g/internal/storage_reader_maker.go:31
    github.com/wal-g/wal-g/internal.FetchDto
            /home/runner/work/wal-g/wal-g/internal/backup.go:86
    github.com/wal-g/wal-g/internal.(*Backup).FetchMetadata
            /home/runner/work/wal-g/wal-g/internal/backup.go:72
    github.com/wal-g/wal-g/internal/databases/postgres.(*Backup).FetchMeta
            /home/runner/work/wal-g/wal-g/internal/databases/postgres/backup.go:177
    github.com/wal-g/wal-g/internal/databases/postgres.GetPermanentBackupsAndWals
            /home/runner/work/wal-g/wal-g/internal/databases/postgres/delete_util.go:23
    github.com/wal-g/wal-g/cmd/pg.runDeleteRetain
            /home/runner/work/wal-g/wal-g/cmd/pg/delete.go:79
    github.com/spf13/cobra.(*Command).execute
            /home/runner/work/wal-g/wal-g/vendor/github.com/spf13/cobra/command.go:860
    github.com/spf13/cobra.(*Command).ExecuteC
            /home/runner/work/wal-g/wal-g/vendor/github.com/spf13/cobra/command.go:974
    github.com/spf13/cobra.(*Command).Execute
            /home/runner/work/wal-g/wal-g/vendor/github.com/spf13/cobra/command.go:902
    github.com/wal-g/wal-g/cmd/pg.Execute
            /home/runner/work/wal-g/wal-g/cmd/pg/pg.go:45
    main.main
            /home/runner/work/wal-g/wal-g/main/pg/main.go:8
    runtime.main
            /opt/hostedtoolcache/go/1.17.10/x64/src/runtime/proc.go:255
    runtime.goexit
            /opt/hostedtoolcache/go/1.17.10/x64/src/runtime/asm_amd64.s:1581, ignoring...
    ERROR: 2022/06/10 16:31:16.379419 failed to fetch backup meta for backup base_000000020000016B00000082_00000040 with error failed to unmarshal metadata: object '/mnt/pg_backup/basebackups_005/base_000000020000016B00000082_00000040/metadata.json' not found in storage
    github.com/wal-g/wal-g/pkg/storages/storage.NewObjectNotFoundError
            /home/runner/work/wal-g/wal-g/pkg/storages/storage/errors.go:15
    github.com/wal-g/wal-g/pkg/storages/fs.(*Folder).ReadObject
            /home/runner/work/wal-g/wal-g/pkg/storages/fs/folder.go:100
    github.com/wal-g/wal-g/internal.(*StorageReaderMaker).Reader
            /home/runner/work/wal-g/wal-g/internal/storage_reader_maker.go:31
    github.com/wal-g/wal-g/internal.FetchDto
            /home/runner/work/wal-g/wal-g/internal/backup.go:86
    github.com/wal-g/wal-g/internal.(*Backup).FetchMetadata
            /home/runner/work/wal-g/wal-g/internal/backup.go:72
    github.com/wal-g/wal-g/internal/databases/postgres.(*Backup).FetchMeta
            /home/runner/work/wal-g/wal-g/internal/databases/postgres/backup.go:177
    github.com/wal-g/wal-g/internal/databases/postgres.GetPermanentBackupsAndWals
            /home/runner/work/wal-g/wal-g/internal/databases/postgres/delete_util.go:23
    github.com/wal-g/wal-g/cmd/pg.runDeleteRetain
            /home/runner/work/wal-g/wal-g/cmd/pg/delete.go:79
    github.com/spf13/cobra.(*Command).execute
            /home/runner/work/wal-g/wal-g/vendor/github.com/spf13/cobra/command.go:860
    github.com/spf13/cobra.(*Command).ExecuteC
            /home/runner/work/wal-g/wal-g/vendor/github.com/spf13/cobra/command.go:974
    github.com/spf13/cobra.(*Command).Execute
            /home/runner/work/wal-g/wal-g/vendor/github.com/spf13/cobra/command.go:902
    github.com/wal-g/wal-g/cmd/pg.Execute
            /home/runner/work/wal-g/wal-g/cmd/pg/pg.go:45
    main.main
            /home/runner/work/wal-g/wal-g/main/pg/main.go:8
    runtime.main
            /opt/hostedtoolcache/go/1.17.10/x64/src/runtime/proc.go:255
    runtime.goexit
            /opt/hostedtoolcache/go/1.17.10/x64/src/runtime/asm_amd64.s:1581, ignoring...
    ERROR: 2022/06/10 16:31:16.381343 object '/mnt/pg_backup/basebackups_005/base_000000020000016B0000005E_backup_stop_sentinel.json' not found in storage
    github.com/wal-g/wal-g/pkg/storages/storage.NewObjectNotFoundError
            /home/runner/work/wal-g/wal-g/pkg/storages/storage/errors.go:15
    github.com/wal-g/wal-g/pkg/storages/fs.(*Folder).ReadObject
            /home/runner/work/wal-g/wal-g/pkg/storages/fs/folder.go:100
    github.com/wal-g/wal-g/internal.(*StorageReaderMaker).Reader
            /home/runner/work/wal-g/wal-g/internal/storage_reader_maker.go:31
    github.com/wal-g/wal-g/internal.FetchDto
            /home/runner/work/wal-g/wal-g/internal/backup.go:86
    github.com/wal-g/wal-g/internal.(*Backup).FetchSentinel
            /home/runner/work/wal-g/wal-g/internal/backup.go:67
    github.com/wal-g/wal-g/internal/databases/postgres.(*Backup).GetSentinel
            /home/runner/work/wal-g/wal-g/internal/databases/postgres/backup.go:86
    github.com/wal-g/wal-g/internal/databases/postgres.getIncrementInfo
            /home/runner/work/wal-g/wal-g/internal/databases/postgres/delete.go:197
    github.com/wal-g/wal-g/internal/databases/postgres.makeBackupObjects
            /home/runner/work/wal-g/wal-g/internal/databases/postgres/delete.go:109
    github.com/wal-g/wal-g/internal/databases/postgres.NewDeleteHandler
            /home/runner/work/wal-g/wal-g/internal/databases/postgres/delete.go:45
    github.com/wal-g/wal-g/cmd/pg.runDeleteRetain
            /home/runner/work/wal-g/wal-g/cmd/pg/delete.go:81
    github.com/spf13/cobra.(*Command).execute
            /home/runner/work/wal-g/wal-g/vendor/github.com/spf13/cobra/command.go:860
    github.com/spf13/cobra.(*Command).ExecuteC
            /home/runner/work/wal-g/wal-g/vendor/github.com/spf13/cobra/command.go:974
    github.com/spf13/cobra.(*Command).Execute
            /home/runner/work/wal-g/wal-g/vendor/github.com/spf13/cobra/command.go:902
    github.com/wal-g/wal-g/cmd/pg.Execute
            /home/runner/work/wal-g/wal-g/cmd/pg/pg.go:45
    main.main
            /home/runner/work/wal-g/wal-g/main/pg/main.go:8
    runtime.main
            /opt/hostedtoolcache/go/1.17.10/x64/src/runtime/proc.go:255
    runtime.goexit
            /opt/hostedtoolcache/go/1.17.10/x64/src/runtime/asm_amd64.s:1581
    
    opened by OrangeDog 18
  • Unable to create a full basebackup with backup_push

    Unable to create a full basebackup with backup_push

    I've been using wal-e to successfully create backups of a GitLab CE server to an S3 clone. I just tried to switch to wal-g, and while I managed to get it to talk to the S3 clone, it doesn't seem to upload a full backup.

    Here is the command I am running:

    sudo -u gitlab-psql -s /bin/sh -c "envdir /etc/wal-g.d/env wal-g backup-push /var/opt/gitlab/postgresql/data"
    

    And below its output.

    It looks like part 1 never finishes, part 2 is omitted altogether, and only parts 3 and 4 are processed correctly, resulting in a tiny, incomplete basebackups_005 folder that contains only 219 KB of data.

    What am I doing wrong?

    BUCKET: gitlab.*****.com
    SERVER: wal-g
    Walking ...
    Starting part 1 ...
    
    /PG_VERSION
    /base
    /base/1
    [...]
    /base/16385/PG_VERSION
    /base/16385/pg_filenode.map
    /base/16385/pg_internal.init
    /global
    /global/1136
    [...]
    /global/pg_filenode.map
    /global/pg_internal.init
    /pg_clog
    /pg_clog/0000
    /pg_commit_ts
    /pg_dynshmem
    /pg_hba.conf
    /pg_ident.conf
    /pg_logical
    /pg_logical/mappings
    /pg_logical/snapshots
    /pg_multixact
    /pg_multixact/members
    /pg_multixact/members/0000
    /pg_multixact/offsets
    /pg_multixact/offsets/0000
    /pg_notify
    /pg_replslot
    /pg_serial
    /pg_snapshots
    /pg_stat
    /pg_stat_tmp
    /pg_subtrans
    /pg_tblspc
    /pg_twophase
    /pg_xlog
    /postgresql.auto.conf
    /postgresql.conf
    /runtime.conf
    /server.crt
    /server.key
    Starting part 3 ...
    /global/pg_control
    Finished writing part 3.
    Starting part 4 ...
    backup_label
    tablespace_map
    Finished writing part 4.
    Uploaded 4 compressed tar Files.
    
    opened by lehni 18
  • Some LZOP archives don't work

    Some LZOP archives don't work

    I'm still taking this apart, but it may be related to #22.

    I have this backup that has hundreds of archives, but one of those archives causes a systematic crash, whereas lzop seems fine with it. There's not much to do but for me to go through it with a fine-tooth comb, but, FYI.

    cc @x4m

    opened by fdr 17
  • DecompressLzo: write to pipe failed

    DecompressLzo: write to pipe failed

    Versions

    CentOS 7.3; wal-g v0.1.2; wal-e 1.0.3 (creator of the source basebackup)

    Problem

    Two attempts to backup-fetch a ~1TB basebackup have resulted in wal-g failing with the following stack trace:

    base/16417/12983_vm
    base/16417/27620292
    base/16417/10323582
    base/16417/10324516
    base/16417/33825612_fsm
    2017/08/29 20:07:43 DecompressLzo: write to pipe failed
    github.com/wal-g/wal-g.DecompressLzo
            /home/travis/gopath/src/github.com/wal-g/wal-g/decompress.go:126
    github.com/wal-g/wal-g.tarHandler
            /home/travis/gopath/src/github.com/wal-g/wal-g/extract.go:66
    github.com/wal-g/wal-g.ExtractAll.func2.2
            /home/travis/gopath/src/github.com/wal-g/wal-g/extract.go:138
    runtime.goexit
            /home/travis/.gimme/versions/go1.8.3.linux.amd64/src/runtime/asm_amd64.s:2197
    ExtractAll: lzo decompress failed
    github.com/wal-g/wal-g.tarHandler
            /home/travis/gopath/src/github.com/wal-g/wal-g/extract.go:68
    github.com/wal-g/wal-g.ExtractAll.func2.2
            /home/travis/gopath/src/github.com/wal-g/wal-g/extract.go:138
    runtime.goexit
            /home/travis/.gimme/versions/go1.8.3.linux.amd64/src/runtime/asm_amd64.s:2197
    

    In both cases, wal-g appeared to be near the end of the restore (over 1TB of data was written to the restore directory) and failed with the same trace. After inspecting the restore and attempting to start postgres, I can confirm that the restore is indeed incomplete.

    The basebackup was taken with wal-e 1.0.3, which was also able to restore the same backup without any issues.

    opened by dgarbus 17
  • wal-g 1.10 remote backup for postgreSQL 13 misses files

    wal-g 1.10 remote backup for postgreSQL 13 misses files

    Database name

    wal-g version: 1.10; PostgreSQL version: 13 (using docker image bitnami/postgresql:13.3.0-debian-10-r79); backup method: remote backup via the postgres BASE_BACKUP protocol; storage: Google Cloud Storage

    Issue description

    For the same database instance, the backup created remotely is missing files, which causes ERROR: could not open file "base/13397/17003": No such file or directory after recovery on a new server (same configuration).

    Please check the dev_basebackups_005_base_000000010000000000000762_backup_stop_sentinel.json.zip file.

    The backup is missing base/13397/17003 and contains only the following:

       "base/13397/17003.9": {
          "IsIncremented": false,
          "IsSkipped": false,
          "MTime": "2021-10-01T10:18:07Z",
          "UpdatesCount": 0
        },
        "base/13397/17003_fsm": {
          "IsIncremented": false,
          "IsSkipped": false,
          "MTime": "2021-10-01T10:18:07Z",
          "UpdatesCount": 0
        },
        "base/13397/17003_vm": {
          "IsIncremented": false,
          "IsSkipped": false,
          "MTime": "2021-10-01T10:18:07Z",
          "UpdatesCount": 0
        }
    

    On the other hand, the backup made locally does contain base/13397/17003 and did not cause any recovery issues.

    Please check dev_basebackups_005_base_000000010000000700000067_backup_stop_sentinel.json.zip.

    The local backup contains all the files the remote backup has, plus the files the remote backup is missing:

         "/base/13397/17003": {
          "IsIncremented": false,
          "IsSkipped": false,
          "MTime": "2021-10-02T05:31:27.39205458Z",
          "UpdatesCount": 0
        },
        "/base/13397/17003.1": {
          "IsIncremented": false,
          "IsSkipped": false,
          "MTime": "2021-10-01T09:15:05.430237269Z",
          "UpdatesCount": 0
        },
        "/base/13397/17003.2": {
          "IsIncremented": false,
          "IsSkipped": false,
          "MTime": "2021-10-01T09:17:01.320815142Z",
          "UpdatesCount": 0
        },
        "/base/13397/17003.3": {
          "IsIncremented": false,
          "IsSkipped": false,
          "MTime": "2021-10-01T10:14:48.615360672Z",
          "UpdatesCount": 0
        },
        "/base/13397/17003.4": {
          "IsIncremented": false,
          "IsSkipped": false,
          "MTime": "2021-10-01T10:03:31.92659065Z",
          "UpdatesCount": 0
        },
        "/base/13397/17003.5": {
          "IsIncremented": false,
          "IsSkipped": false,
          "MTime": "2021-10-01T10:14:59.786380395Z",
          "UpdatesCount": 0
        },
        "/base/13397/17003.6": {
          "IsIncremented": false,
          "IsSkipped": false,
          "MTime": "2021-10-01T10:15:57.958690788Z",
          "UpdatesCount": 0
        },
        "/base/13397/17003.7": {
          "IsIncremented": false,
          "IsSkipped": false,
          "MTime": "2021-10-01T10:16:53.951802515Z",
          "UpdatesCount": 0
        },
        "/base/13397/17003.8": {
          "IsIncremented": false,
          "IsSkipped": false,
          "MTime": "2021-10-01T10:17:50.430958611Z",
          "UpdatesCount": 0
        },
        "/base/13397/17003.9": {
          "IsIncremented": false,
          "IsSkipped": false,
          "MTime": "2021-10-01T10:18:07.706535734Z",
          "UpdatesCount": 0
        },
        "/base/13397/17003_fsm": {
          "IsIncremented": false,
          "IsSkipped": false,
          "MTime": "2021-10-01T10:18:07.946557647Z",
          "UpdatesCount": 0
        },
        "/base/13397/17003_vm": {
          "IsIncremented": false,
          "IsSkipped": false,
          "MTime": "2021-10-01T10:18:07.723537286Z",
          "UpdatesCount": 0
        }
    

    Extra Info

    Recreating the backup remotely produces the same result: the same file is still missing. Only a local backup avoids the issue.
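The sentinel excerpts in this report suggest a quick way to spot the symptom programmatically. As an illustrative sketch (this is not WAL-G code; the function name is made up and the sample paths are taken from the excerpts), the following flags relations that have numbered segment files in the sentinel but no base file:

```python
# Sketch: detect relations whose segment files (e.g. 17003.9) are present
# but whose base file (17003) is missing from a sentinel's file list.
import re
from collections import defaultdict

def missing_base_files(paths):
    """Return relation base paths that have numbered segments but no segment 0."""
    segments = defaultdict(set)
    for path in paths:
        # Matches "…/<relfilenode>" with an optional ".<segment>" suffix;
        # fork files like 17003_fsm / 17003_vm do not match and are skipped.
        m = re.fullmatch(r"(.*/\d+)(?:\.(\d+))?", path)
        if m:
            segments[m.group(1)].add(int(m.group(2) or 0))
    return sorted(base for base, segs in segments.items() if 0 not in segs)

remote = ["base/13397/17003.9", "base/13397/17003_fsm", "base/13397/17003_vm"]
print(missing_base_files(remote))  # → ['base/13397/17003']
```

Running this over the remote sentinel's file list would surface base/13397/17003 as missing, matching the recovery error above.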

    opened by t83714 15
  • Change error to warning when creating symlink, that already exists

    Change error to warning when creating symlink, that already exists

    Database name

    PostgreSQL

    Pull request description

    Fixes an error when fetching an incremental backup of a PostgreSQL database that has multiple tablespaces.

    Describe what this PR fixes

    wal-g tries to create a symlink for the additional tablespace for every part of the backup and exits with an error.

    Also see https://github.com/wal-g/wal-g/issues/585 https://github.com/wal-g/wal-g/issues/499

    Please provide steps to reproduce (if it's a bug)

    Create a new PostgreSQL database, create an additional tablespace, run backup-push, then backup-fetch.

    opened by IncubusRK 2
  • wal-fetch with error

    wal-fetch with error

    WAL-G compresses and pushes/pulls WALs to/from an S3 bucket without problems using the default LZ4 settings.

    When WALG_COMPRESSION_METHOD is set to brotli, it produces errors.

    .walg.json

    {
        "WALG_S3_PREFIX": "s3://any",
        "AWS_ENDPOINT": "http://any",
        "AWS_ACCESS_KEY_ID": "any",
        "AWS_SECRET_ACCESS_KEY": "any",
        "AWS_REGION": "eu-central-1",
        "WALG_COMPRESSION_METHOD": "brotli",
        "WALG_DELTA_MAX_STEPS": "5",
        "PGDATA": "/home/postgres/pgdata/pgroot/data",
        "PGHOST": "/var/run/postgresql/.s.PGSQL.5432"
    }
    

    wal-g wal-push works without problems

    The bucket contains:

    [2022-07-26 09:51:50 MSK] 3.1MiB STANDARD 000000010000C72E00000065.br
    [2022-07-26 09:51:51 MSK] 2.9MiB STANDARD 000000010000C72E00000066.br
    [2022-07-26 09:51:50 MSK] 2.9MiB STANDARD 000000010000C72E00000067.br
    [2022-07-26 09:51:51 MSK] 3.1MiB STANDARD 000000010000C72E00000068.br
    [2022-07-26 09:51:51 MSK] 3.0MiB STANDARD 000000010000C72E00000069.br
    [2022-07-26 09:51:51 MSK] 2.9MiB STANDARD 000000010000C72E0000006A.br
    [2022-07-26 09:51:51 MSK] 2.8MiB STANDARD 000000010000C72E0000006B.br
    [2022-07-26 09:51:52 MSK] 3.2MiB STANDARD 000000010000C72E0000006C.br
    [2022-07-26 09:52:07 MSK] 5.0MiB STANDARD 000000010000C72E0000006D.br
    [2022-07-26 11:19:10 MSK] 6.6MiB STANDARD 000000010000C72E0000006E.br
    [2022-07-26 14:19:10 MSK] 8.6MiB STANDARD 000000010000C72E0000006F.br
    

    and when I try to fetch, it gives an error:

     WALG_LOG_LEVEL=DEVEL /tmp/wal-g-pg-ubuntu-20.04-amd64 --config=./.walg.json wal-fetch 000000010000C72E0000006E /tmp/000000010000C72E0000006E
    

    It tries to fetch the WAL with the lz4 extension:

    ERROR: 2022/07/26 14:48:05.136620 NoSuchBucket: 
    	status code: 404, request id: tx000000000000000249906-0062dfd474-281da2b-m9, host id: 
    failed to read object: 'wal_005/000000010000C72E0000006E.lz4' from S3
    github.com/wal-g/wal-g/pkg/storages/s3.(*Folder).ReadObject
    	/home/runner/work/wal-g/wal-g/pkg/storages/s3/folder.go:192
    
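The `.lz4` key in the error message is consistent with a fetcher that probes storage for each known compression extension in turn, so that a failure can surface under an extension other than the one actually used. As an illustrative sketch (not WAL-G's actual lookup code; names and the extension list are assumptions), such a probe might look like:

```python
# Sketch: probe storage for a WAL segment under each known compression
# extension until one is found; illustrative only, not WAL-G's implementation.
EXTENSIONS = ["br", "lz4", "lzma", "zst"]

def find_archive(exists, wal_name):
    """Return the first stored key of the form wal_005/<name>.<ext>, else None."""
    for ext in EXTENSIONS:
        key = f"wal_005/{wal_name}.{ext}"
        if exists(key):
            return key
    return None

stored = {"wal_005/000000010000C72E0000006E.br"}
print(find_archive(stored.__contains__, "000000010000C72E0000006E"))
# → wal_005/000000010000C72E0000006E.br
```

When storage access itself fails (as with the NoSuchBucket 404 here), every probe fails, and the error reported is for whichever extension was tried, regardless of the compression method actually configured.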
    /tmp/wal-g-pg-ubuntu-20.04-amd64 --version 
    wal-g version v2.0.0	1eb88a5	2022.05.20_10:45:57	PostgreSQL
    
    DEBUG: 2022/07/26 14:48:04.047384 --- COMPILED ENVIRONMENT VARS ---
    DEBUG: 2022/07/26 14:48:04.047431 AWS_ACCESS_KEY=
    DEBUG: 2022/07/26 14:48:04.047436 AWS_ACCESS_KEY_ID=****************
    DEBUG: 2022/07/26 14:48:04.047440 AWS_CA_BUNDLE=
    DEBUG: 2022/07/26 14:48:04.047443 AWS_CONFIG_FILE=
    DEBUG: 2022/07/26 14:48:04.047447 AWS_DEFAULT_OUTPUT=
    DEBUG: 2022/07/26 14:48:04.047452 AWS_DEFAULT_REGION=
    DEBUG: 2022/07/26 14:48:04.047455 AWS_ENDPOINT=http://***************
    DEBUG: 2022/07/26 14:48:04.047459 AWS_PROFILE=
    DEBUG: 2022/07/26 14:48:04.047461 AWS_REGION=eu-central-1
    DEBUG: 2022/07/26 14:48:04.047464 AWS_ROLE_SESSION_NAME=
    DEBUG: 2022/07/26 14:48:04.047467 AWS_S3_FORCE_PATH_STYLE=
    DEBUG: 2022/07/26 14:48:04.047470 AWS_SECRET_ACCESS_KEY=********************
    DEBUG: 2022/07/26 14:48:04.047474 AWS_SECRET_KEY=
    DEBUG: 2022/07/26 14:48:04.047477 AWS_SESSION_TOKEN=
    DEBUG: 2022/07/26 14:48:04.047480 AWS_SHARED_CREDENTIALS_FILE=
    DEBUG: 2022/07/26 14:48:04.047484 AZURE_BUFFER_SIZE=
    DEBUG: 2022/07/26 14:48:04.047488 AZURE_ENDPOINT_SUFFIX=
    DEBUG: 2022/07/26 14:48:04.047491 AZURE_ENVIRONMENT_NAME=
    DEBUG: 2022/07/26 14:48:04.047495 AZURE_MAX_BUFFERS=
    DEBUG: 2022/07/26 14:48:04.047499 AZURE_STORAGE_ACCESS_KEY=
    DEBUG: 2022/07/26 14:48:04.047502 AZURE_STORAGE_ACCOUNT=
    DEBUG: 2022/07/26 14:48:04.047505 AZURE_STORAGE_SAS_TOKEN=
    DEBUG: 2022/07/26 14:48:04.047509 COLORTERM=truecolor
    DEBUG: 2022/07/26 14:48:04.047511 DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1002/bus
    DEBUG: 2022/07/26 14:48:04.047515 DEFAULTS_PATH=/usr/share/gconf/gnome-flashback-metacity.default.path
    DEBUG: 2022/07/26 14:48:04.047518 DESKTOP_SESSION=gnome-flashback-metacity
    DEBUG: 2022/07/26 14:48:04.047522 DISPLAY=:1
    DEBUG: 2022/07/26 14:48:04.047525 GCS_CONTEXT_TIMEOUT=
    DEBUG: 2022/07/26 14:48:04.047530 GCS_ENCRYPTION_KEY=
    DEBUG: 2022/07/26 14:48:04.047534 GCS_MAX_CHUNK_SIZE=
    DEBUG: 2022/07/26 14:48:04.047538 GCS_MAX_RETRIES=
    DEBUG: 2022/07/26 14:48:04.047541 GCS_NORMALIZE_PREFIX=
    DEBUG: 2022/07/26 14:48:04.047544 GDMSESSION=gnome-flashback-metacity
    DEBUG: 2022/07/26 14:48:04.047552 GIO_LAUNCHED_DESKTOP_FILE_PID=7456
    DEBUG: 2022/07/26 14:48:04.047555 GNOME_DESKTOP_SESSION_ID=this-is-deprecated
    DEBUG: 2022/07/26 14:48:04.047559 GOMAXPROCS=
    DEBUG: 2022/07/26 14:48:04.047562 GOOGLE_APPLICATION_CREDENTIALS=
    DEBUG: 2022/07/26 14:48:04.047565 GPG_AGENT_INFO=/run/user/1002/gnupg/S.gpg-agent:0:1
    DEBUG: 2022/07/26 14:48:04.047568 GTK_MODULES=gail:atk-bridge
    DEBUG: 2022/07/26 14:48:04.047575 HTTP_EXPOSE_EXPVAR=
    DEBUG: 2022/07/26 14:48:04.047578 HTTP_EXPOSE_PPROF=
    DEBUG: 2022/07/26 14:48:04.047581 HTTP_LISTEN=
    DEBUG: 2022/07/26 14:48:04.047585 IM_CONFIG_PHASE=1
    DEBUG: 2022/07/26 14:48:04.047591 JOURNAL_STREAM=9:51750
    DEBUG: 2022/07/26 14:48:04.047594 LANG=ru_RU.UTF-8
    DEBUG: 2022/07/26 14:48:04.047597 LESSCLOSE=/usr/bin/lesspipe %s %s
    DEBUG: 2022/07/26 14:48:04.047601 LESSOPEN=| /usr/bin/lesspipe %s
    DEBUG: 2022/07/26 14:48:04.047608 LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:
    DEBUG: 2022/07/26 14:48:04.047618 MANAGERPID=2303
    DEBUG: 2022/07/26 14:48:04.047621 MANDATORY_PATH=/usr/share/gconf/gnome-flashback-metacity.mandatory.path
    DEBUG: 2022/07/26 14:48:04.047628 OS_AUTH_URL=
    DEBUG: 2022/07/26 14:48:04.047631 OS_PASSWORD=
    DEBUG: 2022/07/26 14:48:04.047634 OS_REGION_NAME=
    DEBUG: 2022/07/26 14:48:04.047637 OS_TENANT_NAME=
    DEBUG: 2022/07/26 14:48:04.047641 OS_USERNAME=
    DEBUG: 2022/07/26 14:48:04.047648 PGBACKREST_STANZA=main
    DEBUG: 2022/07/26 14:48:04.047651 PGDATA=/home/postgres/pgdata/pgroot/data
    DEBUG: 2022/07/26 14:48:04.047654 PGDATABASE=
    DEBUG: 2022/07/26 14:48:04.047657 PGHOST=/var/run/postgresql/.s.PGSQL.5432
    DEBUG: 2022/07/26 14:48:04.047660 PGPASSFILE=
    DEBUG: 2022/07/26 14:48:04.047664 PGPASSWORD=
    DEBUG: 2022/07/26 14:48:04.047667 PGPORT=
    DEBUG: 2022/07/26 14:48:04.047670 PGSSLMODE=
    DEBUG: 2022/07/26 14:48:04.047673 PGUSER=
    DEBUG: 2022/07/26 14:48:04.047676 PG_READY_RENAME=
    DEBUG: 2022/07/26 14:48:04.047680 PROFILE_MODE=
    DEBUG: 2022/07/26 14:48:04.047683 PROFILE_PATH=
    DEBUG: 2022/07/26 14:48:04.047686 PROFILE_SAMPLING_RATIO=
    DEBUG: 2022/07/26 14:48:04.047693 QT_ACCESSIBILITY=1
    DEBUG: 2022/07/26 14:48:04.047696 QT_IM_MODULE=ibus
    DEBUG: 2022/07/26 14:48:04.047699 S3_CA_CERT_FILE=
    DEBUG: 2022/07/26 14:48:04.047702 S3_ENDPOINT_PORT=
    DEBUG: 2022/07/26 14:48:04.047705 S3_ENDPOINT_SOURCE=
    DEBUG: 2022/07/26 14:48:04.047708 S3_LOG_LEVEL=
    DEBUG: 2022/07/26 14:48:04.047711 S3_MAX_PART_SIZE=
    DEBUG: 2022/07/26 14:48:04.047714 S3_MAX_RETRIES=
    DEBUG: 2022/07/26 14:48:04.047718 S3_RANGE_BATCH_ENABLED=
    DEBUG: 2022/07/26 14:48:04.047721 S3_RANGE_MAX_RETRIES=
    DEBUG: 2022/07/26 14:48:04.047724 S3_SSE=
    DEBUG: 2022/07/26 14:48:04.047727 S3_SSE_C=
    DEBUG: 2022/07/26 14:48:04.047730 S3_SSE_KMS_ID=
    DEBUG: 2022/07/26 14:48:04.047733 S3_STORAGE_CLASS=
    DEBUG: 2022/07/26 14:48:04.047736 S3_USE_LIST_OBJECTS_V1=
    DEBUG: 2022/07/26 14:48:04.047740 S3_USE_YC_SESSION_TOKEN=
    DEBUG: 2022/07/26 14:48:04.047743 SESSION_MANAGER=local/gbook:@/tmp/.ICE-unix/2613,unix/gbook:/tmp/.ICE-unix/2613
    DEBUG: 2022/07/26 14:48:04.047746 SHELL=/bin/bash
    DEBUG: 2022/07/26 14:48:04.047749 SHLVL=1
    DEBUG: 2022/07/26 14:48:04.047752 SSH_AGENT_PID=2541
    DEBUG: 2022/07/26 14:48:04.047756 SSH_AUTH_SOCK=/run/user/1002/keyring/ssh
    DEBUG: 2022/07/26 14:48:04.047759 SSH_PASSWORD=
    DEBUG: 2022/07/26 14:48:04.047762 SSH_PORT=
    DEBUG: 2022/07/26 14:48:04.047765 SSH_PRIVATE_KEY_PATH=
    DEBUG: 2022/07/26 14:48:04.047768 SSH_USERNAME=
    DEBUG: 2022/07/26 14:48:04.047771 TERM=xterm-256color
    DEBUG: 2022/07/26 14:48:04.047774 TERMINATOR_DBUS_NAME=net.tenshu.Terminator23558193cd9818af7fe4d2c2f5bd9d00f
    DEBUG: 2022/07/26 14:48:04.047778 TERMINATOR_DBUS_PATH=/net/tenshu/Terminator2
    DEBUG: 2022/07/26 14:48:04.047781 TERMINATOR_UUID=urn:uuid:42cc6f87-0f16-4f89-b7c4-88abf7df2685
    DEBUG: 2022/07/26 14:48:04.047785 TOTAL_BG_UPLOADED_LIMIT=32
    DEBUG: 2022/07/26 14:48:04.047788 UPLOAD_CONCURRENCY=
    DEBUG: 2022/07/26 14:48:04.047797 VTE_VERSION=6003
    DEBUG: 2022/07/26 14:48:04.047800 WALE_GPG_KEY_ID=
    DEBUG: 2022/07/26 14:48:04.047803 WALE_S3_PREFIX=
    DEBUG: 2022/07/26 14:48:04.047806 WALG_ALIVE_CHECK_INTERVAL=
    DEBUG: 2022/07/26 14:48:04.047810 WALG_AZURE_BUFFER_SIZE=
    DEBUG: 2022/07/26 14:48:04.047813 WALG_AZURE_MAX_BUFFERS=
    DEBUG: 2022/07/26 14:48:04.047816 WALG_AZ_PREFIX=
    DEBUG: 2022/07/26 14:48:04.047819 WALG_COMPRESSION_METHOD=brotli
    DEBUG: 2022/07/26 14:48:04.047822 WALG_CSE_KMS_ID=
    DEBUG: 2022/07/26 14:48:04.047825 WALG_CSE_KMS_REGION=
    DEBUG: 2022/07/26 14:48:04.047828 WALG_DELTA_FROM_NAME=
    DEBUG: 2022/07/26 14:48:04.047832 WALG_DELTA_FROM_USER_DATA=
    DEBUG: 2022/07/26 14:48:04.047835 WALG_DELTA_MAX_STEPS=5
    DEBUG: 2022/07/26 14:48:04.047838 WALG_DELTA_ORIGIN=
    DEBUG: 2022/07/26 14:48:04.047841 WALG_DISK_RATE_LIMIT=
    DEBUG: 2022/07/26 14:48:04.047844 WALG_DOWNLOAD_CONCURRENCY=10
    DEBUG: 2022/07/26 14:48:04.047847 WALG_FETCH_TARGET_USER_DATA=
    DEBUG: 2022/07/26 14:48:04.047850 WALG_FILE_PREFIX=
    DEBUG: 2022/07/26 14:48:04.047854 WALG_GPG_KEY_ID=
    DEBUG: 2022/07/26 14:48:04.047857 WALG_GS_PREFIX=
    DEBUG: 2022/07/26 14:48:04.047860 WALG_INTEGRITY_MAX_DELAYED_WALS=0
    DEBUG: 2022/07/26 14:48:04.047863 WALG_LIBSODIUM_KEY=
    DEBUG: 2022/07/26 14:48:04.047866 WALG_LIBSODIUM_KEY_PATH=
    DEBUG: 2022/07/26 14:48:04.047869 WALG_LIBSODIUM_KEY_TRANSFORM=none
    DEBUG: 2022/07/26 14:48:04.047873 WALG_LOG_LEVEL=DEVEL
    DEBUG: 2022/07/26 14:48:04.047876 WALG_NETWORK_RATE_LIMIT=
    DEBUG: 2022/07/26 14:48:04.047879 WALG_PGP_KEY=
    DEBUG: 2022/07/26 14:48:04.047882 WALG_PGP_KEY_PASSPHRASE=
    DEBUG: 2022/07/26 14:48:04.047885 WALG_PGP_KEY_PATH=
    DEBUG: 2022/07/26 14:48:04.047888 WALG_PG_WAL_SIZE=16
    DEBUG: 2022/07/26 14:48:04.047891 WALG_PREFETCH_DIR=
    DEBUG: 2022/07/26 14:48:04.047895 WALG_PREVENT_WAL_OVERWRITE=false
    DEBUG: 2022/07/26 14:48:04.047898 WALG_S3_CA_CERT_FILE=
    DEBUG: 2022/07/26 14:48:04.047901 WALG_S3_MAX_PART_SIZE=
    DEBUG: 2022/07/26 14:48:04.047904 WALG_S3_PREFIX=s3://*********
    DEBUG: 2022/07/26 14:48:04.047907 WALG_S3_SSE=
    DEBUG: 2022/07/26 14:48:04.047910 WALG_S3_SSE_C=
    DEBUG: 2022/07/26 14:48:04.047913 WALG_S3_SSE_KMS_ID=
    DEBUG: 2022/07/26 14:48:04.047916 WALG_S3_STORAGE_CLASS=
    DEBUG: 2022/07/26 14:48:04.047920 WALG_SENTINEL_USER_DATA=
    DEBUG: 2022/07/26 14:48:04.047923 WALG_SERIALIZER_TYPE=json_default
    DEBUG: 2022/07/26 14:48:04.047926 WALG_SKIP_REDUNDANT_TARS=false
    DEBUG: 2022/07/26 14:48:04.047930 WALG_SLOTNAME=
    DEBUG: 2022/07/26 14:48:04.047933 WALG_SSH_PREFIX=
    DEBUG: 2022/07/26 14:48:04.047936 WALG_STATSD_ADDRESS=
    DEBUG: 2022/07/26 14:48:04.047939 WALG_STOP_BACKUP_TIMEOUT=
    DEBUG: 2022/07/26 14:48:04.047942 WALG_STORAGE_PREFIX=
    DEBUG: 2022/07/26 14:48:04.047945 WALG_STORE_ALL_CORRUPT_BLOCKS=false
    DEBUG: 2022/07/26 14:48:04.047948 WALG_STREAM_CREATE_COMMAND=
    DEBUG: 2022/07/26 14:48:04.047952 WALG_STREAM_RESTORE_COMMAND=
    DEBUG: 2022/07/26 14:48:04.047955 WALG_SWIFT_PREFIX=
    DEBUG: 2022/07/26 14:48:04.047958 WALG_TAR_DISABLE_FSYNC=false
    DEBUG: 2022/07/26 14:48:04.047961 WALG_TAR_SIZE_THRESHOLD=1073741823
    DEBUG: 2022/07/26 14:48:04.047964 WALG_UPLOAD_CONCURRENCY=16
    DEBUG: 2022/07/26 14:48:04.047967 WALG_UPLOAD_DISK_CONCURRENCY=1
    DEBUG: 2022/07/26 14:48:04.047971 WALG_UPLOAD_QUEUE=2
    DEBUG: 2022/07/26 14:48:04.047974 WALG_UPLOAD_WAL_METADATA=NOMETADATA
    DEBUG: 2022/07/26 14:48:04.047977 WALG_USE_COPY_COMPOSER=false
    DEBUG: 2022/07/26 14:48:04.047980 WALG_USE_RATING_COMPOSER=false
    DEBUG: 2022/07/26 14:48:04.047983 WALG_USE_REVERSE_UNPACK=false
    DEBUG: 2022/07/26 14:48:04.047986 WALG_USE_WAL_DELTA=false
    DEBUG: 2022/07/26 14:48:04.047990 WALG_VERIFY_PAGE_CHECKSUMS=false
    DEBUG: 2022/07/26 14:48:04.047993 WALG_WITHOUT_FILES_METADATA=false
    DEBUG: 2022/07/26 14:48:04.047996 WINDOWPATH=2
    DEBUG: 2022/07/26 14:48:04.047999 XAUTHORITY=/run/user/1002/gdm/Xauthority
    DEBUG: 2022/07/26 14:48:04.048002 XDG_CONFIG_DIRS=/etc/xdg/xdg-gnome-flashback-metacity:/etc/xdg
    DEBUG: 2022/07/26 14:48:04.048006 XDG_CURRENT_DESKTOP=GNOME-Flashback:GNOME
    DEBUG: 2022/07/26 14:48:04.048009 XDG_DATA_DIRS=/usr/share/gnome-flashback-metacity:/usr/local/share/:/usr/share/:/var/lib/snapd/desktop
    DEBUG: 2022/07/26 14:48:04.048013 XDG_MENU_PREFIX=gnome-
    DEBUG: 2022/07/26 14:48:04.048016 XDG_RUNTIME_DIR=/run/user/1002
    DEBUG: 2022/07/26 14:48:04.048019 XDG_SESSION_CLASS=user
    DEBUG: 2022/07/26 14:48:04.048022 XDG_SESSION_DESKTOP=gnome-flashback-metacity
    DEBUG: 2022/07/26 14:48:04.048025 XDG_SESSION_TYPE=x11
    DEBUG: 2022/07/26 14:48:04.048029 [email protected]=ibus
    DEBUG: 2022/07/26 14:48:04.048032 YC_CSE_KMS_KEY_ID=
    DEBUG: 2022/07/26 14:48:04.048035 YC_SERVICE_ACCOUNT_KEY_FILE=
    DEBUG: 2022/07/26 14:48:04.048038 _=/tmp/wal-g-pg-ubuntu-20.04-amd64
    DEBUG: 2022/07/26 14:48:04.048277 HandleWALFetch(folder, 000000010000C72E0000006E, /tmp/000000010000C72E0000006E, true)
    ERROR: 2022/07/26 14:48:05.136620 NoSuchBucket: 
    	status code: 404, request id: tx000000000000000249906-0062dfd474-281da2b-m9, host id: 
    failed to read object: 'wal_005/000000010000C72E0000006E.lz4' from S3
    github.com/wal-g/wal-g/pkg/storages/s3.(*Folder).ReadObject
    	/home/runner/work/wal-g/wal-g/pkg/storages/s3/folder.go:192
    github.com/wal-g/wal-g/internal.TryDownloadFile
    	/home/runner/work/wal-g/wal-g/internal/fetch_helper.go:61
    github.com/wal-g/wal-g/internal.findDecompressorAndDownload
    	/home/runner/work/wal-g/wal-g/internal/fetch_helper.go:190
    github.com/wal-g/wal-g/internal.DownloadAndDecompressStorageFile
    	/home/runner/work/wal-g/wal-g/internal/fetch_helper.go:171
    github.com/wal-g/wal-g/internal.DownloadFileTo
    	/home/runner/work/wal-g/wal-g/internal/fetch_helper.go:223
    github.com/wal-g/wal-g/internal/databases/postgres.HandleWALFetch
    	/home/runner/work/wal-g/wal-g/internal/databases/postgres/wal_fetch_handler.go:101
    github.com/wal-g/wal-g/cmd/pg.glob..func14
    	/home/runner/work/wal-g/wal-g/cmd/pg/wal_fetch.go:20
    github.com/spf13/cobra.(*Command).execute
    	/home/runner/work/wal-g/wal-g/vendor/github.com/spf13/cobra/command.go:860
    github.com/spf13/cobra.(*Command).ExecuteC
    	/home/runner/work/wal-g/wal-g/vendor/github.com/spf13/cobra/command.go:974
    github.com/spf13/cobra.(*Command).Execute
    	/home/runner/work/wal-g/wal-g/vendor/github.com/spf13/cobra/command.go:902
    github.com/wal-g/wal-g/cmd/pg.Execute
    	/home/runner/work/wal-g/wal-g/cmd/pg/pg.go:45
    main.main
    	/home/runner/work/wal-g/wal-g/main/pg/main.go:8
    runtime.main
    	/opt/hostedtoolcache/go/1.17.10/x64/src/runtime/proc.go:255
    runtime.goexit
    	/opt/hostedtoolcache/go/1.17.10/x64/src/runtime/asm_amd64.s:1581
    
    
    opened by nevlkv 1
  • Allow point-in-time-restores of MySQL servers with encrypted binlogs

    Allow point-in-time-restores of MySQL servers with encrypted binlogs

    Database name

    MySQL

    Pull request description

    Describe what this PR fixes

    mysqlbinlog cannot use MySQL keyring plugins, which makes it impossible for it to read encrypted MySQL binary logs (using either encrypt_binlog=ON in Percona Server 5.7 or binlog_encryption=ON in MySQL 8.0). Though there's a Python script out there that can decrypt binary logs created by MySQL 8.0, no tools are available to decrypt binlogs created by Percona Server 5.7 (or early versions of Percona Server 8.0). Since mysqlbinlog cannot decrypt encrypted binlogs on its own, there's currently no way to use WAL-G to perform a point-in-time restore if the MySQL server is using encrypted binlogs.

    However, there is a workaround: MySQL servers using binlog encryption send the decrypted binlog to their replicas as part of MySQL replication. Likewise mysqlbinlog --raw --read-from-remote-server also fetches the decrypted binlog from an active server (the catch is that it saves the decrypted binlog to the working directory). This PR adds the ability to have WAL-G directly read unencrypted MySQL binlogs from a remote server the same way mysqlbinlog --raw --read-from-remote-server does without saving it to disk (the binary log still gets encrypted by WAL-G before sending it to cloud storage). The new option to read decrypted binary logs from the server is WALG_MYSQL_BINLOG_READ_FROM_REMOTE_SERVER. This lets us perform PITR restores for MySQL servers with encrypted binlogs as normal.
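As a configuration sketch, enabling the behavior described above might look like the following. WALG_MYSQL_BINLOG_READ_FROM_REMOTE_SERVER is the option named in this PR; the boolean value format is an assumption.

```shell
# Assumption: boolean-style value; option name taken from this PR's description.
export WALG_MYSQL_BINLOG_READ_FROM_REMOTE_SERVER=true

# Binlogs are then fetched decrypted over the replication protocol and
# encrypted by WAL-G before being sent to cloud storage.
wal-g binlog-push
```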

    I also did a general documentation update for MySQL and documented how to perform backup and restores of encrypted tables as well as use the new WALG_STREAM_SPLITTER_PARTITIONS feature.

    Please provide steps to test this PR

    Try encrypting some tables with your favorite MySQL keyring plugin (keyring_file is the easiest to setup) and use the documentation in this PR to perform a backup and restore of those encrypted tables. You can try deleting the keyring file after taking a backup to prove that the instructions still work even if the original keyring has been lost.

    I have personally tested this on MySQL 8.0 and Percona Server 5.7.

    opened by jstaf 1
  • DecryptAndDecompressTar: failed to create new reader

    DecryptAndDecompressTar: failed to create new reader

    Hardware and software parameters

    Debian GNU/Linux 8.5 (jessie); PostgreSQL 9.6; wal-g version v1.0 2611d11 2021.05.31_12:37:24 PostgreSQL

    Issue description

    My problem

    When trying to restore a cluster, I get a broken cluster and an error in the log. The file referenced by wal-g is present in the source S3 storage.

    Steps to reproduce

    Run: wal-g backup-fetch /mnt/RAMDB/ base_0000000100000C1E000000D4

    Config and wal-g stdout

    WALE_S3_PREFIX="s3://my-bucket/my-pg-61"
    PGHOST=localhost
    PGPORT=5432
    AWS_ACCESS_KEY_ID="ACCESS_KEY_ID"
    AWS_SECRET_ACCESS_KEY="ACCESS_KEY"
    AWS_ENDPOINT="http://10.10.10.10:7480"
    AWS_S3_FORCE_PATH_STYLE="true"
    WALG_UPLOAD_CONCURRENCY=200
    WALG_UPLOAD_DISK_CONCURRENCY=150
    

    Part of restore log:

    INFO: 2022/07/21 19:13:12.561706 Finished decompression of part_115.tar.lz4
    INFO: 2022/07/21 19:13:12.561732 Finished extraction of part_115.tar.lz4
    ERROR: 2022/07/21 19:13:12.561756 part_115.tar.lz4 DecryptAndDecompressTar: failed to create new reader: object 'my-pg-61/basebackups_005/base_0000000100000C1E000000D4/tar_partitions/part_115.tar.lz4' not found in storage
    
    opened by su-user 0
  • Statically compile wal-g using docker

    Statically compile wal-g using docker

    Database name

    All databases! 🙂

    Pull request description

    This PR allows compiling a statically-linked wal-g binary, compresses the binary using upx, and changes the wal-g/golang Dockerfile to Alpine Linux. All binaries are now smaller (10-12MB instead of >40MB). The static binary has the important advantage over the normal binary in that it can be used on any Linux system without recompilation.

    The static build is not the new default (you can still compile dynamically-linked binaries as before). To compile a statically linked wal-g, set export USE_STATIC_BUILD=1 before compiling:

    # compile a statically linked binary for MySQL
    export USE_LIBSODIUM=1
    export USE_STATIC_BUILD=1
    docker build -t wal-g/golang -f docker/golang/Dockerfile .
    make deps
    make mysql_build
    

    Describe what this PR fixes

    Currently wal-g binaries are not portable between different Linux distributions or versions because they are dynamically linked against libsodium and brotli. By statically linking instead, the resulting binary can be used on any Linux system. Fixes https://github.com/wal-g/wal-g/issues/300.

    Known issues

    WAL-G linked against musl's libc will fail to read users managed by services like FreeIPA/sssd/GCP OSLogin. This is because musl doesn't properly support NSS. Running wal-g with a user created by sssd/OSLogin/etc. will result in the error below:

    $ wal-g backup-list
    ERROR: 2022/07/14 17:16:17.325247 user: unknown userid 1092191234
    

    However, I ultimately feel this error is a non-issue: you're usually going to be using sudo to execute wal-g as a local UNIX user anyway (and local users work just fine).
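
    The failing call is Go's os/user lookup: with cgo disabled or under musl libc, Go falls back to a pure-Go resolver that only parses /etc/passwd and cannot consult NSS modules. A minimal standard-library sketch of the same code path (not wal-g code):

    ```go
    package main

    import (
    	"fmt"
    	"os"
    	"os/user"
    	"strconv"
    )

    func main() {
    	// With cgo enabled, os/user resolves UIDs through libc's NSS stack;
    	// a static (musl or CGO_ENABLED=0) build only reads /etc/passwd,
    	// so users provided by sssd/OSLogin are not found.
    	uid := strconv.Itoa(os.Getuid())
    	u, err := user.LookupId(uid)
    	if err != nil {
    		fmt.Println("lookup failed:", err) // e.g. "user: unknown userid ..."
    		return
    	}
    	fmt.Println("resolved user:", u.Username)
    }
    ```

    Built with CGO_ENABLED=0 and run as an sssd/OSLogin-managed user, this reports the same "unknown userid" error; as a local UNIX user the lookup resolves normally.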

    Credit for this PR mostly goes to @halsbox who came up with the idea to do this here: https://github.com/wal-g/wal-g/issues/300#issuecomment-1166289794

    I've only really tested this for the MySQL build.

    opened by jstaf 4
  • PostgreSQL backup from replica: Invalid page header encountered: blockNo

    PostgreSQL backup from replica: Invalid page header encountered: blockNo

    Database name

    PostgreSQL 14.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-4), 64-bit

    Issue description

    PostgreSQL backup from replica: Invalid page header encountered: blockNo

    Describe your problem

    I run backup-push on a replica and always get many "Invalid page header encountered: blockNo" warnings. backup-push on the primary runs without errors. pg_checksums did not find corruption on the replica server, and pg_amcheck did not find corruption either.

    I restored the backup on a test server, and neither pg_checksums nor pg_amcheck found corruption.

    Please provide steps to reproduce

    /opt/wal-g/wal-g backup-push -v /pg_data/14/main
    INFO: 2022/07/01 15:56:11.253469 LATEST backup is: 'base_0000001D000000240000000F_D_0000001D000000230000007E'
    INFO: 2022/07/01 15:56:11.606852 Reached max delta steps. Doing full backup.
    INFO: 2022/07/01 15:56:11.613534 Calling pg_start_backup()
    INFO: 2022/07/01 15:56:11.704130 Starting a new tar bundle
    INFO: 2022/07/01 15:56:11.704164 Walking ...
    INFO: 2022/07/01 15:56:11.704785 Starting part 1 ...
    WARNING: 2022/07/01 15:56:24.944405 Invalid page header encountered: blockNo 11233, path /pg_data/14/main/base/16387/1082632
    WARNING: 2022/07/01 15:56:24.944429 Invalid page header encountered: blockNo 11234, path /pg_data/14/main/base/16387/1082632
    WARNING: 2022/07/01 15:56:25.950540 Invalid page header encountered: blockNo 2536, path /pg_data/14/main/base/16387/1082641
    WARNING: 2022/07/01 15:56:26.484217 Invalid page header encountered: blockNo 4471, path /pg_data/14/main/base/16387/1082642
    INFO: 2022/07/01 15:56:37.206544 Finished writing part 1.

    Please add config and wal-g stdout/stderr logs for debug purposes

    /opt/wal-g/wal-g backup-push -v --walg-log-level DEVEL /pg_data/14/main

    .walg.json
    {
      "WALG_S3_PREFIX": "s3://wal-g",
      "AWS_ENDPOINT": "https://:9000",
      "AWS_S3_FORCE_PATH_STYLE": "true",
      "AWS_ACCESS_KEY_ID": "",
      "AWS_SECRET_ACCESS_KEY": "",
      "WALG_COMPRESSION_METHOD": "brotli",
      "WALG_DELTA_MAX_STEPS": "5",
      "WALG_TAR_SIZE_THRESHOLD": "2147483648",
      "WALG_UPLOAD_CONCURRENCY": "4",
      "WALG_DOWNLOAD_CONCURRENCY": "4",
      "WALG_VERIFY_PAGE_CHECKSUMS": "true",
      "PGDATA": "/pg_data/14/main",
      "PGHOST": "/var/run/postgresql/"
    }

    If you can, provide logs

    DEBUG: 2022/07/01 16:07:12.793042 Skipped due to unchanged modification time: /pg_data/14/main/base/16387/1069831
    DEBUG: 2022/07/01 16:07:12.793046 /base/16387/1082632
    DEBUG: 2022/07/01 16:07:12.793074 /base/16387/1082632_fsm
    WARNING: 2022/07/01 16:07:12.863516 Invalid page header encountered: blockNo 11377, path /pg_data/14/main/base/16387/1082632
    WARNING: 2022/07/01 16:07:12.863539 Invalid page header encountered: blockNo 11378, path /pg_data/14/main/base/16387/1082632
    WARNING: 2022/07/01 16:07:12.863743 Invalid page header encountered: blockNo 11379, path /pg_data/14/main/base/16387/1082632
    WARNING: 2022/07/01 16:07:12.863757 Invalid page header encountered: blockNo 11380, path /pg_data/14/main/base/16387/1082632
    WARNING: 2022/07/01 16:07:12.863766 Invalid page header encountered: blockNo 11381, path /pg_data/14/main/base/16387/1082632
    WARNING: 2022/07/01 16:07:12.863777 Invalid page header encountered: blockNo 11382, path /pg_data/14/main/base/16387/1082632
    WARNING: 2022/07/01 16:07:12.863871 Invalid page header encountered: blockNo 11383, path /pg_data/14/main/base/16387/1082632
    WARNING: 2022/07/01 16:07:12.863893 Invalid page header encountered: blockNo 11384, path /pg_data/14/main/base/16387/1082632
    WARNING: 2022/07/01 16:07:12.863910 Invalid page header encountered: blockNo 11385, path /pg_data/14/main/base/16387/1082632
    WARNING: 2022/07/01 16:07:12.863918 Invalid page header encountered: blockNo 11386, path /pg_data/14/main/base/16387/1082632
    WARNING: 2022/07/01 16:07:12.864025 Invalid page header encountered: blockNo 11387, path /pg_data/14/main/base/16387/1082632
    WARNING: 2022/07/01 16:07:12.864037 Invalid page header encountered: blockNo 11388, path /pg_data/14/main/base/16387/1082632
    WARNING: 2022/07/01 16:07:12.864044 Invalid page header encountered: blockNo 11389, path /pg_data/14/main/base/16387/1082632
    WARNING: 2022/07/01 16:07:12.864050 Invalid page header encountered: blockNo 11390, path /pg_data/14/main/base/16387/1082632
    WARNING: 2022/07/01 16:07:12.864139 Invalid page header encountered: blockNo 11391, path /pg_data/14/main/base/16387/1082632
    WARNING: 2022/07/01 16:07:12.864158 Invalid page header encountered: blockNo 11392, path /pg_data/14/main/base/16387/1082632
    WARNING: 2022/07/01 16:07:12.864165 Invalid page header encountered: blockNo 11393, path /pg_data/14/main/base/16387/1082632
    WARNING: 2022/07/01 16:07:12.864224 Invalid page header encountered: blockNo 11394, path /pg_data/14/main/base/16387/1082632
    DEBUG: 2022/07/01 16:07:12.864230 verifyPageBlocks: /pg_data/14/main/base/16387/1082632, checked 283 blocks, found 0 corrupt
    DEBUG: 2022/07/01 16:07:12.864275 /base/16387/1082632_vm
    DEBUG: 2022/07/01 16:07:12.864517 /base/16387/1082639
    DEBUG: 2022/07/01 16:07:12.864586 /base/16387/1082641
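
    The "Invalid page header" warnings come from WAL-G's page-level verification as it reads heap files. To illustrate what such a header sanity check involves, here is a simplified sketch using field offsets from the PostgreSQL page layout; this is not WAL-G's actual validation, which also handles all-zero "new" pages and checksums:

    ```go
    package main

    import (
    	"encoding/binary"
    	"fmt"
    )

    // Layout constants from the PostgreSQL page format (storage/bufpage.h).
    const (
    	blockSize  = 8192
    	headerSize = 24 // size of PageHeaderData up to and including pd_prune_xid
    )

    // validPageHeader applies the basic bound checks on pd_lower, pd_upper,
    // and pd_special; a simplified sketch of this kind of validation.
    func validPageHeader(page []byte) bool {
    	if len(page) != blockSize {
    		return false
    	}
    	pdLower := binary.LittleEndian.Uint16(page[12:14])   // pd_lower
    	pdUpper := binary.LittleEndian.Uint16(page[14:16])   // pd_upper
    	pdSpecial := binary.LittleEndian.Uint16(page[16:18]) // pd_special
    	return pdLower >= headerSize && pdLower <= pdUpper &&
    		pdUpper <= pdSpecial && pdSpecial <= blockSize
    }

    func main() {
    	page := make([]byte, blockSize)
    	binary.LittleEndian.PutUint16(page[12:14], headerSize) // empty page: lower == header end
    	binary.LittleEndian.PutUint16(page[14:16], blockSize)  // upper == special == BLCKSZ
    	binary.LittleEndian.PutUint16(page[16:18], blockSize)
    	fmt.Println(validPageHeader(page))                    // true
    	fmt.Println(validPageHeader(make([]byte, blockSize))) // false: zeroed header fails the bounds
    }
    ```

    One plausible explanation for the symptoms above is that a page read concurrently with a write on the replica can transiently fail such checks, which would be consistent with pg_checksums and pg_amcheck later finding nothing wrong on disk.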

    opened by AlexKir 3
Releases (v2.0.0)
Support MySQL or MariaDB for gopsql/psql and gopsql/db

mysql Support MySQL or MariaDB for github.com/gopsql/psql. You can make MySQL SELECT, INSERT, UPDATE, DELETE statements with this package. NOTE: Pleas

null 0 Dec 9, 2021
Redis-shake is a tool for synchronizing data between two Redis databases, meeting users' flexible synchronization and migration needs.

RedisShake is mainly used to synchronize data from one Redis to another. Thanks to Douyu's WSD team for the support. Chinese documentation | English tutorial | Chinese usage guide

Alibaba 2.5k Aug 9, 2022
Devcloud-go provides a SQL driver for MySQL named devspore driver and a Redis client named devspore client

Devcloud-go Devcloud-go provides a SQL driver for MySQL named devspore driver and a Redis client named devspore client; you can use them w

HUAWEI CLOUD 11 Jun 9, 2022
A tool to run queries in defined frequency and expose the count as prometheus metrics. Supports MongoDB and SQL

query2metric A tool to run db queries in defined frequency and expose the count as prometheus metrics. Why ? Product metrics play an important role in

S Santhosh Nagaraj 19 Jul 1, 2022
Bifrost ---- production-oriented heterogeneous middleware that syncs MySQL to services such as Redis, MongoDB, ClickHouse, and MySQL

Bifrost ---- production-oriented heterogeneous middleware that syncs MySQL to services such as Redis and ClickHouse. English | The Bifrost of Marvel can carry Thor to Asgard and to Earth, and this Bifrost can sync the data in your MySQL, in full and in real time, to: Redis MongoDB Cl

brokerCAP 1.3k Aug 11, 2022
Go-Postgresql-Query-Builder - A query builder for Postgresql in Go

Postgresql Query Builder for Go This query builder aims to make complex queries

Samuel Banks 4 May 24, 2022
Interactive client for PostgreSQL and MySQL

dblab Interactive client for PostgreSQL and MySQL. Overview dblab is a fast and lightweight interactive terminal based UI application for PostgreSQL a

Daniel Omar Vergara Pérez 223 Aug 10, 2022
Interactive terminal user interface and CLI for database connections. MySQL, PostgreSQL. More to come.

dbui dbui is the terminal user interface and CLI for database connections. It provides features like: connect to multiple data sources and instance

Kanan Rahimov 97 Aug 9, 2022
Dumpling is a fast, easy-to-use tool written by Go for dumping data from the database(MySQL, TiDB...) to local/cloud(S3, GCP...) in multifarious formats(SQL, CSV...).

Dumpling Dumpling is a tool and a Go library for creating SQL dump from a MySQL-compatible database. It is intended to replace mysqldump and mydump

PingCAP 261 Jul 14, 2022
mysql to mysql: lightweight multi-threaded table-level data synchronization

goMysqlSync: golang mysql-to-mysql lightweight multi-threaded table-level data sync. Test run: set the current binlog position and start running: go run main.go -position mysql-bin.000001 1 1619431429. Query the current binlog position (parameter n is a number of seconds)

null 14 Aug 17, 2022
🐳 A popular SQL audit platform for MySQL

A popular SQL audit platform for MySQL

Henry Yee 6.9k Aug 12, 2022
Query redis with SQL

reqlite reqlite makes it possible to query data in Redis with SQL. Queries are executed client-side with SQLite (not on the redis server). This projec

Augmentable 44 Aug 2, 2022
Go library that stores data in Redis with SQL-like schema

Go library that stores data in Redis with SQL-like schema. The goal of this library is we can store data in Redis with table form.

kaharman 2 Mar 14, 2022
Parses a file and associate SQL queries to a map. Useful for separating SQL from code logic

goyesql This package is based on nleof/goyesql but is not compatible with it any more. This package introduces support for arbitrary tag types and cha

null 0 Oct 20, 2021
write APIs using direct SQL queries with no hassle, let's rethink about SQL

SQLer SQL-er is a tiny portable server that enables you to write APIs using SQL queries to be executed when anyone hits them; it also enables you to define val

Mohammed Al Ashaal 2k Aug 10, 2022
Go-sql-reader - Go utility to read the externalised sql with predefined tags

go-sql-reader go utility to read the externalised sql with predefined tags Usage

null 0 Jan 25, 2022
Dugopg - PostgreSQL tool For Golang

⚡️ DuGoPG Installation go get -u github.com/durudex/dugopg Example import (

Durudex 5 May 9, 2022
Experimental implementation of a SQLite backend for go-mysql-server

go-mysql-sqlite-server This is an experimental implementation of a SQLite backend for go-mysql-server from DoltHub. The go-mysql-server is a "frontend

MergeStat 6 Jul 11, 2022