lumberjack is a log rolling package for Go



Lumberjack is a Go package for writing logs to rolling files.

Package lumberjack provides a rolling logger.

Note that this is v2.0 of lumberjack, and should be imported thusly:

import ""

The package name remains simply lumberjack, and the code resides under the v2.0 branch.

Lumberjack is intended to be one part of a logging infrastructure. It is not an all-in-one solution, but instead is a pluggable component at the bottom of the logging stack that simply controls the files to which logs are written.

Lumberjack plays well with any logging package that can write to an io.Writer, including the standard library's log package.

Lumberjack assumes that only one process is writing to the output files. Using the same lumberjack configuration from multiple processes on the same machine will result in improper behavior.


To use lumberjack with the standard library's log package, just pass it into the SetOutput function when your application starts.


    log.SetOutput(&lumberjack.Logger{
        Filename:   "/var/log/myapp/foo.log",
        MaxSize:    500, // megabytes
        MaxBackups: 3,
        MaxAge:     28,   // days
        Compress:   true, // disabled by default
    })

type Logger

type Logger struct {
    // Filename is the file to write logs to.  Backup log files will be retained
    // in the same directory.  It uses <processname>-lumberjack.log in
    // os.TempDir() if empty.
    Filename string `json:"filename" yaml:"filename"`

    // MaxSize is the maximum size in megabytes of the log file before it gets
    // rotated. It defaults to 100 megabytes.
    MaxSize int `json:"maxsize" yaml:"maxsize"`

    // MaxAge is the maximum number of days to retain old log files based on the
    // timestamp encoded in their filename.  Note that a day is defined as 24
    // hours and may not exactly correspond to calendar days due to daylight
    // savings, leap seconds, etc. The default is not to remove old log files
    // based on age.
    MaxAge int `json:"maxage" yaml:"maxage"`

    // MaxBackups is the maximum number of old log files to retain.  The default
    // is to retain all old log files (though MaxAge may still cause them to get
    // deleted.)
    MaxBackups int `json:"maxbackups" yaml:"maxbackups"`

    // LocalTime determines if the time used for formatting the timestamps in
    // backup files is the computer's local time.  The default is to use UTC
    // time.
    LocalTime bool `json:"localtime" yaml:"localtime"`

    // Compress determines if the rotated log files should be compressed
    // using gzip. The default is not to perform compression.
    Compress bool `json:"compress" yaml:"compress"`
    // contains filtered or unexported fields
}

Logger is an io.WriteCloser that writes to the specified filename.

Logger opens or creates the logfile on first Write. If the file exists and is less than MaxSize megabytes, lumberjack will open and append to that file. If the file exists and its size is >= MaxSize megabytes, the file is renamed by putting the current time in a timestamp in the name immediately before the file's extension (or the end of the filename if there's no extension). A new log file is then created using the original filename.

Whenever a write would cause the current log file to exceed MaxSize megabytes, the current file is closed, renamed, and a new log file is created with the original name. Thus, the filename you give Logger is always the "current" log file.

Backups use the log file name given to Logger, in the form name-timestamp.ext, where name is the filename without the extension, timestamp is the time at which the log was rotated formatted with the time.Time format of 2006-01-02T15-04-05.000, and the extension is the original extension. For example, if your Logger.Filename is /var/log/foo/server.log, a backup created at 6:30pm on Nov 4 2016 would use the filename /var/log/foo/server-2016-11-04T18-30-00.000.log.

Cleaning Up Old Log Files

Whenever a new logfile gets created, old log files may be deleted. The most recent files according to the encoded timestamp will be retained, up to a number equal to MaxBackups (or all of them if MaxBackups is 0). Any files with an encoded timestamp older than MaxAge days are deleted, regardless of MaxBackups. Note that the time encoded in the timestamp is the rotation time, which may differ from the last time that file was written to.

If MaxBackups and MaxAge are both 0, no old log files will be deleted.

func (*Logger) Close

func (l *Logger) Close() error

Close implements io.Closer, and closes the current logfile.

func (*Logger) Rotate

func (l *Logger) Rotate() error

Rotate causes Logger to close the existing log file and immediately create a new one. This is a helper function for applications that want to initiate rotations outside of the normal rotation rules, such as in response to SIGHUP. After rotating, this initiates a cleanup of old log files according to the normal rules.


Example of how to rotate in response to SIGHUP.


l := &lumberjack.Logger{}
log.SetOutput(l)
c := make(chan os.Signal, 1)
signal.Notify(c, syscall.SIGHUP)

go func() {
    for {
        <-c
        l.Rotate()
    }
}()

func (*Logger) Write

func (l *Logger) Write(p []byte) (n int, err error)

Write implements io.Writer. If a write would cause the log file to be larger than MaxSize, the file is closed, renamed to include a timestamp of the current time, and a new log file is created using the original log file name. If the length of the write is greater than MaxSize, an error is returned.
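The decision Write makes can be sketched as a small pure function. shouldRotate is an illustrative stand-in, not lumberjack's internal logic:

```go
package main

import (
	"errors"
	"fmt"
)

// shouldRotate reports whether a pending write of writeLen bytes onto a
// file of currentSize bytes would push it past max bytes, triggering
// rotation. A single write larger than max is rejected outright, matching
// the documented Write behavior.
func shouldRotate(currentSize, writeLen, max int64) (bool, error) {
	if writeLen > max {
		return false, errors.New("write length exceeds maximum file size")
	}
	return currentSize+writeLen > max, nil
}

func main() {
	rotate, _ := shouldRotate(90, 20, 100)
	fmt.Println(rotate) // true: 90+20 would exceed 100
}
```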

Generated by godoc2md

  • Adds gzip compression to backup log files


    This PR adds gzip compression to backup log files via the CompressBackups config option. Tests added and passed. Please let me know if I missed anything. Note: We're using this library in production (including gzipped backups) with max size as 100MB. Compression doesn't have any noticeable performance impact aside from saving a lot of drive space.

    opened by donovansolms 24
  • Add support for log file compression


    This change adds support for compressing rotated log files.

    Several other clean ups and specifically test improvements are included as separate commits.

    Fixes issue #13

    opened by 4a6f656c 11
  • v3 Work Thread


    I think Lumberjack needs a v3. This package was written a long time ago, and while it's functional, there were decisions I made that were poor in hindsight. This thread will be a list of what I think needs to be done.

    1. Switch from a struct to a new function to create a new logger. This has several advantages - we can sanity check the log file name, and return an error if you've given us an invalid name. It lets us do some logic before the first write, like setting up the mill goroutine. And it means there's no way to change things after creation that really shouldn't be changed, like changing the maxsize etc.
    2. Switch from a size in megabytes to a size in bytes (many people have asked for this)
    3. Stop using yaml and toml struct tags.... if people want to write a config they can do that in their own code and we don't need to import those packages for no reason if people don't want to use them
    4. We can change the size calculation so that we just sum all the sizes of the current file and backup files, and then it won't matter if they're not all the same size, and then if people want to rotate on a time-based schedule, they can do that without worrying they'll accidentally blow up their disk.
    5. I think we can clean up the code some, so it's a bit easier to reason about.
    opened by natefinch 9
  • RFC: Option to not move rotated files


    I was wondering if an option to not move rotated files would be a patch you'd consider. I have external tooling that moves and compresses the file (thereby avoiding #124), and I'd rather not have errors show up in stdout. (lumberjack.go:223)

    Looks like pretty straightforward work!

    opened by ahshah 7
  • Log Rotator not working when passing file as variable


    Hi Team,

I am passing the filename as a variable into the function, but it seems the log rotator is not working. Here is a snippet of my code:

    logfile := logdir + "/monitoring.log"
    log.SetOutput(&lumberjack.Logger{
        Filename:   logfile,
        MaxSize:    5, // megabytes
        MaxBackups: 3,
    })

    opened by ankurnigam 7
  • Write a file header on each new file created


    I need a file header to be written each time a new file is created. I don't see a way to do this, especially on rotate. Will add a PR with a suggested solution.

    opened by DnlLrssn 6
  • Rotation based on day



    Would you be interested in adding the possibility to rotate the log file based on the day? If I am not mistaken, rotation is only triggered by size. The idea would be to also trigger it by time: rotate every day, for example.

    Thanks a lot.

    opened by fallais 6
  • Strange behavior in truncate


    Well, I have been using it for quite some time, but I have a new requirement: generating a file containing a few comma-separated values. I truncate it every 15 minutes using cron. This worked for a few days, but lately I am seeing a strange issue where the file returns to its old size once the app starts writing to it again after truncation. So if the file was 700MB at truncation time, cron truncates it, but when the app starts writing again a few seconds later it is back at 700MB, with head -n1 file.log failing to show any output. vim doesn't work on the file either; only tail -f file.log works, confirming that writes land at the end, but I'm not sure what is wrong with the start/head of the file, since it shows no output.

    I am using it with Zap Logger

    writer := zapcore.AddSync(&lumberjack.Logger{
        Filename: "file.log",
        MaxSize:  10000, // in MB
        Compress: false, // gzip
        MaxAge:   28,    // days
    })
    core := zapcore.NewCore(encoder, writer, level)
    cores = append(cores, core)

    I even tried to truncate file using the app itself by creating a custom url like this and calling it from with cron instead of bash command.

    r.GET("/logs/truncate", func(ctx *fasthttp.RequestCtx) {
        file, err := os.OpenFile("file.log", os.O_CREATE|os.O_WRONLY, 0666)
        if err == nil {
            _ = file.Truncate(0)
            _ = file.Close()
            _, _ = fmt.Fprint(ctx, "success")
        } else {
            _, _ = fmt.Fprint(ctx, "error")
        }
    })

    But this also didn't work; the file size kept increasing sometimes. I have this code on 3 servers, and the issue has been happening randomly on 2 of them. Though it is mentioned in #25 and in the file itself that multi-process writes will result in improper behavior, what do you think could be the issue here? Any workaround? Also, is there a way to rotate the file based on time instead of size?

    opened by thevirus20 5
  • Log file rotation is failing with exception


    We are using lumberjack version 2. Log file rotation is failing with a panic: runtime error: slice bounds out of range.

    Stack trace of exception.

    goroutine 1 [running]:, 0x0, 0x0) D:/ADMWorkspace/Cloud Workspace/SDL_operations/src/Godeps/_workspace/src/ +0x692, 0x0, 0x0) D:/ADMWorkspace/Cloud Workspace/SDL_operations/src/Godeps/_workspace/src/ +0xbc, 0xc082036d00, 0x7a, 0xca, 0x0, 0x0, 0x0) D:/ADMWorkspace/Cloud Workspace/SDL_operations/src/Godeps/_workspace/src/ +0x405 bytes.(*Buffer).WriteTo(0xc082030460, 0xc94520, 0xc0820122a0, 0x0, 0x0, 0x0) c:/go/src/bytes/buffer.go:206 +0xcf io.copyBuffer(0xc94520, 0xc0820122a0, 0xc945a8, 0xc082030460, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)

    Please help us to resolve this issue

    opened by ShanthkumarS079 5
  • panic: runtime error: slice bounds out of range in lumberjack.go:269



    After about 20-ish hours of working my application crashes with the following panic() in lumberjack.

    Could there be some kind of out of order condition that could cause files[] to panic?

    Using version:

                        "ImportPath": "",
                        "Comment": "v1.0-2-ga6f35ba",
                        "Rev": "a6f35bab25c9df007f78aa90c441922062451979"

    panic: runtime error: slice bounds out of range

    goroutine 61 [running]:*Logger).cleanup(0xc208082d20, 0x0, 0x0)
        /obfuscated/Godeps/_workspace/src/ +0x513*Logger).rotate(0xc208082d20, 0x0, 0x0)
        /obfuscated/Godeps/_workspace/src/ +0xb0*Logger).Write(0xc208082d20, 0xc208e45900, 0x1cf, 0x247, 0x0, 0x0, 0x0)
        /obfuscated/Godeps/_workspace/src/ +0x316
    bytes.(*Buffer).WriteTo(0xc20b172e00, 0x7f5f3f4a57e0, 0xc208082d20, 0x0, 0x0, 0x0)
        /usr/src/go/src/bytes/buffer.go:202 +0xda
    io.Copy(0x7f5f3f4a57e0, 0xc208082d20, 0x7f5f3f4a5790, 0xc20b172e00, 0x0, 0x0, 0x0)
        /usr/src/go/src/io/io.go:354 +0xb2*Entry).log(0xc2097d4140, 0x4, 0xc209018030, 0x2c)
        /obfuscated/Godeps/_workspace/src/ +0x4d1*Entry).Info(0xc2097d4140, 0xc217d53ba8, 0x1, 0x1)
        /obfuscated/Godeps/_workspace/src/ +0x7f

    This looks like this area of code for me:

    if l.MaxBackups > 0 {
        deletes = files[l.MaxBackups:]
        files = files[:l.MaxBackups]
    }
    opened by cep21 5
  • lumberjack_test.go:377: exp: 1 (int), got: 2 (int)


    I'm not sure if this issue is related but my applications are not logging anymore since I updated to go 1.4rc1.

    If it's a bug in Go 1.4, you may want to hurry since "[it's] scheduled to be released on 1 Dec 2014".

    go version go1.4rc1 windows/amd64

    go get
    cd %GOPATH%\src\\natefinch\lumberjack
    d:\Dev\go\gopath\src\\natefinch\lumberjack>go test
    lumberjack_test.go:377: exp: 1 (int), got: 2 (int)
    --- FAIL: TestMaxAge (0.06s)
    exit status 1
    FAIL 0.212s
    opened by bbigras 5
  • readme question


    It is not clear from the README whether setting MaxSize to non-zero but MaxBackups and MaxAge to 0 will result in rotation.

    With that configuration will it just delete the old file (that just hit the maxSize) and start with a new empty file?

    Can this be documented in the README please?

    opened by nikhilsaraf 1
  • Multiple enhancements


    • upgrade to go 1.17
    • add the ability to set unlimited maxSize with -1
    • carrying

    Signed-off-by: Fahed DORGAA [email protected]

    opened by fahedouch 3
  • Make compressed log rotation atomic


    If another process is watching for *.gz files then it's possible to begin reading the archive before it has been completely created, resulting in corruption if the other process is copying the archive to another location (for example: archival to s3).

    To resolve this, we can use a different suffix when writing the file so that other programs do not read it while it's being created. Once the archive has been completely created, we atomically rename it to the desired file name with the *.gz extension, ensuring external programs only ever see the finished archive.

    Signed-off-by: Chance Zibolski [email protected]

    opened by chancez 5
  • fix #152


    This pull request adds the possibility to use different active/backup directories, typically useful when running on hardware with a limited number of write operations to flash memory.

    opened by Sauci 1
  • v2.1(May 31, 2017)

    Thanks to the hard work of the devs on Juju, lumberjack now supports compression of backup files.

    The one very minor change in behavior is that if you call Rotate(), the rotation will occur in the background, where previously it would rotate on that thread. But since the rotation process now includes compression of log files, we didn't want all that time to be spent inside the mutex.

    Source code(tar.gz)
    Source code(zip)
  • v2.0(May 31, 2017)

    This release is the same version that you've always gotten from go get

    I'm making this release mostly to differentiate between it and the new release which will include compression of backups.

    Source code(tar.gz)
    Source code(zip)
  • v1.0(Jun 18, 2014)

    Lumberjack is now production-ready!

    From now on, only backwards-compatible changes will be made in v1.x. If I need to change the API in any non-backwards compatible way, it will be done on a separate branch so that existing clients can continue to use 1.0 without breakage.

    Happy logging!

    Source code(tar.gz)
    Source code(zip)
Nate Finch
Author of gorram, lumberjack, pie, gnorm, mage, and others.
Rolling writer is an IO util for auto rolling write in go.

RollingWriter RollingWriter is an auto rotate io.Writer implementation. It can works well with logger. Awesome Go popular log helper New Version v2.0

Arthur Lee 239 Aug 2, 2022
Log-server - Implement log server for gwaylib/log/adapter/rmsq

Implement server of Base on Build . cd cmd/web go build Deploy Install supd(Debian sy

null 0 Jan 3, 2022
An golang log lib, supports tracking and level, wrap by standard log lib

Logex An golang log lib, supports tracing and level, wrap by standard log lib How To Get shell go get source code import "

chzyer 39 Apr 15, 2022
Nginx-Log-Analyzer is a lightweight (simplistic) log analyzer for Nginx.

Nginx-Log-Analyzer is a lightweight (simplistic) log analyzer, used to analyze Nginx access logs for myself.

Mao Mao 22 Jul 23, 2022
Distributed-Log-Service - Distributed Log Service With Golang

Distributed Log Service This project is essentially a result of my attempt to un

Hamza Yusuff 6 Jun 1, 2022
Log-analyzer - Log analyzer with golang

Log Analyzer what do we have here? Objective Installation and Running Applicatio

Lawrence Agbani 0 Jan 27, 2022
a golang log lib supports level and multi handlers

go-log a golang log lib supports level and multi handlers Use import "" //log with different level log.Info("hello wo

siddontang 31 Jun 15, 2022
Structured log interface

Structured log interface Package log provides the separation of the logging interface from its implementation and decouples the logger backend from yo 25 Jul 29, 2022
CoLog is a prefix-based leveled execution log for Go

What's CoLog? CoLog is a prefix-based leveled execution log for Go. It's heavily inspired by Logrus and aims to offer similar features by parsing the

null 156 Aug 2, 2022
OpenTelemetry log collection library

opentelemetry-log-collection Status This project was originally developed by observIQ under the name Stanza. It has been contributed to the OpenTeleme

OpenTelemetry - CNCF 88 Jul 11, 2022
A simple web service for storing text log files

logpaste A minimalist web service for uploading and sharing log files. Run locally go run main.go Run in local Docker container The Docker container a

Michael Lynch 234 Aug 9, 2022
exo: a process manager & log viewer for dev

exo: a process manager & log viewer for dev exo- prefix – external; from outside. Features Procfile compatible process manager.

Deref 327 Aug 1, 2022
Write log entries, get X-Ray traces.

logtoxray Write to logs, get X-Ray traces. No distributed tracing instrumenation library required. ?? ?? ?? THIS PROJECT IS A WORK-IN-PROGRESS PROTOTY

JBD 27 Apr 24, 2022
Binalyze logger is an easily customizable wrapper for logrus with log rotation

logger logger is an easily customizable wrapper for logrus with log rotation Usage There is only one function to initialize logger. logger.Init() When

Binalyze 26 Nov 18, 2021
Log-structured virtual disk in Ceph

lsd_ceph Log-structured virtual disk in Ceph 1. Vision and Goals of the Project Implement the basic librbd API to work with the research block device

null 3 Dec 13, 2021
Multi-level logger based on go std log

mlog the mlog is multi-level logger based on go std log. It is: Simple Easy to use NOTHING ELSE package main import ( log "

null 0 May 18, 2022
Simple log parser written in Golang

Simple log parser written in Golang

Matteo Baiguini 0 Oct 31, 2021
Nginx JSON Log Analyze

Nginx-JSON-Log-Analyze Nginx Configuration log_format json_log escape=json '{"time_iso8601":"$time_iso8601",' '"remote

Mao Mao 24 Jul 31, 2022
A Log merging tool for linux.

logmerge A Log merging tool for linux. How to build make build How to run --files or -f will allow you to specify multiple log files (comma-seperated)

Paul Theunis 0 Nov 4, 2021