An n:m message multiplexer written in Go



What is Gollum?

Gollum is an n:m multiplexer that gathers messages from different sources and broadcasts them to a set of destinations.

Gollum originally started as a tool to MUL-tiplex LOG-files (read it backwards to get the name). It quickly evolved to a one-way router for all kinds of messages, not limited to just logs. Gollum is written in Go to make it scalable and easy to extend without the need to use a scripting language.

Gollum Documentation

How-to-use, installation instructions, getting started guides, and in-depth plugin documentation:


Gollum is tested and packaged to run on FreeBSD, Debian, Ubuntu, Windows, and macOS. Download Gollum and get started now.

Get Gollum Support and Help

Gitter: If you can't find your answer in the documentation or have other questions, you can also reach us on Gitter.

Reporting Issues: To report an issue with Gollum, please create an issue here on GitHub:


This project is released under the terms of the Apache 2.0 license.

  • Cloudwatch Logs producer

    I'd like to add a CloudWatch Logs producer to gollum. AWS puts some limitations on this service, which are captured as constants in this plugin. I have a couple of problems here.

    • Which type of base producer should I use? AWS limits uploads to 5 per second.
    • I'd like to use templates for outgoing messages. Someone might want to send only the syslog message itself and ignore fields like priority and facility. I need to parse messages into a struct to be able to use templates. Can any formatter do that?

    Can you please give some advice or guidance?

    new feature 
    opened by luqasz 29
  • Feature/grok support

    This PR adds grok support to gollum.

    Grok allows parsing unstructured log data using named regex patterns. Other log multiplexers, like Logstash or Telegraf, also offer grok support. Therefore, adding this formatter would ease migration to gollum.

    new feature 
    opened by mre 11
  • Enhancement/switch elasticsearch package

    • Updated dependencies
      • Removed package
      • Added package
    • New Elasticsearch producer implementation (#87)
      • Updated config format
      • Added SetGzip feature
      • Removed now unsupported Connections feature
      • Changed to batch producer implementation with bulk requests for messages
    • Fixed shutdown panic from issue #146
    enhancement refactoring breaking change 
    opened by msiebeneicher 11
  • #89 - Logrus evaluation / refactoring

    This PR addresses #89 by replacing tlog with the Logrus logging framework.

    Not completely ready for merging yet, but the branch is here for comments & evaluation. Startup/shutdown handling, some TODOs and overall testing are still pending.

    opened by ppar 10
  • Reflection based configure

    Implementation of #138. This PR changes the configuration process to a completely reflection-based one. Plugins are configured in a two-step, recursive process:

    1. Scan all struct members for struct tags and configure them if present
    2. If the type implements "Configure", call it

    This dramatically reduces the complexity of Configure methods down to the point where they can be removed completely.

    enhancement refactoring breaking change 
    opened by arnecls 9
  • Documentation generator & directory reordering

    This PR

    • Introduces an almost completely rewritten RST documentation generator for plugins
    • Reorganizes files under docs/ to clearly distinguish between autogenerated (src/gen) and handwritten (src/) content in order to facilitate an eventual rewrite of the docs
    • Groups plugins' config parameters based on the Go module (BufferedProducer/SimpleProducer/...) each parameter was inherited from. Although this implementation detail isn't directly relevant to the end user, many parameters are thus inherited and repeated across plugins. Presenting them in groups makes it easier for the reader to parse the documentation.
    • Removes a stale Makefile and make.bat for Sphinx that weren't being used
    • Adds a Makefile to manage generating docs
    • Shotguns a gollumdoc:embed_type struct tag to all plugins
    • Includes updated .rst files generated from current plugin code
    • Adjusts some documentation comments in plugins to clear up syntax

    Docs generated from this branch are live at

    enhancement documentation 
    opened by ppar 9
  • multi-line messages

    Can an existing version work with multi-line messages? For example, a Java stack trace. I'm trying to use format.GrokToJSON:


    "FileIn":
      Type: "consumer.File"
      File: /tmp/test_file
      DefaultOffset: newest
      OffsetFile: ""
      ObserveMode: poll
      PollingDelay: 1000
      Streams: "file_test"
      Modulators:
        - format.GrokToJSON:
            Patterns:
              - ^(?P<timestamp>\d{2}.\d{2}.\d{4}\s+[\d:.+]+)\s+(?:%{LOGLEVEL:level})\s+(?P<trace>(?m).*$)

    "rateLimiter":
      Type: "router.Broadcast"
      Stream: "file_test"

    "fileOut":
      Type: "producer.File"
      Streams: "file_test"
      File: /tmp/gollum.log
      Batch:
        MaxCount: 128
        FlushCount: 64
        TimeoutSec: 60
        FlushTimeoutSec: 3

    But only the first line is processed.

    opened by abelokonev 8
  • Fix for #151 config validation on startup

    Fix for #151 and #172, also fixing parts of #154. Configuration error handling and logging were broken in several ways. The following fixes are included:

    • config.Validate is now called again during startup (related to #154)
    • "Streams" is now mandatory for consumers and producers
    • "Stream" is now mandatory for routers
    • fixed missing log messages during shutdown due to a missing purge
    • fixed coloring of log messages during startup due to setting the formatter too late
    • having no consumers/producers now leads to shutdown
    • proper use of logrus.WithError within the main package
    opened by arnecls 8
  • Parallel enqueue

    Allow consumers to enqueue messages using multiple goroutines. This should give a performance boost for consumers with many or expensive modulators attached.

    Also part of this PR:

    • Removal of the "Instances" plugin parameter (see below)
    • A fix for a race condition with stream-based metrics

    The "Instances" parameter was removed because it was considered dangerous (e.g. when using multiple Kafka consumers without a consumer group), and the only reason to use it becomes obsolete with this change.

    new feature development breaking change performance 
    opened by arnecls 8
  • Changed Docker Images

    • Moved the Dockerfiles to a dedicated directory and renamed them to use a .docker extension; this ensures proper syntax highlighting in editors.
    • Added a Dockerfile to build a production-ready image:
      • The base image comes along with some useful scripts.
      • Support for Docker health checks.
      • Support for environment variable replacement upon start-up.
      • Support for required environment variables.
      • Defanged operating system.

    It seems as if you are using Docker Hub to build the images; those builds would obviously need to be adjusted to reflect the path changes and to build the new Debian-based image as well.

    enhancement pending-submitter-response 
    opened by Fleshgrinder 7
  • 0.5.0 ERROR Failed to read config error=yaml: unmarshal errors:

    The Homebrew test block fails for 0.5.0, but passed for 0.4.5

    Issue description

    The test block should pass.


    I'm trying to get 0.5.0 green in Homebrew's CI and merged. See

    Possible Solution

    Probably we just need to update the test block with some tweak to the config file.

    Steps to Reproduce (for bugs)

    bash-3.2$ cat test.conf 
    - "consumer.Profiler":
        Enable: true
        Runs: 100000
        Batches: 100
        Characters: "abcdefghijklmnopqrstuvwxyz .,!;:-_"
        Message: "%256s"
        Stream: "profile"
    bash-3.2$ /usr/local/Cellar/gollum/0.5.0/bin/gollum -tc /tmp/gollum-test-20171221-61659-1kkdhlq/test.conf                                                                                                  
    [2017-12-21 13:54:13 PST] ERROR Failed to read config error=yaml: unmarshal errors:
      line 1: cannot unmarshal !!seq into map[string]tcontainer.MarshalMap
    1. brew pull
    2. brew install -dvs gollum
    3. brew test -vd gollum
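The `!!seq` error indicates the 0.5.x loader expects a map of plugin names at the top level rather than a list. A sketch of the map form, assuming the plugin name `Profiler` and using the `Streams` key that 0.5.x made mandatory for consumers (both specifics are assumptions):

```yaml
Profiler:
  Type: consumer.Profiler
  Enable: true
  Runs: 100000
  Batches: 100
  Characters: "abcdefghijklmnopqrstuvwxyz .,!;:-_"
  Message: "%256s"
  Streams: "profile"
```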

    Your Environment

    Used versions

    • gollum version: 0.5.0
    • go version: 1.9.2
    • Operating System and version: macOS 10.11, 10.12, 10.13


    The test block runs

      test do
        (testpath/"test.conf").write <<~EOS
          - "consumer.Profiler":
              Enable: true
              Runs: 100000
              Batches: 100
              Characters: "abcdefghijklmnopqrstuvwxyz .,!;:-_"
              Message: "%256s"
              Stream: "profile"
        EOS
        assert_match "parsed as ok", shell_output("#{bin}/gollum -tc #{testpath}/test.conf")
      end
    opened by ilovezfs 7
  • deps(docker): update base images

    Signed-off-by: Rui Chen [email protected]

    The purpose of this pull request

    Update base images to pick up security patches.


    • [x] make test executed successfully
    • [ ] unit test provided
    • [ ] integration test provided
    • [ ] docs updated
    opened by chenrui333 0
  • Compatibility with future Go versions

    Just a heads up that Go 1.17 (and the just-released Go 1.16, unless GO111MODULE is changed first) won't support builds that aren't in module-aware mode.

    The master build should work, but the last stable tag (v0.5.4) does not. I'm not sure what the status of this project is. Are there any plans for a new release?

    opened by Bo98 2
  • GCP stackdriver producer

    The purpose of this pull request

    This adds a producer for Google Stackdriver logs.

    Config to verify

    Messages passed to stdin have to be formatted as severity: payload, e.g. Warning: {"text":"Hello World"}. JSON parsing is optional, i.e. if you pass the "message" payload to Stackdriver, it will be sent as JSON if it begins with a { and as text otherwise.

      Type: consumer.Console
      Streams: "stdin"
      - format.Hostname:
          ApplyTo: "host"
          Separator: ""
      - format.Grok:
            - "^(?P<severity>[^:]+?): (?P<message>.*?)$"
      - format.JSON:
          Source: "message"
          Target: "json"
      Type: producer.Console
      Streams: "stdin"
      - format.ToJSON: {}
      Type: producer.Stackdriver
      Streams: "stdin"
      ProjectID: "trv-hs-kubernetes-edge"
      Payload: "json" # can also use message
      Severity: "severity"
      DefaultSeverity: "Info"
        "stdin": "application-logs"
        - "host"


    • [x] make test executed successfully
    • [ ] unit test provided
    • [ ] integration test provided
    • [x] docs updated
    new feature 
    opened by arnecls 0
  • 0.6.0(Jun 7, 2021)

    Gollum 0.6.0 contains breaking changes over version 0.5.x. Please read the release notes carefully.

    Gollum 0.6.0 switches dependency management from go-dep to Go modules. Because of this, it is recommended to use Go 1.11 or later for development. Go 1.10 and 1.9 are still supported; support for Go 1.8 or older has been dropped.

    No Linux/ARMv6 binary has been included in this build due to a build error in the sarama library.

    New with 0.6.0

    • Added a new flag "-mt" to choose the metrics provider (currently only prometheus).
    • Consumer.File setting "Files" now supports glob patterns.
    • Consumer.Syslog now allows non-standard protocol types (see issue #234)
    • Message metadata can now store arbitrary data
    • When the number of CPUs is not set, gollum will try to use cgroup limits
    • format.Cast for changing metadata field types
    • format.Override to set static field values

    Breaking changes with 0.6.0

    • Format.SplitPick default delimiter is now ","
    • Multiple formatters have been renamed to support the new metadata model
    • Metrics are now collected using go-metrics. This allows e.g. prometheus output (default). Old-style metrics have been removed and many metrics names have changed.
    • Consumer.File setting "File" has been renamed to "Files"
    • Consumer.File setting "OffsetFile" changed to "OffsetPath" to support multiple offset files per consumer.
    • Consumer.File setting "PollingDelay" has been renamed to "PollingDelayMs".
    • Metadata type has changed from map[string][]byte to tgo.MarshalMap.
    • Deserializing messages written by v0.5.x will cause the metadata of those messages to be discarded.
    • Removed support for go 1.8 in order to allow sync.Map
    • The functions Message.ResizePayload and .ExtendPayload have been removed in favor of Go's built-in slice functions.
  • v0.5.4(Apr 17, 2019)

    This is a critical patch release.

    It fixes various problems with producer.Spooling that prevented it from working at all. The issues were most likely introduced during the transition from 0.4.x to 0.5.x.

    This version has been built with go 1.12.3

    Fixed with 0.5.4

    • producer.spooling is now functional again as messages were not written correctly since 0.5.0 (#248).
    • producer.spooling now does not block upon shutdown (#248).
    • metadata is now handled correctly when messages are sent to fallback (#247).
    • producer.socket now sends messages directly to fallback if connect fails.
  • v0.5.3(Apr 23, 2018)

    This is a critical patch release.

    It fixes a GC crash caused by the message payload memory handler. If your plugins use core.MessageDataPool.get(size), please replace it with make([]byte, size).

    The buffer causing this was introduced with 0.5.0, but the bug seems to occur only when building with go 1.10. We decided to remove the buffer, as its allocation speed improvements turned out to be only minor anyway.

  • v0.5.2(Apr 16, 2018)

    This is a patch / minor features release.

    All binaries have been compiled with go 1.10.1.

    New with 0.5.2

    • The version number is now generated via make and git. This will properly identify versions between releases.
    • New producer.AwsCloudwatchLogs. Thanks to @luqasz
    • The makefile has been cleaned up and go meta-linter support has been added

    Fixed with 0.5.2

    • consumer.Kafka now properly commits the consumer offsets to kafka. Thanks to @crewton
    • producer.awsKinesis failed to produce records under certain conditions
    • The consumer.Kafka folderPermissions property is now correctly applied
    • format.ExtractJSON trimValues property is now correctly applied
    • The gollum binary inside the Dockerfile is built on the same base image as deployed
    • Filter will now always filter out the MODIFIED message, not the original. This behavior is more "expected".
  • v0.5.1(Jan 24, 2018)

    This is a patch / minor features release.

    All vendor dependencies have been updated to the latest version and binaries have been compiled with go 1.9.3.

    New with 0.5.1

    • format.MetadataCopy has been updated to support free copying between metadata and payload
    • producer.ElasticSearch now allows setting the format of timeBasedIndex
    • format.GrokToJSON has new options: RemoveEmptyValues, NamedCapturesOnly and SkipDefaultPatterns
    • using dep for dependencies instead of glide

    Fixed with 0.5.1

    • fixed inversion of -lc always
    • fixed a nil pointer panic with producer.elasticsearch when receiving messages with unassigned streams
    • producer.ElasticSearch settings are now named according to config
    • producer.ElasticSearch dayBasedIndex renamed to timeBasedIndex and is now working as expected
    • updated dependencies to latest version (brings support for kafka 1.0, fixes user agent parsing for format.processTSV)
  • v0.5.0(Dec 21, 2017)

    This is a major release with a lot of breaking changes. The configuration format has changed, so you HAVE to change your configuration files. A guide can be found in the detailed release notes.

    All binaries for this release have been compiled with Go 1.9.2

    New features

    • Filters and Formatters have been merged into one list
    • You can now use a filter or formatter more than once in the same plugin
    • Consumers can now do filtering and formatting, too
    • Messages can now store metadata. Formatters can affect the payload or a metadata field
    • All plugins now have an automatic log scope
    • Message payloads are now backed by a memory pool
    • Messages now store the original message, i.e. a backup of the payload state after consumer processing
    • Gollum now provides per-stream metrics
    • Plugins are now able to implement health checks that can be queried via http
    • There is a new pseudo plugin type "Aggregate" that can be used to share configuration between multiple plugins
    • New base types for producers: Direct, Buffered, Batched
    • Plugin configurations now support nested structures
    • The configuration process has been simplified a lot by adding automatic error handling and struct tags
    • Added a new formatter format.GrokToJSON
    • Added a new formatter format.JSONToInflux10
    • Added a new formatter format.Double
    • Added a new formatter format.MetadataCopy
    • Added a new formatter format.Trim
    • Consumer.File now supports filesystem events
    • Consumers can now define the number of go routines used for formatting/filtering
    • All AWS plugins now support role switching
    • All AWS plugins are now based on the same credentials code


    • The plugin lifecycle has been reimplemented to avoid gollum being stuck waiting for plugins to change state
    • Any errors during the configuration phase will cause gollum to exit
    • Integration test suite added
    • Producer.HTTPRequest port handling fixed
    • The test-config command will now produce more meaningful results
    • Duplicating messages now properly duplicates the whole message and not just the struct
    • Several race conditions have been fixed
    • Producer.ElasticSearch is now based on a more up-to-date library
    • Producer.AwsS3 is now behaving more like producer.File
    • Gollum metrics can now bind to a specific address instead of just a port

    Breaking changes

    • The config format has changed to improve automatic processing
    • A lot of plugins have been renamed to avoid confusion and to better reflect their behavior
    • A lot of plugins parameters have been renamed
    • The instances plugin parameter has been removed
    • Most of gollum's metrics have been renamed
    • Plugin base types have been renamed
    • All message handling function signatures have changed to use pointers
    • Formatters no longer daisy-chain, as they can now be listed in proper order
    • Stream plugins have been renamed to Router plugins
    • Routers are not allowed to modify message content anymore
    • filter.All and format.Forward have been removed as they are not required anymore
    • Producer formatter lists dedicated to formatting a key or similar constructs have been removed
    • Logging framework switched to logrus
    • The package gollum.shared has been removed in favor of trivago.tgo
    • Fuses have been removed from all plugins
    • The general message sequence number has been removed
    • The term "drop" has been replaced by the term "fallback" to emphasize its use
    • The _DROPPED_ stream has been removed. Messages are discarded if no fallback is set
    • Formatters can still change the stream of a message but cannot trigger routing by themselves
    • Compiling contrib plugins now requires a specific loader.go to be added
    • The Docker image on Docker Hub is now a lot smaller and only contains the gollum binary
  • v0.5.0-rc3.1(Aug 16, 2017)

    This is a pre-release of v0.5.0

    All vendor dependencies have been updated to the latest version and binaries have been compiled with go 1.9-beta2.

    The current release notes can be found under

    Changes since 0.5.0-rc3

    • fix format.MetadataCopy to actually copy the payload and not reference it
  • v0.5.0-rc3(Aug 15, 2017)

    This is a pre-release of v0.5.0

    All vendor dependencies have been updated to the latest version and binaries have been compiled with go 1.9-beta2.

    The current release notes can be found under

    Changes since 0.5.0-rc2

    • Windows build works again
    • Fully compatible with 1.9
    • Configs are case sensitive again (#208)
    • Configs errors because of wrong types or parameters now give fix suggestions
    • format.MetadataCopy sub-modulators removed
    • Added a new format.Aggregate
    • Renamed "Aggregate" of the aggregate plugin type to "Plugins"
    • Added a trace method to debug message flow
    • "-t / --trace" renamed to "-pt / --profiletrace", the new "-t / --trace" now refers to message tracing
    • Documentation updated
  • v0.5.0-rc2(Aug 10, 2017)

    This is a pre-release of v0.5.0

    All vendor dependencies have been updated to the latest version and binaries have been compiled with go 1.9-beta2.

    The current release notes can be found under

    Changes since 0.5.0-rc1

    • Consumer docs updated (fixing missing fields)
    • Added a new SkipIfEmpty parameter for all formatters
    • Added new Try* functions to core.Metadata
    • Built with go1.9-beta2

    Known issues

    • Windows build currently broken because of tgo
    • Make test does not complete with go1.9 because of a timestamp deserialization mismatch
  • v0.5.0-rc1(Aug 8, 2017)

  • v0.4.5(Apr 3, 2017)

    This is a patch / minor features release.

    All vendor dependencies have been updated to the latest version and binaries have been compiled with go 1.8.


    • producer.Kafka will discard messages returned as "too large" to avoid spooling
    • consumer.Http does not truncate messages with WithHeaders:false anymore (thanks @mhils)
    • producer.Websocket now uses gorilla websockets (thanks @glaslos)
    • Dockerfile is now working again
    • It is now possible to (optionally) send nil messages with producer.kafka again
    • Consumer.Kinesis will renew the iterator object when hitting a timeout
    • Consumer.Kinesis now runs with an offset file set that does not exist
    • Consumer.Kinesis offset file is now written less often (after each batch)
    • Consumer.Kafka now retries with an "oldest" offset after encountering an OutOfRange exception.
    • Fixed a crash when using producer.ElasticSearch with date based indexes (thanks @relud)
    • format.Base64Decode now uses data from previous formatters as intended
    • format.JSON arr and obj will now auto create a key if necessary
    • format.JSON now checks for valid state references upon startup
    • format.JSON now properly encodes strings when using "enc"
    • format.SplitToJSON may now keep JSON payloads and is better at escaping strings
    • "gollum -tc" will exit with error code 1 upon error
    • "gollum -tc" will now properly display errors during config checking


    • Added producer for writing data to Amazon S3 (thanks @relud)
    • Added authentication support to consumer.Http (thanks @glaslos)
    • Added authentication support to native.KafkaProducer (thanks @relud)
    • Added authentication support to producer.Kafka (thanks @relud)
    • Added authentication support to consumer.Kafka (thanks @relud)
    • Added consumer group support to consumer.Kafka (thanks @relud)
    • Added a native SystemD consumer (thanks @relud)
    • Added a Statsd producer for counting messages (thanks @relud)
    • Added an option to flatten JSON arrays into single values with format.ProcessJSON (thanks @relud)
    • Added filter.Any to allow "or" style combinations of filters (thanks @relud)
    • Added support for unix timestamp parsing to format.ProcessJSON (thanks @relud)
    • Added filter.Sample to allow processing of every n'th message only (thanks @relud)
    • Added format.TemplateJSON to apply golang templates to JSON payloads (thanks @relud)
    • Added named pipe support to consumer.Console
    • Added "pick" option to format.ProcessJSON to get a single value from an array
    • Extended the "remove" option of format.ProcessJSON to remove values from arrays
    • Added "geoip" option to format.ProcessJSON to get GeoIP data from an IP
    • Added index configuration options to producer.ElasticSearch

    Getting started

    Building instruction can be found in the readme. Documentation is available at read the docs and godoc.

    Contribution notice

    From this release on, v0.5.0 will be developed on the master branch. As the new version will contain a lot of breaking changes, 0.4.x will remain as a branch to allow bugfixes and pull requests.

  • v0.4.4(Aug 16, 2016)

    This is a patch / minor features release.

    All vendor dependencies have been updated to their latest version and all binaries have been compiled with go 1.7.


    • Fixed file offset handling in consumer.Kinesis (thanks @relud)
    • Fixed producer.File RotatePruneAfterHours setting
    • Producer.File symlink switch is now atomic
    • Fixed panic in producer.Redis when Formatter was not set
    • Fixed producer.Spooling being stuck for a long time during shutdown
    • Fixed native.KafkaProducer to map all topics to "default" if no topic mapping was set
    • Fixed a concurrent map write during initialization in native.KafkaProducer
    • Fixed consumer.Kafka OffsetFile setting stopping gollum when the offset file was not present
    • consumer.Kafka will retry to connect to a not (yet) existing topic every PersistTimeoutMs
    • Consumer.Kafka now tries to connect every ServerTimeoutSec if initial connect fails
    • Consumer.Kafka MessageBufferCount default value increased to 8192
    • Producer.Kafka and native.KafkaProducer now discard messages with 0-Byte content
    • Producer.Kafka SendRetries set to 1 by default to circumvent a reconnect issue within sarama
    • Fixed panic in producer.Kafka when shutting down
    • Added manual heartbeat to check a valid broker connection with producer.Kafka
    • Format.Base64Encode now returns the original message if decoding failed
    • socket.producer TCP can be used without ACK
    • Consumer.Syslogd message handling differences between RFC3164 and RFC5424 / RFC6587 fixed


    • New AWS Firehose producer (thanks @relud)
    • New format.ProcessTSV for modifying TSV encoded messages (thanks @relud)
    • Added user agent parsing to format.ProcessJSON (thanks @relud)
    • Added support for unix timestamp parsing to format.ProcessJSON (thanks @relud)
    • Added support for new shard detection to consumer.Kinesis (thanks @relud)
    • Added support for multiple messages per record to producer.Kinesis and consumer.Kinesis (thanks @relud)
    • Added "remove" directive for format.ProcessJSON
    • Added key Formatter support for producer.Redis
    • Added RateLimited- metrics for filter.Rate
    • Added format.Clear to remove message content (e.g. useful for key formatters)
    • Added "KeyFormatterFirst" for producer.Kafka and native.KafkaProducer
    • Added Version support for producer.Kafka and consumer.Kafka
    • Added ClientID support for consumer.Kafka
    • Added folder creation capabilities to consumer.File when creating offset files
    • Added gollum log messages metrics
    • Added wildcard topic mapping to producer.Kafka and native.KafkaProducer
    • Added FilterAfterFormat to producer.Kafka and native.KafkaProducer
    • Producer.Spooling now continuously looks for new streams to read
    • Producer.Spooling now reacts on SIGHUP to trigger a respooling
    • Separated version information into -r (version, go runtime, modules) and -v (just version) command line flags
    • Added trace commandline flag

    Getting started

    Building instruction can be found in the readme. Documentation is available at read the docs and godoc.

  • v0.4.3(Apr 29, 2016)

    This is a patch / minor features release.

    This release makes gollum fully compatible with the Go 1.6 runtime. It includes concurrent map access fixes as well as go vendoring support. All builds provided with this version are built with Go 1.6.2.

    In addition to this, several configuration defaults have been adjusted. Please check your configuration files.


    • Fixed several race conditions reported by Go 1.6 and go build -race
    • Fixed the scribe producer to drop unformatted messages in case of error
    • Fixed file.consumer rotation to work on regular files, too
    • Fixed file.consumer rotation to reset the offset file after a SIGHUP
    • Dockerfiles updated
    • Producer.Kafka now sends messages directly to avoid sarama performance bottlenecks
    • consumer.Kafka offset file is properly read on startup if DefaultOffset "oldest" or "newest" is set
    • Existing unix domain socket detection changed to use create instead of stat (better error handling)
    • Kafka and Scribe specific metrics are now updated if there are no messages, too
    • Scribe producer is now reacting better to server connection errors
    • Filters and Formatters are now covered with unittests


    • Support for Go1.5 vendor experiment
    • New producer for librdkafka (not included in standard builds)
    • Metrics added to show memory consumption
    • New kafka metrics added to show "roundtrip" times for messages
    • producer.Benchmark added to get more meaningful core system profiling results
    • New filter filter.Rate added to allow limiting streams to a certain number of messages per second
    • Added key support to consumer.Kafka and producer.Kafka
    • Added an "ordered read" config option to consumer.Kafka (round robin reading)
    • Added a new formatter format.ExtractJSON to extract a single value from a JSON object
    • Go version is now printed with gollum -v
    • Scribe producer now queries scribe server status in regular intervals
    • format.Sequence separator character can now be configured
    • format.Runlength separator character can now be configured

    Other changes

    • Renamed producer.Kafka BatchTimeoutSec to BatchTimeoutMs to allow millisecond based values
    • producer.Kafka retry count set to 0
    • producer.Kafka default producer set to RoundRobin
    • producer.Kafka GracePeriodMs default set to 100
    • producer.Kafka MetadataRefreshMs default set to 600000 (10 minutes)
    • producer.Kafka TimeoutMs default set to 10000 (10 seconds)
    • filter.RegExp FilterExpressionNot is evaluated before FilterExpression
    • filter.RegExp FilterExpression is evaluated if FilterExpressionNot passed

    Getting started

    Building instruction can be found in the readme. Documentation is available at read the docs and godoc.

    Source code(tar.gz)
    Source code(zip)
  • v0.4.2(Feb 16, 2016)

    Release notes

    This is a patch / minor feature release.
    New features include improved JSON processing and AWS Kinesis support. This release also contains a number of critical bugfixes in the Kafka and spooling pipelines; if you are running v0.4.1 and use either of these, an update is strongly recommended.

    Fixed
    • consumer.SysLogD now has more meaningful error messages
    • consumer.File now properly supports file rotation if the file to read is a symlink
    • Scribe and Kafka metrics are now only updated upon successful send
    • Fixed an out of bounds panic when producer.File was rotating logfiles without an extension
    • Compression of files after rotation by producer.File now works (again)
    • producer.Kafka now only reconnects if all topics report an error
    • producer.Spool now properly respools long messages
    • producer.Spool will not delete a file if a message in it could not be processed
    • producer.Spool will try to automatically respool files after a restart
    • producer.Spool will rotate non-empty files even if no new messages come in
    • producer.Spool will recreate folders when removed during runtime
    • producer.Spool will drop messages if rotation fails (not reroute)
    • Messages that are spooled twice now retain their original stream
    • Better handling of situations where Sarama (Kafka) writes become blocking
    • Plugins now start up as "initializing" not as "dead" preventing dropped messages during startup

    New
    • New formatter format.SplitToJSON to convert CSV data to JSON
    • New formatter format.ProcessJSON to modify JSON data
    • producer.File can now set permissions for any folders created
    • RPM spec file added
    • producer.File can now add zero padding to rotated file numbering
    • producer.File can now prune logfiles by file age
    • producer.Spool can now be rate limited
    • Dev version is now part of the metrics
    • New AWS Kinesis producer and consumer
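format.SplitToJSON converts delimited records such as CSV lines into JSON objects. A rough sketch of the idea — names and the signature are illustrative assumptions, not Gollum's API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// splitToJSON pairs the fields of a delimited record with a list of
// key names and emits a JSON object, mirroring the idea behind
// format.SplitToJSON. Extra fields without a key are dropped.
func splitToJSON(record string, delimiter string, keys []string) (string, error) {
	fields := strings.Split(record, delimiter)
	obj := make(map[string]string)
	for i, key := range keys {
		if i < len(fields) {
			obj[key] = fields[i]
		}
	}
	encoded, err := json.Marshal(obj)
	return string(encoded), err
}

func main() {
	out, _ := splitToJSON("2016-02-16,error,disk full", ",",
		[]string{"date", "level", "text"})
	fmt.Println(out)
}
```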

    Getting started

    Building instruction can be found in the readme. Documentation is available at read the docs and godoc.

    Source code(tar.gz)
    Source code(zip)
  • v0.4.1(Nov 10, 2015)

    Release notes

    This is a patch / minor feature release.
    One major new feature is the circuit breaker pattern ("fuses"). It allows, for example, the socket consumer to close its socket while a producer working on that socket's data reports a downtime. InfluxDB 0.9 support has also been improved.
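A fuse in this sense is a plain circuit breaker: a shared flag that producers trip when their backend goes down and that consumers poll before accepting work. A minimal sketch under that assumption — the names are illustrative, not Gollum's actual Fuse API:

```go
package main

import (
	"fmt"
	"sync"
)

// Fuse is a minimal circuit breaker. A producer "burns" it when its
// backend reports a downtime and "activates" it again on recovery;
// a consumer checks it before accepting new connections.
type Fuse struct {
	mutex  sync.Mutex
	burned bool
}

func (f *Fuse) Burn() {
	f.mutex.Lock()
	f.burned = true
	f.mutex.Unlock()
}

func (f *Fuse) Activate() {
	f.mutex.Lock()
	f.burned = false
	f.mutex.Unlock()
}

func (f *Fuse) IsBurned() bool {
	f.mutex.Lock()
	defer f.mutex.Unlock()
	return f.burned
}

func main() {
	var fuse Fuse
	fuse.Burn() // producer signals downtime
	if fuse.IsBurned() {
		fmt.Println("consumer: refusing new connections")
	}
	fuse.Activate() // backend is reachable again
	fmt.Println("burned:", fuse.IsBurned())
}
```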

    Fixed
    • InfluxDB JSON and line protocol fixed
    • shared.WaitGroup.WaitFor with duration 0 falls back to shared.WaitGroup.Wait
    • proper io.EOF handling for shared.BufferedReader and shared.ByteStream
    • HTTP consumer now responds with 200 instead of 203
    • HTTP consumer properly handles EOF
    • Increased test coverage

    New
    • Introduction of "fuses" (circuit breaker pattern)
    • Support for InfluxDB line protocol
    • New setting to enable/disable InfluxDB time based database names
    • Added HTTPs support for HTTP consumer
    • Added POST data support to HTTPRequest producer

    Getting started

    Building instruction can be found in the readme. Documentation is available at read the docs and godoc.

    Source code(tar.gz)
    Source code(zip)
  • v0.4.0(Aug 28, 2015)

    Release notes

    This release includes several reliability fixes that prevent messages from being lost during shutdown. In the process, the startup/shutdown mechanics were changed, which introduced a number of breaking changes. Also included are improvements to the file, socket and scribe producers. Write performance may show a minor increase for some producers.

    This release contains breaking changes over version 0.3.x. Custom producers and config files may have to be adjusted.

    All binaries for this release have been compiled with Go 1.5

    Breaking changes

    • shared.RuntimeType renamed to TypeRegistry
    • core.StreamTypes renamed to StreamRegistry
    • ?ControlLoop callback parameters for command handling moved to callback members
    • ?ControlLoop renamed to ?Loop, where ? can be a combination of Control (handling of control messages), Message (handling of messages) or Ticker (handling of regular callbacks)
    • PluginControlStop is now split into PluginControlStopConsumer and PluginControlStopProducer to allow plugins that are both producers and consumers
    • Producer.Enqueue now takes care of dropping messages and accepts a timeout overwrite value
    • MessageBatch has been refactored to store messages instead of preformatted strings. This allows dropping messages from a batch.
    • Message.Drop has been removed, Message.Route can be used instead
    • The LoopBack consumer has been removed. Producers can now drop messages to any stream using DropToStream.
    • Stream plugins are now allowed to only bind to one stream
    • Renamed producer.HttpReq to producer.HTTPRequest
    • Renamed format.StreamMod to format.StreamRoute
    • For format.Envelope, the postfix and prefix configuration keys have been renamed to EnvelopePostfix and EnvelopePrefix
    • Base64Encode and Base64Decode formatter parameters have been renamed to "Base64*"
    • Removed the MessagesPerSecAvg metric
    • Two functions were added to the MessageSource interface to allow blocked/active state query
    • The low resolution timer has been removed

    Fixed
    • Messages stored in channels or MessageBatches can now be flushed properly during shutdown
    • Several producers now properly block when their queue is full (messages could be lost before)
    • Producer control commands now have priority over processing messages
    • Switched to sarama trunk version to get the latest broker connection fixes
    • Fixed various message loss scenarios in the file, Kafka and HTTPRequest producers
    • Kafka producer is now reconnecting upon every error (intermediate fix)
    • StreamRoute formatter now properly works when the separator is a space
    • File, Kafka and HTTPRequest plugins don't have mandatory values anymore
    • Socket consumer can now reopen a dropped connection
    • Socket consumer can now change access rights on unix domain sockets
    • Socket consumer now closes non-udp connections upon any error
    • Socket consumer can now remove an existing UDS file with the same name if necessary
    • Socket consumer now uses proper connection timeouts
    • Socket consumer now sends special acks on error
    • All net.Dial commands were replaced with net.DialTimeout
    • The makefile now correctly includes the config folder
    • The file producer now behaves correctly when directory creation fails
    • Spinning loops are now more CPU friendly
    • Plugins can now be addressed by longer paths, too, e.g. ""
    • Log messages that appear during startup are now written to the set log producer, too
    • Fixed a problem where control messages could cause shutdown to get stuck
    • The Kafka producer has been rewritten for better error handling
    • The scribe producer now dynamically modifies the batch size on error
    • The metric server tries to reopen connection every 5 seconds
    • Float metrics are now properly rounded
    • Ticker functions are now restarted after the function is done, preventing double calls
    • No empty messages will be sent during shutdown

    New
    • Added a new stream plugin to route messages to one or more other streams
    • The file producer can now delete old files upon rotate (pruning)
    • The file producer can now overwrite files and set file permissions
    • Added metrics for dropped, discarded, filtered and unroutable messages
    • Streams can now overwrite a producer's ChannelTimeoutMs setting (only for this stream)
    • Producers are now shut down in order based on DropStream dependencies
    • Messages now keep a one-step history of their StreamID
    • Added format.StreamRevert to go back to the last used stream (e.g. after a drop)
    • Added producer.Spooling that temporarily stores messages to disk before retrying them (e.g. useful in disconnect scenarios)
    • Added a new formatter to prepend stream names
    • Added a new formatter to serialize messages
    • Added a new formatter to convert collectd to InfluxDB (0.8.x and 0.9.x)
    • It is now possible to add a custom string after the version number
    • Plugins compiled from the contrib folder are now listed in the version string
    • All producers can now define a filter applied before formatting
    • Added unittests to check all bundled producer, consumer, format, filter and stream for interface compatibility
    • Plugins can now be registered and queried by a string based ID via core.PluginRegistry
    • Added producer for InfluxDB data (0.8.x and 0.9.x)
    • Kafka, scribe and Elasticsearch producers now have distinct metrics per topic/category/index
    • Version number is now added to the metrics as in "MMmmpp" (M)ajor (m)inor (p)atch
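Registering and querying plugins by a string-based ID, as core.PluginRegistry now allows, amounts to a synchronized name-to-instance map. A hedged sketch with made-up names and an interface{} payload standing in for Gollum's plugin type:

```go
package main

import (
	"fmt"
	"sync"
)

// pluginRegistry maps string IDs to plugin instances, mirroring the
// idea behind core.PluginRegistry. All names here are illustrative
// assumptions, not Gollum's actual types.
type pluginRegistry struct {
	mutex   sync.RWMutex
	plugins map[string]interface{}
}

func newPluginRegistry() *pluginRegistry {
	return &pluginRegistry{plugins: make(map[string]interface{})}
}

// Register stores a plugin instance under the given ID,
// replacing any previous entry with the same ID.
func (r *pluginRegistry) Register(id string, plugin interface{}) {
	r.mutex.Lock()
	r.plugins[id] = plugin
	r.mutex.Unlock()
}

// Get returns the plugin registered under id, if any.
func (r *pluginRegistry) Get(id string) (interface{}, bool) {
	r.mutex.RLock()
	defer r.mutex.RUnlock()
	plugin, ok := r.plugins[id]
	return plugin, ok
}

func main() {
	registry := newPluginRegistry()
	registry.Register("myKafkaProducer", "producer.Kafka instance")
	if plugin, ok := registry.Get("myKafkaProducer"); ok {
		fmt.Println("found:", plugin)
	}
}
```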

    Getting started

    Building instruction can be found in the readme. Documentation is available at read the docs and godoc.

    Source code(tar.gz)
    Source code(zip)
  • v0.3.2(Jun 22, 2015)
