Distributed, lock-free, self-hosted health checks and status pages

Overview

Checkup


Checkup is distributed, lock-free, self-hosted health checks and status pages, written in Go.

It features an elegant, minimalistic CLI and an idiomatic Go library. They are completely interoperable and their configuration is beautifully symmetric.

Checkup was created by Matt Holt, author of the Caddy web server. It is maintained and sponsored by Sourcegraph. If you'd like to dive into the source, you can start here.

This tool is a work-in-progress. Please use liberally (with discretion) and report any bugs!

Recent changes

Due to recent development, some breaking changes have been introduced:

  • providers: the JSON config field provider was renamed to type for consistency
  • notifiers: the JSON config field name was renamed to type for consistency
  • sql: the sqlite storage engine is now disabled by default (build with -tags sql to enable it)
  • sql: the sql storage engine is deprecated in favor of the new postgres, mysql, and sqlite3 storage engines
  • mailgun: the to parameter now takes a list of e-mail addresses (it was previously a single recipient)
  • LOGGING IS NO LONGER SWALLOWED; DON'T PARSE checkup OUTPUT IN SCRIPTS
  • the status page config now defaults to the local source (use with checkup serve)

If you want to build the latest version, it's best to run:

  • make build - builds checkup with mysql and postgresql support,
  • make build-sqlite3 - builds checkup with additional sqlite3 support

The resulting binary will be placed into builds/checkup.

Intro

Checkup can be customized to check up on any of your sites or services at any time, from any infrastructure, using any storage provider of your choice (assuming an integration exists for your storage provider). The status page can be customized to your liking since you can do your checks however you want. The status page is also mobile-responsive.

Checkup currently supports these checkers:

  • HTTP
  • TCP (+TLS)
  • DNS
  • TLS

Checkup implements these storage providers:

  • Amazon S3
  • Local file system
  • GitHub
  • MySQL
  • PostgreSQL
  • SQLite3
  • Azure Application Insights

Currently the status page does not support SQL or Azure Application Insights storage back-ends.

Checkup can even send notifications through your service of choice (if an integration exists).

How it Works

There are 3 components:

  1. Storage. You set up storage space for the results of the checks.
  2. Checks. You run checks on whatever endpoints you have as often as you want.
  3. Status Page. You (or GitHub) host the status page.

Quick Start

Download Checkup for your platform and put it in your PATH, or install from source:

$ go get -u github.com/sourcegraph/checkup/cmd/checkup

You'll need Go 1.14 or newer (see Building Locally below). Verify it's installed properly:

$ checkup --help

Then follow these instructions to get started quickly with Checkup.

Create your Checkup config

You can configure Checkup entirely with a simple JSON document. You should configure storage and at least one checker. Here's the basic outline:

{
    "checkers": [
        // checker configurations go here
    ],

    "storage": {
        // storage configuration goes here
    },

    "notifiers": [
        // notifier configuration goes here
    ]
}
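
Putting those pieces together, a minimal working config might look like the following sketch. The checker and storage values here are placeholders drawn from the examples later in this document (an HTTP checker plus local file system storage); adjust them to your own endpoints and paths:

```json
{
    "checkers": [
        {
            "type": "http",
            "endpoint_name": "Example HTTP",
            "endpoint_url": "http://www.example.com"
        }
    ],
    "storage": {
        "type": "fs",
        "dir": "/path/to/your/check_files"
    },
    "notifiers": []
}
```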

Save the checkup configuration file as checkup.json in your working directory.

We will show JSON samples below, to get you started. But please refer to the godoc for a comprehensive description of each type of checker, storage, and notifier you can configure!

Here are the configuration structures you can use, which are explained fully in the godoc. Only the required fields are shown, so consult the godoc for more.

HTTP Checker

godoc: check/http

{
    "type": "http",
    "endpoint_name": "Example HTTP",
    "endpoint_url": "http://www.example.com"
    // for more fields, see the godoc
}

TCP Checker

godoc: check/tcp

{
    "type": "tcp",
    "endpoint_name": "Example TCP",
    "endpoint_url": "example.com:80"
}

DNS Checkers

godoc: check/dns

{
    "type": "dns",
    "endpoint_name": "Example of endpoint_url looking up host.example.com",
    "endpoint_url": "ns.example.com:53",
    "hostname_fqdn": "host.example.com"
}

TLS Checkers

godoc: check/tls

{
    "type": "tls",
    "endpoint_name": "Example TLS Protocol Check",
    "endpoint_url": "www.example.com:443"
}

Exec Checkers

godoc: check/exec

The exec checker can run any command and expects a zero exit code on success; non-zero exit codes are considered errors. You can configure the check with "raise": "warning" if you want to consider a failing service as DEGRADED. Additional options are described in the godoc linked above.

{
    "type": "exec",
    "name": "Example Exec Check",
    "command": "testdata/exec.sh"
}

Amazon S3 Storage

godoc: S3

{
    "type": "s3",
    "access_key_id": "<yours>",
    "secret_access_key": "<yours>",
    "bucket": "<yours>",
    "region": "us-east-1"
}

To serve files for your status page from S3, copy statuspage/config_s3.js over statuspage/config.js, and fill out the required public, read-only credentials.

File System Storage

godoc: FS

{
    "type": "fs",
    "dir": "/path/to/your/check_files"
}

GitHub Storage

godoc: GitHub

{
    "type": "github",
    "access_token": "some_api_access_token_with_repo_scope",
    "repository_owner": "owner",
    "repository_name": "repo",
    "committer_name": "Committer Name",
    "committer_email": "[email protected]",
    "branch": "gh-pages",
    "dir": "updates"
}

Where "dir" is a subdirectory within the repo to push all the check files. Setup instructions:

  1. Create a repository,
  2. Copy the contents of statuspage/ from this repo to the root of your new repo,
  3. Update the URL in config.js to https://your-username.github.io/dir/,
  4. Create updates/.gitkeep,
  5. Enable GitHub Pages in your settings for your desired branch.

MySQL Storage

godoc: storage/mysql

A MySQL database can be configured as a storage backend.

Example configuration:

{
    "type": "mysql",
    "create": true,
    "dsn": "checkup:checkup@tcp(mysql-checkup-db:3306)/checkup"
}

When create is set to true, checkup will issue CREATE TABLE statements required for storage.

SQLite3 Storage (requires CGO to build; not enabled by default)

godoc: storage/sqlite3

A SQLite3 database can be configured as a storage backend.

Example configuration:

{
    "type": "sqlite3",
    "create": true,
    "dsn": "/path/to/your/sqlite.db"
}

When create is set to true, checkup will issue CREATE TABLE statements required for storage.

PostgreSQL Storage

godoc: storage/postgres

A PostgreSQL database can be configured as a storage backend.

Example configuration:

{
    "type": "postgres",
    "create": true,
    "dsn": "host=postgres-checkup-db user=checkup password=checkup dbname=checkup sslmode=disable"
}

When create is set to true, checkup will issue CREATE TABLE statements required for storage.

Azure Application Insights Storage

godoc: appinsights

Azure Application Insights can be used as a storage backend, enabling Checkup to be used as a source of custom availability tests and metrics. An example use case is documented here.

A sample storage configuration with retries enabled:

{
  "type": "appinsights",
  "test_location": "data center 1",
  "instrumentation_key": "11111111-1111-1111-1111-111111111111",
  "retry_interval": 1,
  "max_retries": 3,
  "tags": {
    "service": "front end",
    "product": "main web app"
  }
} 

The following keys are optional:

  • test_location (default is Checkup Monitor)
  • retry_interval (default is 0)
  • max_retries (default is 0)
  • timeout (defaults to 2 seconds if omitted or set to 0)
  • tags

If retries are disabled, the plugin will wait up to timeout seconds to submit telemetry before closing.

When check results are sent to Application Insights, the following values are included in the logged telemetry:

  • success is set to 1 if the check passes, 0 otherwise
  • message is set to Up, Down, or Degraded
  • duration is set to the average of all check result round-trip times and is displayed as a string in milliseconds
  • customMeasurements is set to a JSON object including the number of the check as a string and the round-trip time of the check in nanoseconds
  • If the check included a threshold_rtt setting, it is added to the customDimensions JSON object as the key ThresholdRTT with a time duration string value (e.g. 200ms)
  • If any tags were included in the storage configuration, they are added to the customDimensions JSON object

Currently the status page does not support Application Insights storage.

Slack notifier

Enable notifications in Slack with this Notifier configuration:

{
    "type": "slack",
    "username": "username",
    "channel": "#channel-name",
    "webhook": "webhook-url"
}

Follow these instructions to create a webhook.

Mail notifier

Enable E-mail notifications with this Notifier configuration:

{
    "type": "mail",
    "from": "[email protected]",
    "to": [ "[email protected]", "[email protected]" ],
    "subject": "Custom subject line",
    "smtp": {
        "server": "smtp.example.com",
        "port": 25,
        "username": "username",
        "password": "password"
    }
}

The subject, smtp.port (defaults to 25), smtp.username, and smtp.password settings are optional.

Mailgun notifier

Enable notifications using Mailgun with this Notifier configuration:

{
    "type": "mailgun",
    "from": "[email protected]",
    "to": [ "[email protected]", "[email protected]" ],
    "subject": "Custom subject line",
    "apikey": "mailgun-api-key",
    "domain": "mailgun-domain"
}

Pushover notifier

Enable notifications using Pushover with this Notifier configuration:

{
    "type": "pushover",
    "token": "API_TOKEN",
    "recipient": "USER_KEY",
    "subject": "Custom subject line"
}

Setting up storage on S3

The easiest way to do this is to give an IAM user these two privileges (keep the credentials secret):

  • arn:aws:iam::aws:policy/IAMFullAccess
  • arn:aws:iam::aws:policy/AmazonS3FullAccess

Implicit Provisioning

If you give these permissions to the same user as with the credentials in your JSON config above, then you can simply run:

$ checkup provision

and checkup will read the config file and provision S3 for you. If the user is different, you may want to use explicit provisioning instead.

This command creates a new IAM user with read-only permission to S3 and also creates a new bucket just for your check files. The credentials of the new user are printed to your screen. Make note of the Public Access Key ID and Public Access Key! You won't be able to see them again.

IMPORTANT SECURITY NOTE: This new IAM user will have read-only permission to all S3 buckets in your AWS account, and its credentials will be visible to any visitor to your status page. If you do not want to grant visitors to your status page read access to all your S3 buckets, you need to modify this IAM user's permissions to scope its access to the Checkup bucket. If in doubt, restrict access to your status page to trusted visitors. It is recommended that you do NOT include ANY sensitive credentials on the machine running Checkup.

Explicit Provisioning

If you prefer not to use implicit provisioning via your checkup.json file, do this instead: export the credentials as environment variables and run the provisioning command:

$ export AWS_ACCESS_KEY_ID=...
$ export AWS_SECRET_ACCESS_KEY=...
$ export AWS_BUCKET_NAME=...
$ checkup provision s3

Manual Provisioning

If you'd rather do this manually, see the instructions on the wiki, keeping in mind that the region must be US Standard.

Checkup status page

Checkup now has a local HTTP server that supports serving checks stored in:

  • FS (local filesystem storage),
  • MySQL
  • PostgreSQL
  • SQLite3 (not enabled by default)

You can run checkup serve from the folder which contains checkup.json and the statuspage/ folder.

Setting up the status page for GitHub

You will need to edit statuspage/js/config.js so the status page pulls check files from your GitHub Pages URL; see the setup steps under GitHub Storage above.

Setting up the status page for S3

In statuspage/js, use the contents of config_s3.js to fill out config.js, which is used by the status page. This is where you specify how to access the S3 storage bucket you just provisioned for check files.

As you perform checks, the status page will update every so often with the latest results. Only checks that are stored will appear on the status page.

Performing checks

You can run checks many different ways: cron, AWS Lambda, or a time.Ticker in your own Go program, to name a few. Checks should be run on a regular basis. How often you run checks depends on your requirements and how much history you render on the status page.

For example, if you run checks every 10 minutes, showing the last 24 hours on the status page will require 144 check files to be downloaded on each page load. You can distribute your checks to help avoid localized network problems, but this multiplies the number of files by the number of nodes you run checks on, so keep that in mind.

Performing checks with the checkup command is very easy.

Just cd to the folder with your checkup.json from earlier, and checkup will automatically use it:

$ checkup

The vanilla checkup command runs a single check and prints the results to your screen, but does not save them to storage for your status page.

To store the results instead, use --store:

$ checkup --store

If you want Checkup to loop forever and perform checks and store them on a regular interval, use this:

$ checkup every 10m

And replace the duration with your own preference. In addition to the regular time.ParseDuration() formats, you can use shortcuts like second, minute, hour, day, or week.

You can also get some help using the -h option for any command or subcommand.

Posting status messages

Site reliability engineers should post messages when there are incidents or other news relevant for a status page. This is also very easy:

$ checkup message --about=Example "Oops. We're trying to fix the problem. Stay tuned."

This stores a check file with your message attached to the result for a check named "Example" which you configured in checkup.json earlier.

Doing all that, but with Go

Checkup is as easy to use in a Go program as it is on the command line.

Using Go to set up storage on S3

First, create an IAM user with credentials as described in the section above.

Then go get github.com/sourcegraph/checkup and import it.

Then replace ACCESS_KEY_ID and SECRET_ACCESS_KEY below with the actual values for that user. Keep those secret. You'll also replace BUCKET_NAME with the unique bucket name to store your check files:

storage := checkup.S3{
	AccessKeyID:     "ACCESS_KEY_ID",
	SecretAccessKey: "SECRET_ACCESS_KEY",
	Bucket:          "BUCKET_NAME",
}
info, err := storage.Provision()
if err != nil {
	log.Fatal(err)
}
fmt.Println(info) // don't lose this output!

This method creates a new IAM user with read-only permission to S3 and also creates a new bucket just for your check files. The credentials of the new user are printed to your screen. Make note of the PublicAccessKeyID and PublicAccessKey! You won't be able to see them again.

Using Go to perform checks

First, go get github.com/sourcegraph/checkup and import it. Then configure it:

c := checkup.Checkup{
	Checkers: []checkup.Checker{
		checkup.HTTPChecker{Name: "Example (HTTP)", URL: "http://www.example.com", Attempts: 5},
		checkup.HTTPChecker{Name: "Example (HTTPS)", URL: "https://www.example.com", Attempts: 5},
		checkup.TCPChecker{Name: "Example (TCP)", URL: "www.example.com:80", Attempts: 5},
		checkup.TCPChecker{Name: "Example (TCP SSL)", URL: "www.example.com:443", Attempts: 5, TLSEnabled: true},
		checkup.TCPChecker{Name: "Example (TCP SSL, self-signed certificate)", URL: "www.example.com:443", Attempts: 5, TLSEnabled: true, TLSCAFile: "testdata/ca.pem"},
		checkup.TCPChecker{Name: "Example (TCP SSL, validation disabled)", URL: "www.example.com:8443", Attempts: 5, TLSEnabled: true, TLSSkipVerify: true},
		checkup.DNSChecker{Name: "Example DNS test of ns.example.com:53 looking up host.example.com", URL: "ns.example.com:53", Host: "host.example.com", Attempts: 5},
	},
	Storage: checkup.S3{
		AccessKeyID:     "<yours>",
		SecretAccessKey: "<yours>",
		Bucket:          "<yours>",
		Region:          "us-east-1",
		CheckExpiry:     24 * time.Hour * 7,
	},
}

This sample configures several checkers (HTTP, HTTPS, TCP, and DNS). Each check consists of 5 attempts so as to smooth out the final results a bit. We will store results on S3. Notice the CheckExpiry value: the checkup.S3 type is also a checkup.Maintainer, which means it can maintain itself and purge any status checks older than CheckExpiry. We chose 7 days.

Then, to run checks every 10 minutes:

c.CheckAndStoreEvery(10 * time.Minute)
select {}

CheckAndStoreEvery() returns a time.Ticker that you can stop, but in this case we just want it to run forever, so we block forever using an empty select.

Using Go to post status messages

Simply perform a check, add the message to the corresponding result, and then store it:

results, err := c.Check()
if err != nil {
	// handle err
}

results[0].Message = "We're investigating connectivity issues."

err = c.Storage.Store(results)
if err != nil {
	// handle err
}

Of course, real status messages should be as descriptive as possible. You can use HTML in them.

Other topics

Getting notified when there are problems

Uh oh, having some fires? 🔥 You can create a type that implements checkup.Notifier. Checkup will invoke Notify() after every check, where you can evaluate the results and decide if and how you want to send a notification or trigger some event.

Other kinds of checks or storage providers

You can implement your own Checker and Storage types. If it's general enough, feel free to submit a pull request so others can use it too!

Building Locally

Requires Go v1.14 or newer. Building with the latest Go version is encouraged.

git clone [email protected]:sourcegraph/checkup.git
cd checkup
make

The SQLite3-enabled version is built with make build-sqlite3; PostgreSQL and MySQL support are enabled by default.

Building a Docker image

If you would like to run checkup in a Docker container, build the image with make docker. This builds the version without SQL support; an SQL-enabled Docker image is not currently provided, but one is planned.
