OpenTelemetry instrumentations for Go

Overview

Instrumentation    Metrics    Traces
database/sql       ✔️         ✔️
GORM               ✔️         ✔️
sqlx               ✔️         ✔️
logrus                        ✔️
Zap                           ✔️

Contributing

To simplify maintenance, we use a single version and a shared changelog for all instrumentations. The changelog is auto-generated from conventional commits.

If you want to contribute an instrumentation, please include tests and a runnable example. Use Docker if you must, but try to avoid it; for example, you can use SQLite instead of MySQL to test the database/sql instrumentation. Use the instrum-example instrumentation as a template.

To run all tests:

./scripts/test.sh
Comments
  • Updated otelgorm with an option to exclude query variables

    Updated otelgorm with an option to exclude query variables

    This PR adds an option to otelgorm to include query variables in the db.statement attribute. I noticed that query variables were being included by default, but this may not be suitable for most users, as query variables can contain sensitive values and including them in traces would be a security issue. For that reason, I've made including query variables opt-in.

    Before:

    SELECT "x" FROM "y" WHERE email_address = '[email protected]'
    
    UPDATE "z" SET "address"='123 Canwefixit Street' WHERE "user_id" = 'bobthebuilder'
    

    After:

    SELECT "x" FROM "y" WHERE email_address = '?'
    
    UPDATE "z" SET "address"='?' WHERE "user_id" = '?'
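
The masking shown above can be sketched as a simple literal-replacement pass. A toy illustration (my own regexp sketch, not otelgorm's actual implementation):

```go
package main

import (
	"fmt"
	"regexp"
)

// redactLiterals masks single-quoted string literals and bare numbers in a
// SQL statement, mimicking the redacted output shown above. This is a toy
// sketch, not otelgorm's real implementation.
var (
	stringLit = regexp.MustCompile(`'(?:[^']|'')*'`)
	numberLit = regexp.MustCompile(`\b\d+(\.\d+)?\b`)
)

func redactLiterals(query string) string {
	query = stringLit.ReplaceAllString(query, "'?'")
	return numberLit.ReplaceAllString(query, "?")
}

func main() {
	fmt.Println(redactLiterals(`SELECT "x" FROM "y" WHERE email_address = 'alice@example.com'`))
	// SELECT "x" FROM "y" WHERE email_address = '?'
}
```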
    
    opened by bincyber 5
  • otelzap logger ignore WithOptions invocation

    otelzap logger ignore WithOptions invocation

    Example

    l := otelzap.Ctx(ctx).WithOptions(zap.Fields(zap.String("FOO", "BAR")))
    l.Info("Hello")
    

    I expect there to be a FOO:BAR field.

    The field is added to the zap encoder, but only for *zap.Logger. skipCaller is ignored: https://github.com/uptrace/opentelemetry-go-extra/blob/main/otelzap/otelzap.go#L31

    type Logger struct {
    	*zap.Logger
    	skipCaller *zap.Logger
    }
    

    After that, the code uses skipCaller for the log output (https://github.com/uptrace/opentelemetry-go-extra/blob/main/otelzap/otelzap.go#L262).

    IMO someone forgot to add the following to func (l *Logger) WithOptions(opts ...zap.Option) *Logger:

    	clone.skipCaller = l.skipCaller.WithOptions(opts...)
    

    https://github.com/uptrace/opentelemetry-go-extra/blob/main/otelzap/otelzap.go#L63
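
The shape of the suggested fix is the usual clone-both pattern. A stand-in sketch with toy types in place of *zap.Logger (hypothetical, only to show the shape of the fix):

```go
package main

import "fmt"

// inner stands in for *zap.Logger: applying options returns a new value.
type inner struct{ fields []string }

func (in *inner) WithOptions(fields ...string) *inner {
	return &inner{fields: append(append([]string{}, in.fields...), fields...)}
}

// Logger mirrors otelzap.Logger's two wrapped loggers.
type Logger struct {
	base       *inner
	skipCaller *inner
}

// WithOptions must clone BOTH loggers; forgetting skipCaller is the bug
// reported above.
func (l *Logger) WithOptions(fields ...string) *Logger {
	clone := *l
	clone.base = l.base.WithOptions(fields...)
	clone.skipCaller = l.skipCaller.WithOptions(fields...) // the missing line
	return &clone
}

func main() {
	l := &Logger{base: &inner{}, skipCaller: &inner{}}
	l2 := l.WithOptions("FOO=BAR")
	fmt.Println(l2.skipCaller.fields) // [FOO=BAR]
}
```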

    bug 
    opened by ezh 5
  • WithTraceIDField(true) Does not write trace_id in the log

    WithTraceIDField(true) Does not write trace_id in the log

    The README says that otelzap.WithTraceIDField(true) configures the logger to add a trace_id field to structured log messages,

    but actually it does not.

    func (s SugaredLoggerWithCtx) Debugw(msg string, keysAndValues ...interface{}) {
    	s.s.logKVs(s.ctx, zap.DebugLevel, msg, keysAndValues)
    	s.s.skipCaller.Debugw(msg, keysAndValues...)
    }
    

    I found that the logKVs function modifies keysAndValues and returns the result, but the caller does not use the return value, so that's why there is no trace_id in the log file.
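
The fix pattern being described, use the slice returned by the helper, can be sketched with a toy stand-in for logKVs (hypothetical names, not the actual otelzap code):

```go
package main

import "fmt"

// appendTraceID stands in for logKVs: it returns a NEW slice with the
// trace_id pair appended. Mutating the argument alone is not enough,
// because append may allocate a new backing array.
func appendTraceID(kvs []interface{}, traceID string) []interface{} {
	return append(kvs, "trace_id", traceID)
}

func main() {
	kvs := []interface{}{"user", "bob"}

	// Buggy: return value dropped, kvs is unchanged from the caller's view.
	appendTraceID(kvs, "40ed7358")

	// Fixed: use the return value.
	kvs = appendTraceID(kvs, "40ed7358")
	fmt.Println(len(kvs)) // 4
}
```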

    opened by hanyue2020 4
  • Issue with golangci-lint and go 1.18

    Issue with golangci-lint and go 1.18

    Hey, I think something is messed up in the folder structure and module structure.

    I'm importing and running otelsql. golangci-lint fails with

    WARN [runner] Can't run linter goanalysis_metalinter: inspect: failed to load package otelsql: could not load export data: no export data for "github.com/uptrace/opentelemetry-go-extra/otelsql"

    This is the only package it happens in, probably because github.com/uptrace/opentelemetry-go-extra/otelsql is not the actual URL of the module (you get a 404 if you go there).

    Cheers

    opened by mrsufgi 4
  • fix: ctx fields not propagating to logger

    fix: ctx fields not propagating to logger

    Hey :) Cool pkg. I've been trying it out with a Fiber + otel + zap setup, and it works well, except for mild inconsistencies in behaviour (which I fixed in my PR).

    Code:

    log := otelzap.New(zap.L(), otelzap.WithTraceIDField(true), otelzap.WithMinLevel(zap.DebugLevel))
    log.Ctx(c.UserContext()).Info("no trace_id field!")
    log.InfoContext(c.UserContext(), "have trace_id field :)")
    

    Results in

    2021-11-25T08:50:53.737Z        INFO    [email protected]/otelzap.go:259   no trace_id field! 
    2021-11-25T08:50:53.737Z        INFO    [email protected]/otelzap.go:101   have trace_id field :)     {"trace_id": "40ed735823dc565b8b9bf7db2da42b10"}
    

    Btw the tracing in my otel-collector works just fine, it's just the logger :)

    opened by mrsufgi 4
  • [otelzap] Spans not capturing log message data

    [otelzap] Spans not capturing log message data

    I'm trying to use otelzap as middleware for fiber, but I'm not seeing log messages provided with a span context showing up in the span output. Trace IDs also do not appear in the log message, despite the logger option being present. Also perhaps relevant, Fiber is being run with Immutable set to true.

    Here is the middleware function with some additional code comments:

    func (a *ManagementAPI) RequestLogging(c *fiber.Ctx) error {
    	start := time.Now()
    
    	c.Next()
    
            // c.UserContext() is set by an earlier middleware function that creates a "request" span
    	ctx, span := a.config.Tracer(ServiceName).Start(c.UserContext(), "request_log")
    	defer span.End()
    
            // a.logger has type *otelzap.Logger
    	a.logger.Ctx(ctx).Info(
    		fmt.Sprintf("%s", c.Path()),
    		zap.Int("status", c.Response().StatusCode()),
    		zap.String("method", c.Method()),
    		zap.String("path", c.Path()),
    		zap.String("ip", c.IP()),
    		zap.ByteString("user_agent", c.Request().Header.UserAgent()),
    		zap.Int64("latency_ns", time.Now().Sub(start).Nanoseconds()),
    		// zap.String("trace_id", span.SpanContext().TraceID().String()),
    	)
    
    	return nil
    }
    

    The span context coming from c.UserContext() also appears to be valid, as it does link parent and child spans correctly.

    The logger being used in the middleware function is created by this function:

    func L(name string) *otelzap.Logger {
    	return otelzap.New(zap.L().Named(name), otelzap.WithTraceIDField(true))
    }
    

    Before then, the global loggers are set up like so:

    func InitializeGlobalLogger(c LoggingConfig) error {
    	zapConfig, err := zapConfig(c)
    	if err != nil {
    		return fmt.Errorf("problem creating zap configuration: %w\n", err)
    	}
    
    	zapLogger := zap.Must(zapConfig.Build())
    	zapLogger = zapLogger.With(zap.String("id", c.AppID()))
    	defer zapLogger.Sync()
    
    	otelLogger := otelzap.New(zapLogger, otelzap.WithTraceIDField(true))
    	defer otelLogger.Sync()
    
    	zap.ReplaceGlobals(zapLogger)
    	otelzap.ReplaceGlobals(otelLogger)
    
    	return nil
    }
    

    The project is using go 1.19, and these versions may be relevant:

    	go.uber.org/zap v1.23.0
    	github.com/uptrace/opentelemetry-go-extra/otelutil v0.1.17 // indirect
    	github.com/uptrace/opentelemetry-go-extra/otelzap v0.1.17 // indirect
    	go.opentelemetry.io/otel v1.11.2 // indirect
    	go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.11.2 // indirect
    	go.opentelemetry.io/otel/sdk v1.11.2 // indirect
    	go.opentelemetry.io/otel/trace v1.11.2 // indirect
    

    Any help would be greatly appreciated.

    opened by Rheisen 3
  • otelgorm replaces (but does not restore) context from statement

    otelgorm replaces (but does not restore) context from statement

    The otelgorm package replaces the statement context when the before() callback is called...

    https://github.com/uptrace/opentelemetry-go-extra/blob/fcc6618cc37ccf42377873b4b898db73c5081384/otelgorm/otelgorm.go#L97-L101

    However, the statement context is never restored to its original value in the corresponding after() call. Thus, after the execution of the query finishes, if the db.Statement object is reused, the statement will still carry the context corresponding to the now-ended span.

    The following is a failing test case:

      {
        do: func(ctx context.Context, db *gorm.DB) {
          var count int64
          query := db.WithContext(ctx).Table("generate_series(1, 10)")
          _, _ = query.Select("*").Rows()
          _ = query.Count(&count)
        },
        require: func(t *testing.T, spans []sdktrace.ReadOnlySpan) {
          require.Equal(t, 2, len(spans))
          require.Equal(
            t,
            spans[0].Parent().SpanID().String(),
            spans[1].Parent().SpanID().String(),
          )
        },
      }
    

    In do, you can see two separate queries being run (with Rows() and Count()). To me, the spans generated for these two queries should have the same parent. Instead, otelgorm makes the span of the second query a child of the first. If I added a third execution using the query value, that span would be a child of the second. Ideally, the plugin would restore the statement to its original context so the statement could be reused.

    The only way I could think of doing this was to store the original context in the newly derived context, and then during the after callback pop it out. But that seemed error prone given that Gorm doesn't enforce the ordering of callbacks, so we'd have to hope another callback isn't trying to do the same thing but with a different ordering.

    opened by markhildreth-gravity 3
  • feat(otelgorm): Ignore DryRun callbacks

    feat(otelgorm): Ignore DryRun callbacks

    This PR:

    Causes the plugin to skip creating spans when Gorm fires callbacks for sessions marked as DryRun. This is technically a backwards-incompatible change, as users will not see the same spans emitted as before (see below for the rationale).

    It also adds a new WithDryRunSpans() option to otelgorm. When this option is enabled, the plugin reverts to its previous behaviour (i.e., creating spans for DryRun callbacks).

    Why:

    There are times when Gorm triggers the Query() callback with DryRun specified. This is often because the user has asked for a SQL query to be rendered but not run, or, in our case, because Gorm makes multiple Query() calls to render the various subqueries making up a larger query. This causes spurious spans to be created for what isn't even a call to the database. We would rather not see these spans, as we mostly care about the times we ARE making a request to a database.

    Here is an example of one of our traces currently. During the request, we run 8 SQL queries, but there are 20 total spans generated. The group of 12 spans on the bottom don't represent calls to the database, but instead rendering of subqueries.

    [screenshot of the trace before the change]

    Here is what this trace looks like as a result of the change. Now, only the 8 queries that hit the database are shown.

    [screenshot of the trace after the change]

    This change would cause DryRun callbacks to not create spans by default, but instead adds for an option to enable creating spans in these cases. This is technically a backwards-incompatible change, but probably one that most users want.
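
The proposed behaviour boils down to a guard at the top of the before callback. A toy sketch with stand-in types (not the actual otelgorm plugin):

```go
package main

import "fmt"

// session stands in for *gorm.DB's DryRun flag.
type session struct{ DryRun bool }

// plugin holds the configuration; dryRunSpans would be set by the
// proposed WithDryRunSpans() option.
type plugin struct {
	dryRunSpans bool
}

// before reports whether a span should be started for this callback.
func (p *plugin) before(s *session) bool {
	if s.DryRun && !p.dryRunSpans {
		return false // skip span creation for DryRun query rendering
	}
	return true // start a span as usual
}

func main() {
	p := &plugin{}
	fmt.Println(p.before(&session{DryRun: true}))  // false
	fmt.Println(p.before(&session{DryRun: false})) // true
}
```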

    opened by markhildreth-gravity 3
  • otelsql does not work with pgx driver

    otelsql does not work with pgx driver

    The pgx driver overrides CheckNamedValue to be empty, which causes the wrapped driver to stop working.

    otelsql should call the connection's NamedValueChecker if one is available; if not, it should return ErrSkip.

    Here's the fix: https://github.com/oshurubura/opentelemetry-go-extra/commit/73d420c07429c6ebd18effe6551844345eece864. I'd love to create a PR, but could you please let me know what else is required for a PR to be accepted and merged into the main trunk?

    opened by oshurubura 3
  • Fix #30 Add zapcore.ArrayMarshalerType encoder

    Fix #30 Add zapcore.ArrayMarshalerType encoder

    This MR fixes #30. The commit message is pretty clear. Feel free to update the wording.

    I added 2 test cases: the simplest case with an array of strings, and a complex one with an array of durations. IMO other cases should work without any issues as well.

    opened by ezh 3
  • feat(otelgorm): added an option to not report DB stats metrics

    feat(otelgorm): added an option to not report DB stats metrics

    This PR adds an option to otelgorm to not report DB stats metrics. Some users like myself may not be currently using OpenTelemetry for metrics and wish to be able to disable this functionality. This functionality might also overlap or conflict with the Prometheus plugin for gorm: https://gorm.io/docs/prometheus.html

    opened by bincyber 2
  • otelsql doesn't pass the label attributes when reporting db stats

    otelsql doesn't pass the label attributes when reporting db stats

    I suspect the attributes provided to OpenDB are being dropped inside sqlOpenDB when setting up db.Stats reporting via ReportDBStatsMetrics.

    func sqlOpenDB(connector driver.Connector, instrum *dbInstrum) *sql.DB {
    	db := sql.OpenDB(connector)
    	ReportDBStatsMetrics(db, WithMeterProvider(instrum.meterProvider))
    	return db
    }
    

    should probably look something like this

    func sqlOpenDB(connector driver.Connector, instrum *dbInstrum) *sql.DB {
    	db := sql.OpenDB(connector)
    	ReportDBStatsMetrics(db, WithMeterProvider(instrum.meterProvider), WithAttributes(instrum.attrs))
    	return db
    }
    

    I'm not seeing the db.system label on any of the db.Stats metrics but it is still present as expected on the query.timing metric that comes from the tracing collection.

    My current workaround is to directly register the db.Stats collection with ReportDBStatsMetrics, but that doesn't disable the automatic collection started by OpenDB.

    bug 
    opened by scott20315 2
  • feat(otelzap): add dynamic field value resolution

    feat(otelzap): add dynamic field value resolution

    This PR aims to add a possibility to dynamically resolve fields from the provided context and fix #59.

    Our use case is that Datadog requires us to provide both the spanId and the traceId to the logged message in the keys dd.span_id and dd.trace_id. In addition, Datadog expects the trace information to be in another format for traces and logs to be correlated. This change will make it possible for us to configure this transformation in the logger itself instead of providing the transformation in every log message.

    Below is a sample implementation for Datadog with this change that will print the converted trace information: ERROR example/main.go:29 hello from zap {"dd.trace_id": "2730366003796952559", "dd.span_id": "2730366003796952559"}

    func main() {
        l, err := zap.NewDevelopment()
        if err != nil {
            panic(err)
        }
        logger = otelzap.New(l, WithDatadogFields())
        
        ctx, span := tracer.Start(context.Background(), "root")
        defer span.End()
    
        logger.Ctx(ctx).Error("hello from zap")
    }
    
    func WithDatadogFields() otelzap.Option {
    	convertTraceId := func(id string) string {
    		if len(id) < 16 {
    			return ""
    		}
    		if len(id) > 16 {
    			id = id[16:]
    		}
    		intValue, err := strconv.ParseUint(id, 16, 64)
    		if err != nil {
    			return ""
    		}
    		return strconv.FormatUint(intValue, 10)
    	}
    
    	return otelzap.WithDynamicFields(func(ctx context.Context) []zap.Field {
    		fields := make([]zap.Field, 0)
    		span := trace.SpanFromContext(ctx)
    		if span.IsRecording() {
    			fields = append(fields, zap.String("dd.trace_id", convertTraceId(span.SpanContext().TraceID().String())))
    			fields = append(fields, zap.String("dd.span_id", convertTraceId(span.SpanContext().SpanID().String())))
    		}
    		return fields
    	})
    }
    
    opened by goober 1
  • [otelzap] Issues with creating otelzap sugared logger with options

    [otelzap] Issues with creating otelzap sugared logger with options

    Trying to create a wrapper pkg for otelzap logger

    package logger
    
    type logger struct {
    	z  *zap.SugaredLogger
    	oz *otelzap.SugaredLogger
    }
    
    var globalLogger *logger
    
    func init() {
    	level := getLoggerLevel(os.Getenv("LOGGER_LEVEL"))
    
    	encoderConfig := zap.NewProductionEncoderConfig()
    	encoderConfig.EncodeTime = zapcore.RFC3339TimeEncoder
    
    	zapLogger, err := zap.Config{
    		Level:            zap.NewAtomicLevelAt(level),
    		Development:      true,
    		Encoding:         "console",
    		EncoderConfig:    encoderConfig,
    		OutputPaths:      []string{"stderr"},
    		ErrorOutputPaths: []string{"stderr"},
    	}.Build(zap.WithCaller(true), zap.AddStacktrace(zapcore.ErrorLevel), zap.AddCallerSkip(1))
    	if err != nil {
    		panic(err)
    	}
    
    	globalLogger = &logger{
    		z:  zapLogger.Sugar(),
    		oz: otelzap.New(zapLogger, otelzap.WithCaller(false), otelzap.WithTraceIDField(true)).Sugar(),
    	}
    }
    
    func DebugwContext(ctx context.Context, msg string, keysAndValues ...interface{}) {
    	globalLogger.oz.DebugwContext(ctx, msg, keysAndValues...)
    }
    

    But when I try to execute DebugwContext, which uses the otelzap logger, I see a caller (even though I passed the caller=false option) and don't see a trace_id field.

    logger.DebugwContext(ctx, "i'm traced btw", []interface{}{}...)
    
    output:
    2022-10-14T10:45:54+03:00       debug   logger/logger.go:83     i'm traced btw
    
    opened by GinkT 0
  • Instrumenting gorm SQL logging with otelgorm and otelzap

    Instrumenting gorm SQL logging with otelgorm and otelzap

    Hi. First, thanks for the great work instrumenting these libs!

    I came across this issue while setting up otelgorm and otelzap.

    Problem: when configuring gorm with both otelgorm and an otelzap logger, the otelzap logger will not emit span events or add span attributes (like a trace_id) for SQL logging.

    I.e., configure gorm with:

    	db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{
    		Logger: gormLogger, // a thin wrapper around otelzap to fit gorm's logger interface.
    	})
    
            db.Use(otelgorm.NewPlugin())
            
            db.WithContext(someCtx).Exec("select 1")
    

    I think this happens because: gorm calls otelgorm before calling its configured logger (see here), so:

    1. otelgorm does its job and ends the span
    2. when called to log the statement's SQL, otelzap checks whether the span is recording (it's not), skips its job, and emits an uninstrumented log.

    I see two ways to fix this:

    • Update gorm to be able to log before running callbacks. I can see why this would be hard.
    • Do not use gorm's Logger config to log SQL. Instead create a gorm.Plugin to log SQL, carefully configuring it to run before otelgorm.

    The second one is a hack the user might build if (1) instrumented SQL logging is very important and (2) we can't change gorm.

    opened by igrayson 4
  • otelgorm feature: pick which query variables are emitted

    otelgorm feature: pick which query variables are emitted

    otelgorm has the WithoutQueryVariables() option to keep the emitted statement parameterized. I would like to emit statements that are partially parameterized. My use case: my service occasionally handles information which I do not want logged; on the other hand, it's really useful to see which IDs a database statement actually hit.

    One way to satisfy this is providing something like:

    func WithQueryVariableFilter(predicate func(v interface{}) bool) Option
    

    ...and I could provide a filter that only allows UUID strings, etc.
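
A predicate for the proposed (hypothetical) WithQueryVariableFilter API might look like this sketch, which only lets canonical UUID strings through:

```go
package main

import (
	"fmt"
	"regexp"
)

// uuidRE matches the canonical 8-4-4-4-12 UUID form.
var uuidRE = regexp.MustCompile(`^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$`)

// allowUUIDsOnly is the kind of predicate the proposed option would take:
// emit the variable only when it looks like a UUID, mask everything else.
func allowUUIDsOnly(v interface{}) bool {
	s, ok := v.(string)
	return ok && uuidRE.MatchString(s)
}

func main() {
	fmt.Println(allowUUIDsOnly("123e4567-e89b-12d3-a456-426614174000")) // true
	fmt.Println(allowUUIDsOnly("alice@example.com"))                    // false
}
```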

    Thoughts? I'd be happy to open a PR.

    opened by igrayson 1
Releases: v0.1.17
Owner: Uptrace (all-in-one tool to optimize performance and monitor errors & logs)