Elastic

This is a development branch that is actively being worked on. DO NOT USE IN PRODUCTION! If you want to use stable versions of Elastic, please use Go modules for the 7.x release (or later) or a dependency manager like dep for earlier releases.

Elastic is an Elasticsearch client for the Go programming language.


See the wiki for additional information about Elastic.


Releases

The release branches (e.g. release-branch.v7) are actively being worked on and can break at any time. If you want to use stable versions of Elastic, please use Go modules.

Here's the version matrix:

Elasticsearch version | Elastic version | Package URL                   | Remarks
7.x                   | 7.0             | github.com/olivere/elastic/v7 | Use Go modules.
6.x                   | 6.0             | github.com/olivere/elastic    | Use a dependency manager (see below).
5.x                   | 5.0             | gopkg.in/olivere/elastic.v5   | Actively maintained.
2.x                   | 3.0             | gopkg.in/olivere/elastic.v3   | Deprecated. Please update.
1.x                   | 2.0             | gopkg.in/olivere/elastic.v2   | Deprecated. Please update.
0.9-1.3               | 1.0             | gopkg.in/olivere/elastic.v1   | Deprecated. Please update.

Example:

You have installed Elasticsearch 7.0.0 and want to use Elastic. As listed above, you should use Elastic 7.0 (code is in release-branch.v7).

To use the required version of Elastic in your application, you should use Go modules to manage dependencies. Make sure to use a version such as 7.0.0 or later.

To use Elastic, import:

import "github.com/olivere/elastic/v7"
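With Go modules, the dependency is pinned in your go.mod file. A minimal sketch (the module path example.com/myapp is a placeholder; pick any released v7 version):

```
module example.com/myapp

go 1.14

require github.com/olivere/elastic/v7 v7.0.0
```

Running `go get github.com/olivere/elastic/v7` adds the entry for you; note the mandatory `/v7` major-version suffix in both the module path and the import path.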

Elastic 7.0

Elastic 7.0 targets Elasticsearch 7.x which was released on April 10th 2019.

As always with a major version, there are a lot of breaking changes. We will use this as an opportunity to clean up and refactor Elastic, as we already did in earlier major releases.

Elastic 6.0

Elastic 6.0 targets Elasticsearch 6.x which was released on 14th November 2017.

Notice that there are a lot of breaking changes in Elasticsearch 6.0 and we used this as an opportunity to clean up and refactor Elastic as we did in the transition from earlier versions of Elastic.

Elastic 5.0

Elastic 5.0 targets Elasticsearch 5.0.0 and later. Elasticsearch 5.0.0 was released on 26th October 2016.

Notice that there are a lot of breaking changes in Elasticsearch 5.0, and we used this as an opportunity to clean up and refactor Elastic, as we did in the transition from Elastic 2.0 (for Elasticsearch 1.x) to Elastic 3.0 (for Elasticsearch 2.x).

Furthermore, the jump in version numbers will give us a chance to be in sync with the Elastic Stack.

Elastic 3.0

Elastic 3.0 targets Elasticsearch 2.x and is published via gopkg.in/olivere/elastic.v3.

Elastic 3.0 will only get critical bug fixes. You should update to a recent version.

Elastic 2.0

Elastic 2.0 targets Elasticsearch 1.x and is published via gopkg.in/olivere/elastic.v2.

Elastic 2.0 will only get critical bug fixes. You should update to a recent version.

Elastic 1.0

Elastic 1.0 is deprecated. You should really update Elasticsearch and Elastic to a recent version.

However, if you cannot update for some reason, don't worry. Version 1.0 is still available. All you need to do is go-get it and change your import path as described above.

Status

We have used Elastic in production since 2012. Elastic is stable, but the API changes now and then. We strive for API compatibility. However, Elasticsearch sometimes introduces breaking changes and we sometimes have to adapt.

Having said that, there have been no big API changes that required you to rewrite large parts of your application. More often than not it's a matter of renaming APIs and adding or removing features so that Elastic stays in sync with Elasticsearch.

Elastic has been used in production starting with Elasticsearch 0.90 up to recent 7.x versions. We recently switched to GitHub Actions for testing. Before that, we used Travis CI successfully for years.

Elasticsearch has quite a few features. Most of them are implemented by Elastic. I add features and APIs as required. It's straightforward to implement missing pieces. I'm accepting pull requests :-)

Having said that, I hope you find the project useful.

Getting Started

The first thing you do is create a Client. The client connects to Elasticsearch on http://127.0.0.1:9200 by default.

You typically create one client for your app. Here's a complete example of creating a client, creating an index, adding a document, executing a search, etc.

An example is available here.

Here's a link to a complete working example for v6.
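A condensed sketch of that flow, assuming a cluster running on the default http://127.0.0.1:9200; the index name "tweets" and the document fields are chosen only for illustration:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/olivere/elastic/v7"
)

func main() {
	ctx := context.Background()

	// Create a client. It connects to http://127.0.0.1:9200 by default.
	client, err := elastic.NewClient(elastic.SetURL("http://127.0.0.1:9200"))
	if err != nil {
		log.Fatal(err)
	}

	// Create an index (the name "tweets" is illustrative).
	if _, err := client.CreateIndex("tweets").Do(ctx); err != nil {
		log.Fatal(err)
	}

	// Add a document.
	doc := map[string]interface{}{"user": "olivere", "message": "Hello, Elastic!"}
	if _, err := client.Index().Index("tweets").Id("1").BodyJson(doc).Do(ctx); err != nil {
		log.Fatal(err)
	}

	// Search for it with a term query.
	res, err := client.Search().
		Index("tweets").
		Query(elastic.NewTermQuery("user", "olivere")).
		Do(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("found %d hits\n", res.TotalHits())
}
```

Remember that this needs a reachable Elasticsearch node; NewClient performs a health check and returns an error if none is available.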

Here are a few tips on how to get used to Elastic:

  1. Head over to the Wiki for detailed information and topics such as how to add a middleware or how to connect to AWS.
  2. If you are unsure how to implement something, read the tests (all _test.go files). They not only serve as a guard against changes, but also as a reference.
  3. The recipes contain small examples of how to implement something, e.g. bulk indexing, scrolling, etc.

API Status

Document APIs

  • Index API
  • Get API
  • Delete API
  • Delete By Query API
  • Update API
  • Update By Query API
  • Multi Get API
  • Bulk API
  • Reindex API
  • Term Vectors
  • Multi termvectors API

Search APIs

  • Search
  • Search Template
  • Multi Search Template
  • Search Shards API
  • Suggesters
    • Term Suggester
    • Phrase Suggester
    • Completion Suggester
    • Context Suggester
  • Multi Search API
  • Count API
  • Validate API
  • Explain API
  • Profile API
  • Field Capabilities API

Aggregations

  • Metrics Aggregations
    • Avg
    • Boxplot (X-pack)
    • Cardinality
    • Extended Stats
    • Geo Bounds
    • Geo Centroid
    • Matrix stats
    • Max
    • Median absolute deviation
    • Min
    • Percentile Ranks
    • Percentiles
    • Rate (X-pack)
    • Scripted Metric
    • Stats
    • String stats (X-pack)
    • Sum
    • T-test (X-pack)
    • Top Hits
    • Top metrics (X-pack)
    • Value Count
    • Weighted avg
  • Bucket Aggregations
    • Adjacency Matrix
    • Auto-interval Date Histogram
    • Children
    • Composite
    • Date Histogram
    • Date Range
    • Diversified Sampler
    • Filter
    • Filters
    • Geo Distance
    • Geohash Grid
    • Geotile grid
    • Global
    • Histogram
    • IP Range
    • Missing
    • Nested
    • Parent
    • Range
    • Rare terms
    • Reverse Nested
    • Sampler
    • Significant Terms
    • Significant Text
    • Terms
    • Variable width histogram
  • Pipeline Aggregations
    • Avg Bucket
    • Bucket Script
    • Bucket Selector
    • Bucket Sort
    • Cumulative cardinality (X-pack)
    • Cumulative Sum
    • Derivative
    • Extended Stats Bucket
    • Inference bucket (X-pack)
    • Max Bucket
    • Min Bucket
    • Moving Average
    • Moving function
    • Moving percentiles (X-pack)
    • Normalize (X-pack)
    • Percentiles Bucket
    • Serial Differencing
    • Stats Bucket
    • Sum Bucket
  • Aggregation Metadata

Indices APIs

  • Create Index
  • Delete Index
  • Get Index
  • Indices Exists
  • Open / Close Index
  • Shrink Index
  • Rollover Index
  • Put Mapping
  • Get Mapping
  • Get Field Mapping
  • Types Exists
  • Index Aliases
  • Update Indices Settings
  • Get Settings
  • Analyze
    • Explain Analyze
  • Index Templates
  • Indices Stats
  • Indices Segments
  • Indices Recovery
  • Indices Shard Stores
  • Clear Cache
  • Flush
    • Synced Flush
  • Refresh
  • Force Merge

Index Lifecycle Management APIs

  • Create Policy
  • Get Policy
  • Delete Policy
  • Move to Step
  • Remove Policy
  • Retry Policy
  • Get Ilm Status
  • Explain Lifecycle
  • Start Ilm
  • Stop Ilm

cat APIs

  • cat aliases
  • cat allocation
  • cat count
  • cat fielddata
  • cat health
  • cat indices
  • cat master
  • cat nodeattrs
  • cat nodes
  • cat pending tasks
  • cat plugins
  • cat recovery
  • cat repositories
  • cat thread pool
  • cat shards
  • cat segments
  • cat snapshots
  • cat templates

Cluster APIs

  • Cluster Health
  • Cluster State
  • Cluster Stats
  • Pending Cluster Tasks
  • Cluster Reroute
  • Cluster Update Settings
  • Nodes Stats
  • Nodes Info
  • Nodes Feature Usage
  • Remote Cluster Info
  • Task Management API
  • Nodes hot_threads
  • Cluster Allocation Explain API

Query DSL

  • Match All Query
  • Inner hits
  • Full text queries
    • Match Query
    • Match Phrase Query
    • Match Phrase Prefix Query
    • Multi Match Query
    • Common Terms Query
    • Query String Query
    • Simple Query String Query
  • Term level queries
    • Term Query
    • Terms Query
    • Terms Set Query
    • Range Query
    • Exists Query
    • Prefix Query
    • Wildcard Query
    • Regexp Query
    • Fuzzy Query
    • Type Query
    • Ids Query
  • Compound queries
    • Constant Score Query
    • Bool Query
    • Dis Max Query
    • Function Score Query
    • Boosting Query
  • Joining queries
    • Nested Query
    • Has Child Query
    • Has Parent Query
    • Parent Id Query
  • Geo queries
    • GeoShape Query
    • Geo Bounding Box Query
    • Geo Distance Query
    • Geo Polygon Query
  • Specialized queries
    • Distance Feature Query
    • More Like This Query
    • Script Query
    • Script Score Query
    • Percolate Query
  • Span queries
    • Span Term Query
    • Span Multi Term Query
    • Span First Query
    • Span Near Query
    • Span Or Query
    • Span Not Query
    • Span Containing Query
    • Span Within Query
    • Span Field Masking Query
  • Minimum Should Match
  • Multi Term Query Rewrite
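As a flavor of how these queries are composed in code, here is a sketch of a compound bool query; the field names ("message", "user", "retweets") are purely illustrative:

```go
import "github.com/olivere/elastic/v7"

// buildQuery composes a bool query from several clause types.
func buildQuery() elastic.Query {
	return elastic.NewBoolQuery().
		Must(elastic.NewMatchQuery("message", "elasticsearch")).   // full-text clause
		Filter(elastic.NewTermQuery("user", "olivere")).           // term-level, non-scoring filter
		MustNot(elastic.NewRangeQuery("retweets").Gt(100))         // range clause
}
```

The resulting elastic.Query can be passed to a search, e.g. client.Search().Index("tweets").Query(buildQuery()).Do(ctx).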

Modules

  • Snapshot and Restore
    • Repositories
    • Snapshot get
    • Snapshot create
    • Snapshot delete
    • Restore
    • Snapshot status
    • Monitoring snapshot/restore status
    • Stopping currently running snapshot and restore
  • Scripting
    • GetScript
    • PutScript
    • DeleteScript

Sorting

  • Sort by score
  • Sort by field
  • Sort by geo distance
  • Sort by script
  • Sort by doc

Scrolling

Scrolling is supported via a ScrollService. It supports an iterator-like interface. The ClearScroll API is implemented as well.

A pattern for efficiently scrolling in parallel is described in the Wiki.
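A minimal iteration sketch with ScrollService (the index name and page size are illustrative); the loop ends when Do returns io.EOF:

```go
import (
	"context"
	"io"

	"github.com/olivere/elastic/v7"
)

// scrollAll walks over all documents in an index, one page at a time.
func scrollAll(ctx context.Context, client *elastic.Client) error {
	svc := client.Scroll("tweets").Size(100)
	defer svc.Clear(context.Background()) // release server-side scroll resources
	for {
		res, err := svc.Do(ctx)
		if err == io.EOF {
			return nil // no more pages
		}
		if err != nil {
			return err
		}
		for _, hit := range res.Hits.Hits {
			_ = hit.Source // each hit's raw JSON document
		}
	}
}
```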

How to contribute

Read the contribution guidelines.

Credits

Thanks a lot for the great folks working hard on Elasticsearch and Go.

Elastic uses portions of the uritemplates library by Joshua Tacoma, backoff by Cenk Altı and leaktest by Ian Chiles.

LICENSE

MIT-LICENSE. See the LICENSE file provided in the repository for details.

Issues
  • "No ElasticSearch Node Available"

    Please use the following questions as a guideline to help me answer your issue/question without further inquiry. Thank you.

    Which version of Elastic are you using?

    [ ] elastic.v2 (for Elasticsearch 1.x) [x] elastic.v3 (for Elasticsearch 2.x)

    Please describe the expected behavior

    NewClient(elastic.SetURL("http://:9200")) would correctly generate a new Client object connecting to the node

    Please describe the actual behavior

    "no ElasticSearch node available"

    Any steps to reproduce the behavior?

    elastic.NewClient(elastic.SetURL("http://:9200"))

    opened by nicolaifsf 71
  • Bulk Processor

    opened by olivere 51
  • Problems on connect

    @dashaus, I copied it over from #57:

    Hi, I have the same problem here:

    panic: main: conn db: no Elasticsearch node available

    goroutine 1 [running]:
    log.Panicf(0x84de50, 0x11, 0xc2080c7e90, 0x1, 0x1)
        /usr/local/go/src/log/log.go:314 +0xd0
    main.init·1()
        /Users/emilio/go/src/monoculum/init.go:40 +0x348
    main.init()
        /Users/emilio/go/src/monoculum/main.go:334 +0xa4
    
    goroutine 526 [select]:
    net/http.(*persistConn).roundTrip(0xc2088ad1e0, 0xc2086a9d50, 0x0, 0x0, 0x0)
    20:30:13 app         |  /usr/local/go/src/net/http/transport.go:1082 +0x7ad
    net/http.(*Transport).RoundTrip(0xc20806c000, 0xc2086f6000, 0xc20873ff50, 0x0, 0x0)
    20:30:13 app         |  /usr/local/go/src/net/http/transport.go:235 +0x558
    20:30:13 app         | net/http.send(0xc2086f6000, 0xed4f18, 0xc20806c000, 0x21, 0x0, 
    20:30:13 app         | 0x0)
        /usr/local/go/src/net/http/client.go:219
    20:30:13 app         |  +0x4fc
    net/http.(*Client).send(0xc08b00, 0xc2086f6000, 0x21
    20:30:13 app         | , 0x0, 0x0)
        /usr/local/go/src/net/http/client.go:142 +0x15b
    20:30:13 app         | net/http.(*Client).doFollowingRedirects(0xc08b00, 0xc2086f6000, 0x97cd00, 0x0, 0x0, 0x0)
    20:30:13 app         |  /usr/local/go/src/net/http/client.go:367 +0xb25
    net/http.(*Client).Do(0xc08b00, 0xc2086f6000, 0xc20873fce0, 0x0, 
    20:30:13 app         | 0x0)
        /usr/local/go/src/net/http/client.go
    20:30:13 app         | :174 +0xa4
    github.com/olivere/elastic.(*Client).sniffNode(0xc208659d10, 0xc208569920, 0x15
    20:30:13 app         | , 0x0, 0x0, 0x0)
    20:30:13 app         |  /Users/emilio/go/src/github.com/olivere/elastic/client.go:543
    20:30:13 app         |  +0x16a
    20:30:13 app         | 
    github.com/olivere/elastic.func·014(0xc208569920, 0x15
    20:30:13 app         | )
        /Users/emilio/go/src/github.com/olivere/elastic/client.go:508 +0x47
    20:30:13 app         | created by github.com/olivere/elastic.(*Client).sniff
        /Users/emilio/go/src/github.com/olivere/elastic/client.go:508 +0x744
    
    goroutine 525 [chan receive]:
    20:30:13 app         | database/sql.(*DB).connectionOpener(0xc2086de960)
        /usr/local/go/src/database/sql/sql.go:589 +0x4c
    created by database/sql.Open
        /usr/local/go/src/database/sql/sql.go:452 +0x31c
    
    goroutine 529 [IO wait]:
    20:30:13 app         | net.(*pollDesc).Wait(0xc2084fe370, 0x72, 0x0
    20:30:13 app         | , 
    20:30:13 app         | 0x0)
        /usr/local/go/src/net/fd_poll_runtime.go:84 +0x47
    net.(*pollDesc).WaitRead(0xc2084fe370, 0x0, 0x0)
        /usr/local/go/src/net/fd_poll_runtime.go:89 +0x43
    net.(*netFD).Read(0xc2084fe310, 0xc208709000, 0x1000, 0x1000, 0x0, 0xed4d48, 0xc2086a9ec8)
        /usr/local/go/src/net/fd_unix.go:242 +0x40f
    net.(*conn).Read(0xc20896a800, 0xc208709000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/net.go:121 +0xdc
    net/http.noteEOFReader.Read(0xef0410, 0xc20896a800, 0xc2088ad238, 0xc208709000, 0x1000, 0x1000, 0xeb7010, 0x0, 0x0)
        /usr/local/go/src/net/http/transport.go:1270 +0x6e
    net/http.(*noteEOFReader).Read(0xc208569b40, 0xc208709000, 0x1000, 0x1000, 0xc207f6957f, 0x0, 0x0)
        <autogenerated>:125 +0xd4
    bufio.(*Reader).fill(0xc2088f3c80)
        /usr/local/go/src/bufio/bufio.go:97 +0x1ce
    bufio.(*Reader).Peek(0xc2088f3c80, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
        /usr/local/go/src/bufio/bufio.go:132 +0xf0
    net/http.(*persistConn).readLoop(0xc2088ad1e0)
        /usr/local/go/src/net/http/transport.go:842 +0xa4
    created by net/http.(*Transport).dialConn
        /usr/local/go/src/net/http/transport.go:660 +0xc9f
    
    goroutine 530 [select]:
    net/http.(*persistConn).writeLoop(0xc2088ad1e0)
        /usr/local/go/src/net/http/transport.go:945 +0x41d
    created by net/http.(*Transport).dialConn
        /usr/local/go/src/net/http/transport.go:661 +0xcbc
    

    This occurs sometimes... not always...

    curl -XGET 127.0.0.1:9200/_nodes/http?pretty=1
    {
      "cluster_name" : "elasticsearch",
      "nodes" : {
        "3l_Ing0oSfWu5U63US5kxg" : {
          "name" : "Rattler",
          "transport_address" : "inet[192.168.1.91/192.168.1.91:9300]",
          "host" : "Mac-Emilio",
          "ip" : "192.168.1.91",
          "version" : "1.3.4",
          "build" : "a70f3cc",
          "http_address" : "inet[/192.168.1.91:9200]",
          "http" : {
            "bound_address" : "inet[/0:0:0:0:0:0:0:0:9200]",
            "publish_address" : "inet[/192.168.1.91:9200]",
            "max_content_length_in_bytes" : 104857600
          }
        }
      }
    }
    
    opened by olivere 41
  • Problems With Sniffing

    I'm running Elasticsearch v1.4.4 in a Docker container. I kept having trouble getting the client to work properly. I was trying to run the sample in the README (obviously pointing to my Docker container instead of localhost). It was taking ~30 seconds to create the client, and then would fail to create the index with the error: no Elasticsearch node available.

    As soon as I turned off sniffing when creating the client (elastic.SetSniff(false)), everything worked perfectly. It doesn't really bother me that I have to turn sniffing off, but I wanted to put this issue out to see if anyone else had seen an issue like this.

    P.S. @olivere - The documentation is awesome! :+1:

    opened by blachniet 32
  • Can't put document into AWS ES service.

    Which version of Elastic are you using?

    elastic.v2 (for Elasticsearch 1.x)

    Please describe the expected behavior

    Successful document put into the index.

    Please describe the actual behavior

    Error is returned:

    elastic: Error 403 (Forbidden)
    

    Any steps to reproduce the behavior?

    Setup:

        creds := credentials.NewEnvCredentials()
        signer := v4.NewSigner(creds)
        awsClient, err := aws_signing_client.New(signer, nil, "es", "us-west-2")
        if err != nil {
            return nil, err
        }
    
        return elastic.NewClient(
            elastic.SetURL(...),
            elastic.SetScheme("https"),
            elastic.SetHttpClient(awsClient),
            elastic.SetSniff(false),
        )
    

    Put:

        _, err = e.Client.Index().Index(indexName).Type(indexType).
            Id(doc.ID).
            BodyJson(doc).
            Do()
    

    Not sure if this is elastic or aws_signing_client issue.

    opened by mthenw 26
  • cannot go import elastic.v5.  v7 import error

    Please use the following questions as a guideline to help me answer your issue/question without further inquiry. Thank you.

    Which version of Elastic are you using?

    [ ] elastic.v5 (for Elasticsearch 5.x)

    Please describe the expected behavior

    import the elastic v5 version

    Please describe the actual behavior

    The elastic v7 version is imported instead, and packages cannot be found, e.g.: cannot find package "github.com/olivere/elastic/v7/config"

    Any steps to reproduce the behavior?

    opened by ghost 26
  • Default branch, release-branch.v6 has some compile/import problems

    I am about to switch to go modules but it appears the branch release-branch.v6 has some import problems when you just try to use it using old-fashioned GOPATH...

    When you compile you get:

    /go/src/github.com/olivere/elastic/client.go:24:2: cannot find package "github.com/olivere/elastic/v6/config" in any of:
            /usr/local/go/src/github.com/olivere/elastic/v6/config (from $GOROOT)
            /go/src/github.com/olivere/elastic/v6/config (from $GOPATH)
    /go/src/github.com/olivere/elastic/bulk.go:14:2: cannot find package "github.com/olivere/elastic/v6/uritemplates" in any of:
            /usr/local/go/src/github.com/olivere/elastic/v6/uritemplates (from $GOROOT)
            /go/src/github.com/olivere/elastic/v6/uritemplates (from $GOPATH)
    

    I realize you recommend using a dependency manager; however, prior to your latest update it still worked okay.

    opened by snowzach 25
  • When the context is cancelled the node is marked dead

    Version

    elastic.v5 (for Elasticsearch 5.x)

    How to reproduce:

    package main
    
    import (
    	"context"
    	"gopkg.in/olivere/elastic.v5"
    	"log"
    	"os"
    	"time"
    )
    
    func main() {
    
    	var err error
    
    	client, err := elastic.NewClient(
    		elastic.SetURL("https://httpbin.org/delay/3?"), // every request will take about 3 seconds
    		elastic.SetHealthcheck(false),
    		elastic.SetSniff(false),
    		elastic.SetErrorLog(log.New(os.Stderr, "", log.LstdFlags)),
    		elastic.SetInfoLog(log.New(os.Stdout, "", log.LstdFlags)),
    	)
    	if err != nil {
    		log.Fatal(err)
    	}
    
    	ctx, _ := context.WithTimeout(context.Background(), 1*time.Second) // requests will time out after 1 second
    
    	log.Println("Running request")
    
    	_, err = client.Get().Index("whatever").Id("1").Do(ctx)
    
    	if err != nil {
    		log.Println("Error: " + err.Error())
    	}
    
    	log.Println("Running second request")
    
    	_, err = client.Get().Index("whatever").Id("1").Do(ctx)
    
    	if err != nil {
    		log.Println("Error: " + err.Error())
    	}
    
    }
    

    Actual

    2017/03/17 08:02:33 Running request
    2017/03/17 08:02:34 elastic: https://httpbin.org/delay/3? is dead
    2017/03/17 08:02:34 Error: context deadline exceeded
    2017/03/17 08:02:34 Running second request
    2017/03/17 08:02:34 elastic: all 1 nodes marked as dead; resurrecting them to prevent deadlock
    2017/03/17 08:02:34 Error: no Elasticsearch node available
    

    Expected

    Something like (I edited that "log" myself):

    2017/03/17 08:02:33 Running request
    2017/03/17 08:02:34 Error: context deadline exceeded
    2017/03/17 08:02:34 Running second request
    2017/03/17 08:02:37 GET https://httpbin.org/delay/3?/whatever/_all/1 [status:200, request:3.500s]
    
    opened by AndreKR 24
  • Pattern for unit testing with interfaces?

    Hi!

    Apologies if this is an already answered question, I was unable to find a satisfactory answer online. I am trying to find a way to write unit tests for one of my services however I feel this example could apply outside of unit tests to more general encapsulation of code.

    I want to mock the client so I can simulate a specific type of request (in my case bulk requests) without going all the way to a test ElasticSearch instance. Ideally there would be an interface to allow me to generate a mock in my tests. For example what I want is an interface like so:

    type IBulkClient interface {
        Bulk() IBulkService // return the BulkService interface
        ...
    }
    
    type IBulkService interface {
       Add(requests ...E.BulkableRequest) IBulkService
       Do() (IBulkResponse, error) // return the BulkResponse interface (not shown here)
       ...
    }
    

    This would allow me to mock BulkClient and BulkService to better test my code. The reason why I can't do this myself right now is that the real BulkService.Add() returns a *BulkService which screws up the interface as I want my IBulkClient to return another interface not a pointer to a struct.

    Here is a go playground with the issue I am talking about and here is it working with an interface reference rather than a pointer to a struct.

    My ultimate question is this: Is it possible for the API to provide interfaces for all its structs? This would allow for better unit testing and also allow the user to better encapsulate their code. If there is a reason why there shouldn't be interfaces how do you recommend writing unit tests that don't go all the way to ElasticSearch or mocking the response at an http level?

    opened by grindlemire 23
  • how to get search result full raw json?

    Please use the following questions as a guideline to help me answer your issue/question without further inquiry. Thank you.

    Which version of Elastic are you using?

    elastic.v5 (for Elasticsearch 5.x)

    Please describe the expected behavior

    searchResult, err := client.Search()...Do(...)

    searchResult.RawJson() // get full result raw json

    Please describe the actual behavior

    Did not find this method

    Any steps to reproduce the behavior?

    // Search with a term query
    termQuery := elastic.NewTermQuery("user", "olivere")
    searchResult, err := client.Search().
        Index("twitter").           // search in index "twitter"
        Query(termQuery).           // specify the query
        Sort("user", true).         // sort by "user" field, ascending
        From(0).Size(10).           // take documents 0-9
        Pretty(true).               // pretty print request and response JSON
        Do(context.Background())    // execute
    if err != nil {
        // Handle error
        panic(err)
    }

    opened by zplzpl 22
  • Add context to logger

    This commit adds a LoggerWithContext interface that extends the Logger interface by a method PrintfWithContext that, when implemented, is called instead of the Printf method of the Logger interface.

    The purpose of PrintfWithContext is to receive the current context under which the logging happens. Notice that this doesn't always have to be request-scoped, i.e. an actual API call from a user. It may also be from an internal state or process, e.g. Bulk processor or node health.

    Close #1541

    opened by olivere 0
  • logger with context ?

    Which version of Elastic are you using?

    [x] elastic.v7 (for Elasticsearch 7.x) [ ] elastic.v6 (for Elasticsearch 6.x) [ ] elastic.v5 (for Elasticsearch 5.x) [ ] elastic.v3 (for Elasticsearch 2.x) [ ] elastic.v2 (for Elasticsearch 1.x)

    Please describe the expected behavior

    Can Logger support Context? It's useful for tracing, adding a requestId, etc. for a request.

    Please describe the actual behavior

    Any steps to reproduce the behavior?

    opened by lichunqiang 6
  • Node Stats API in 7.15 broken

    Seems like the Node Stats API in 7.15.0 is broken due to this change (reported here). See also https://github.com/elastic/elasticsearch/issues/78311.

    How to reproduce:

    1. Use 7.15.0
    2. Run the Node Stats API with level=shards: $ curl -H 'Content-Type: application/json' -XGET 'http://localhost:9200/_nodes/stats?level=shards&human=true&pretty=true'

    Expected outcome:

    All JSON keys on the same level are different.

    Actual response:

    You'll get a duplicate JSON key for shards:

          ...
          "indices" : {
            "docs" : {
              "count" : 48,
              "deleted" : 0
            },
            "shards" : {                   // <--
              "total_count" : 2
            },
            ...
            "shards" : {                   // <--
              ".tasks" : [
                {
                  "0" : {
                    "routing" : {
                      "state" : "STARTED",
                      "primary" : true,
                      "node" : "f8Z10TwPSKqTNsnnJiFRtg",
                      "relocating_node" : null
                    },
                    "docs" : {
                      "count" : 5,
                      "deleted" : 0
                    },
              "memory_size_in_bytes" : 0,
              "evictions" : 0
            },
            ...
    
    bug 
    opened by olivere 0
  • Announcement: Future directions

    Writing this in an issue is probably not the right way, but at least it's near the artifact that matters most to me—the code.

    Disclaimer: Please consider this as my very own personal view. I'm a complete outsider, have no internal information and no investments in either Amazon or Elastic. I'm just a dev with an opinion.

    I've started this project in 2012. First commit was on December 6, 2012. That was the same year that Searchworkings Global, the predecessor of the company we now know as Elastic, was founded, and way before Elasticsearch 1.0.0 was a thing (that was Feb 12, 2014). Since then this library supported all versions of Elasticsearch with the help of many contributors, but also because I ❤️ Elasticsearch and Go and I was using it extensively in my projects and at work. I've probably invested thousands of hours into building and supporting it. I've tried very hard to not break things. Elasticsearch has never disappointed me (and it made us faster—literally ;-)). My deepest respect and thanks to all the developers that made it such a success (and good companion). But in the end, I think, I wrote the library for myself—because I'm a techie at heart.

    Over the recent months though, a strange controversy arose between Elastic and Amazon over the use of Elasticsearch, the technology.

    First of all I understand both sides. But still I'm disappointed about the end result for us developers. On the one hand, there's Amazon offering Elasticsearch under their brand, now forking and rebranding it under the term OpenSearch. As a developer, I think Amazon shouldn't have done it the way they did. But it's Amazon. You and me have to decide if their way of doing business is having a positive impact on all of us in the long run. On the other hand, in a move to protect their stakes, Elastic introduced a new license and made technical changes to their API clients. Although it's quite easy to work around it, again—as a developer—I can't see this heading into the right direction.

    But let's skip the legal issues. I've been using the official client for Go in my last project. It works well, its code base is clean, and the GitHub repository is well maintained. So no offences whatsoever at the developers working on it. You're doing a good job. But. Maybe I'm just biased. But I don't like working with it. The most critical issue for me is its lack of a nice way of building queries, aggregates, and parsing responses. My codebase is filled with map[string]interface{}. It just doesn't feel right to me. I know the good people at Elastic know about this, and one day, they'll offer an API for requests and responses, so that one can generate these. But right now, I miss that. I miss it a lot.

    So a lot of words for saying that I decided to continue working on the library, v7 and beyond, even if it's just for me. I want it to be usable and enjoyable to work with, for developers like me. I want it to keep out of any gorilla fights. And I sincerely hope that's possible. I want it to be my library of choice for working with Elasticsearch, the technology, regardless of the context in which you're using it.

    EDIT: This announcement relates to my statement in this comment.

    opened by olivere 1
  • Request with Transfer-Encoding chunked instead of Content-Length

    Which version of Elastic are you using?

    [X] elastic.v7 (for Elasticsearch 7.x) [ ] elastic.v6 (for Elasticsearch 6.x) [ ] elastic.v5 (for Elasticsearch 5.x) [ ] elastic.v3 (for Elasticsearch 2.x) [ ] elastic.v2 (for Elasticsearch 1.x)

    Please describe the expected behavior

    ELK search sends an HTTP request with a "Content-Length: nnn" header.

    Please describe the actual behavior

    ELK search sends an HTTP request with a "Transfer-Encoding: chunked" header. Transfer-Encoding chunked can be used in a HTTP request but is rarely used and this can cause problems to other tools.

    Any steps to reproduce the behavior?

    Reproduced in any search request.

    The problem is in request.go, in func setBodyHeader(). When called in setBodyJson() or setBodyGzip(), the body is a bytes.Reader, so it must be type-asserted to a bytes.Reader and not a bytes.Buffer to get the ContentLength.

    opened by olivier4576 0
  • (not a bug) How to handle an intermittently down elasticsearch cluster?

    I have a service that uses elasticsearch and currently panics when initializing if the elastic client cannot be initialized. I'd prefer the service to be able to start without elasticsearch, and then keep trying to initialize the client in the background.

    Is this a reasonable approach? Or should I be using the Simple Client and initializing it for every request that requires Elasticsearch?

    I see in the documentation that the Simple Client doesn't do sniffing or health checks. Do I need these if I route through coordinating-only nodes on Kubernetes? My understanding is that sniffing finds the nodes in the cluster that can be connected to, but I'm not sure whether it is limited to coordinating-only nodes when I have set those up. If not, will I have to disable sniffing (or just use the Simple Client on a per-request basis)?

    enhancement question 
    opened by henryhsue 1
  • Add option to close idle connections for dead nodes

    Add option to close idle connections for dead nodes

    This commit adds a configuration option SetCloseIdleConnections. The effect of enabling it is that whenever the Client finds a dead node, it will call CloseIdleConnections on the underlying HTTP transport.

    This is useful for e.g. AWS Elasticsearch Service. When AWS ES reconfigures the cluster, it may change the underlying IP addresses while keeping the DNS entry stable. If the Client did not close idle connections, the underlying HTTP client would reuse existing HTTP connections and keep talking to the old IP addresses. See #1091 for a discussion of this problem.

    The commit also adds recipes for connecting to an AWS ES cluster in recipes/aws-mapping-v4 and recipes/aws-es-client; see the ConnectToAWS method for a blueprint.

    See #1091

    opened by olivere 7
  • Reduce memory usage on bigger search hits result

    Reduce memory usage on bigger search hits result

    Please use the following questions as a guideline to help me answer your issue/question without further inquiry. Thank you.

    Which version of Elastic are you using?

    [ ] elastic.v7 (for Elasticsearch 7.x)
    [x] elastic.v6 (for Elasticsearch 6.x)
    [ ] elastic.v5 (for Elasticsearch 5.x)
    [ ] elastic.v3 (for Elasticsearch 2.x)
    [ ] elastic.v2 (for Elasticsearch 1.x)

    Please describe the expected behavior

    I'm asking for suggestions on how to reduce memory usage. I'm trying to load 500 hits, and profiling shows high memory usage in (*SearchService) Do.

    pprof

    client := c.Client.Search().
    	Index(indexName).
    	Query(boolQuery).
    	From(0).
    	Size(500).
    	Pretty(true)

    document, err := client.Do(context.Background())
    if err != nil {
    	defer c.Client.Stop()
    	return nil, err
    }

    Please describe the actual behavior

    High memory usage

    Any steps to reproduce the behavior?

    opened by kh411d 1
  • Fix wrong url check

    Fix wrong url check

    url.Parse cannot check the validity of a URL; url.ParseRequestURI should be used instead. Here is a simple unit test that demonstrates this.

    import (
    	"net/url"
    	"testing"

    	"github.com/stretchr/testify/assert"
    )

    func TestCheckURL(t *testing.T) {
    	// The value is true if the URL is valid.
    	urls := map[string]bool{
    		"http://elastic:[email protected]:9210": true,
    		"https://google.com":                    true,
    		"http://google.com/":                    true,
    		"http:/google.com":                      true,
    		"google.com":                            false,
    		"google/com":                            false,
    		"http//google.com":                      false,
    		"":                                      false,
    	}

    	// url.Parse accepts every input, valid or not.
    	for u := range urls {
    		_, err := url.Parse(u)
    		assert.Nil(t, err)
    	}

    	// url.ParseRequestURI accepts only the valid ones.
    	for u, b := range urls {
    		_, err := url.ParseRequestURI(u)
    		if b {
    			assert.Nil(t, err)
    		} else {
    			assert.NotNil(t, err, u)
    		}
    	}
    }
    
    === RUN   TestCheckURL
    --- PASS: TestCheckURL (0.00s)
    PASS
    
    opened by bestgopher 3
  • data stream API

    data stream API

    Which version of Elastic are you using?

    [x] elastic.v7 (for Elasticsearch 7.x)

    Please describe the expected behavior

    I'm pretty new to this library, but our team works heavily with data streams. I'd just like to know whether there is any plan to support the data stream APIs in this library.

    https://www.elastic.co/guide/en/elasticsearch/reference/current/set-up-a-data-stream.html https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-get-data-stream.html

    feature 
    opened by taylorzhangyx 1
Releases(v7.0.29)
  • v7.0.29(Sep 17, 2021)

    • GeoBoundingBoxQuery updated to the latest release; it now supports e.g. geo hashes and WKT (#1530).
    • Add support for XPack Rollup API (#1531)

    See here for details.

    Source code(tar.gz)
    Source code(zip)
  • v7.0.28(Aug 30, 2021)

    This release fixes a number of bugs and adds some features introduced in recent Elasticsearch versions.

    • Add runtime fields/mappings #1527
    • Allow Point In Time API without keep alive #1524

    See the 7.0.28 milestone for details.

    Source code(tar.gz)
    Source code(zip)
  • v7.0.25(Jun 16, 2021)

    This release fixes a number of bugs and adds some features introduced in recent Elasticsearch versions.

    • Fix response to Cluster Stats API. #1494
    • Add case_sensitivity to various queries. #1496
    • Add Component Template APIs #1458
    • Add Multi Terms aggregation #1499
    • Add Match Boolean Prefix Query #1497
    • Add Top Metrics aggregation #1500

    See the 7.0.25 milestone for details.

    Source code(tar.gz)
    Source code(zip)
  • v5.0.64(Feb 14, 2018)

  • v6.1.7(Feb 14, 2018)

  • v5.0.63(Feb 13, 2018)

    • Add back pressure to BulkProcessor (#698) (3d6edfd18fe1356ee361bbd4f26cf8414f43f9d8)
    • Fix MultiSearch API (#705) (8e15c5845ff64e2a3cce5bbd789cd4fa07846a87)
    Source code(tar.gz)
    Source code(zip)
  • v6.1.6(Feb 13, 2018)

  • v5.0.62(Jan 18, 2018)

    • Allow retrieval of _source with Bulk Update API (ba99e50ca0e7302b7d443912f884816c8c629132)
    • Add Field Capabilities API (d334c06311cb953a4da3b6ade37fdd8b6e29a6e3); Field Stats API is deprecated since 5.4.0
    • Add Indices Segments API (3a5e0f6b59d7263e8daed2902427ddc446225c30)
    • Fix Task Lists API to accept strings instead of int64 values (07bee6061013afcf0fe1f68929cc4eb87ca08d92)
    Source code(tar.gz)
    Source code(zip)
  • v6.1.3(Jan 18, 2018)

    • Allow retrieval of _source with Bulk Update API (a2bbdd2baa18ba9062d73ecdc69d38fe6fbc4354)
    • Fix most deprecation warnings (9c377a7a195c63acab68877fe48e4329b7542018)
    • Add Field Capabilities API, remove Field Stats API (3b45f407618ecb5894966ae3bf7e7808a0472a9a)
    • Add Indices Segments API (eaba086cacd4810079e8ada88fca5146a0dacb25)
    • Fix Task Lists API to accept strings instead of int64 values (2a0f106c1969a3849537735b3db38a987299a7f5)
    Source code(tar.gz)
    Source code(zip)
  • v5.0.49(Oct 14, 2017)

  • v5.0.50(Oct 14, 2017)

    • Use Content-Type = "application/nd-json" in Bulk API (2538f05a4b7e19bc521288e92f14cce54efdc770).
    • Add Context suggester (8ad6658f917c831f5d1cedb145cb85fa3952df9b).
    Source code(tar.gz)
    Source code(zip)
  • v5.0.51(Oct 14, 2017)

  • v5.0.45(Sep 15, 2017)

  • v5.0.46(Sep 15, 2017)

    • Allow both query and filter with FunctionScoreQuery (#587)
    • Fix warnings from staticcheck (#581)
    • Fix type of Result field for UpdateResponse (#599 and #600)
    Source code(tar.gz)
    Source code(zip)
  • v5.0.42(Jul 18, 2017)

    • Trace HTTP response even in the case of errors 99e76e. This fixes the long-standing issues #297 and #553.
    • Added IsConflict helper (#562).
    • Added generic IsStatusCode(err interface{}, code int) helper to check for various HTTP status codes being returned from Elasticsearch. The helpers IsNotFound(err), IsTimeout(err) and IsConflict(err) use this helper internally.
    • Add ability to fetch the _source of the updated document via FetchSourceContext (957705).
    • Updated the Put Mapping API and removed the deprecated IgnoreConflicts setting. Added the UpdateAllTypes setting (#558 and fecaf7)
    • Fix inconsistencies between various range aggregations, using intervals via From and To (dbb16b).
    • The Delete API now returns both an error and a response in case of a 404. This reflects what Elasticsearch does (#555 and 801866).
    • Prevent issues with terms query and null (see #554 and f98e1f).
    • Support reindexing from a remote cluster (487418).
    • Add Task Get API.
    • Add reindexing in the background via DoAsync. This is different from Do in that it starts a task in Elasticsearch that is watchable via the Task Get API (see #550, 6aa4cc and 3116ec).
    Source code(tar.gz)
    Source code(zip)
  • v5.0.41(Jun 16, 2017)

  • v2.0.59(Jun 16, 2017)

  • v3.0.69(Jun 16, 2017)

  • v5.0.40(Jun 16, 2017)

  • v5.0.39(May 29, 2017)

    • Add the Snapshot Create API (#533)
    • Fix some changes in the JSON response (#532 and #530)
    • Add Percentiles Bucket pipeline aggregation (#529)
    Source code(tar.gz)
    Source code(zip)
  • v5.0.38(May 29, 2017)

  • v5.0.37(May 6, 2017)

    This release adds a few missing fields to the NestedQuery DSL (see cec324) and the BulkResponseItem (see 42b0e5).

    Furthermore, we now use github.com/pkg/errors to enhance error messages, especially on connection problems (see 879b6d). Before this change, you could only see that there was a problem, but the underlying error wasn't available any more. Starting with this release, the error message now contains the underlying error, and you can even access it with github.com/pkg/errors.

    Notice that with this change in place you should no longer compare err == elastic.ErrNoClient directly. If you want to filter out connection errors, use the elastic.IsConnErr(err) helper instead.

    Source code(tar.gz)
    Source code(zip)
  • v5.0.36(Apr 23, 2017)

  • v5.0.35(Apr 18, 2017)

  • v5.0.34(Apr 14, 2017)

  • v5.0.33(Apr 14, 2017)

  • v5.0.32(Apr 10, 2017)

    • Add TermsLookup to TermsQuery (#500)
    • Add Snapshot Repository API (#508)
    • Remove needless mutex in ExponentialBackoff (#499)
    • Add field collapsing to Search API (#498)
    • Change geo_bbox to geo_bounding_box (#506)
    • Change mlt to more_like_this (#507)
    Source code(tar.gz)
    Source code(zip)
  • v5.0.31(Mar 26, 2017)

    • Nodes got marked as dead when a context is cancelled (#484)
    • Terms aggregation supports multiple order fields (#486)
    • Add Get Field Mapping API (#)
    • Make BulkUpdateRequest.Source a pointer receiver (#491)
    Source code(tar.gz)
    Source code(zip)
  • v2.0.52(Jul 27, 2016)

  • v3.0.45(Jul 27, 2016)
