HDFS for Go

Overview

This is a native Go client for HDFS. It connects directly to the namenode using the protocol buffers API.

It tries to be idiomatic by aping the stdlib os package, where possible, and implements the interfaces from it, including os.FileInfo and os.PathError.

Here's what it looks like in action:

client, _ := hdfs.New("namenode:8020")

file, _ := client.Open("/mobydick.txt")

buf := make([]byte, 59)
file.ReadAt(buf, 48847)

fmt.Println(string(buf))
// => Abominable are the tumblers into which he pours his poison.
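
Since the client implements the stdlib os interfaces, metadata and errors can be handled just like local files. Here's a minimal sketch along those lines (the path and namenode address are examples, and errors are checked rather than ignored):

// import ( "errors"; "fmt"; "log"; "os"; "github.com/colinmarc/hdfs/v2" )
client, err := hdfs.New("namenode:8020")
if err != nil {
	log.Fatal(err)
}

info, err := client.Stat("/mobydick.txt") // returns an os.FileInfo
if err != nil {
	// Errors carry the op and path, just like the os package.
	var pathErr *os.PathError
	if errors.As(err, &pathErr) {
		log.Fatalf("%s %s: %v", pathErr.Op, pathErr.Path, pathErr.Err)
	}
	log.Fatal(err)
}

fmt.Println(info.Name(), info.Size())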

For complete documentation, check out the Godoc.

The hdfs Binary

Along with the library, this repo contains a commandline client for HDFS. Like the library, its primary aim is to be idiomatic, by enabling your favorite unix verbs:

$ hdfs --help
Usage: hdfs COMMAND
The flags available are a subset of the POSIX ones, but should behave similarly.

Valid commands:
  ls [-lah] [FILE]...
  rm [-rf] FILE...
  mv [-fT] SOURCE... DEST
  mkdir [-p] FILE...
  touch [-amc] FILE...
  chmod [-R] OCTAL-MODE FILE...
  chown [-R] OWNER[:GROUP] FILE...
  cat SOURCE...
  head [-n LINES | -c BYTES] SOURCE...
  tail [-n LINES | -c BYTES] SOURCE...
  du [-sh] FILE...
  checksum FILE...
  get SOURCE [DEST]
  getmerge SOURCE DEST
  put SOURCE DEST

Since it doesn't have to wait for the JVM to start up, it's also a lot faster than hadoop fs:

$ time hadoop fs -ls / > /dev/null

real  0m2.218s
user  0m2.500s
sys 0m0.376s

$ time hdfs ls / > /dev/null

real  0m0.015s
user  0m0.004s
sys 0m0.004s

Best of all, it comes with bash tab completion for paths!

Installing the commandline client

Grab a tarball from the releases page and unzip it wherever you like.

To configure the client, make sure one or both of these environment variables point to your Hadoop configuration (core-site.xml and hdfs-site.xml). On systems with Hadoop installed, they should already be set.

$ export HADOOP_HOME="/etc/hadoop"
$ export HADOOP_CONF_DIR="/etc/hadoop/conf"

To install tab completion globally on linux, copy or link the bash_completion file which comes with the tarball into the right place:

$ ln -sT bash_completion /etc/bash_completion.d/gohdfs

By default on non-kerberized clusters, the HDFS user is set to the currently-logged-in user. You can override this with another environment variable:

$ export HADOOP_USER_NAME=username

Using the commandline client with Kerberos authentication

Like hadoop fs, the commandline client expects a ccache file in the default location: /tmp/krb5cc_<uid>. That means it should 'just work' if you use kinit:

$ kinit user@EXAMPLE.COM
$ hdfs ls /

If that doesn't work, try setting the KRB5CCNAME environment variable to wherever you have the ccache saved.

Compatibility

This library uses "Version 9" of the HDFS protocol, which means it should work with hadoop distributions based on 2.2.x and above. The tests run against CDH 5.x and HDP 2.x.

Acknowledgements

This library is heavily indebted to snakebite.

Issues
  • Support for kerberized hadoop clusters.

    Notes:

    • requires a patch to the krb5 lib to avoid setting a subkey when generating the authenticator.
    • only supports a QOP of "authentication". Other modes are not supported.
    opened by Shastick 47
  • add support for data transfer encryption via rc4 and aes

    addresses #145

    I've only implemented rc4 encryption here, as I haven't figured out 3des/des yet, but this at least solves the use case in my own environment, which is nice.

    As references for the implementation, I used:

    • https://www.ietf.org/rfc/rfc2831.txt
    • libgsasl
    • libhdfs3

    I was able to test this with my own setup using encrypted data transfer, and it works! Huzzah!

    opened by zeroshade 37
  • HDFS data transfer encryption support

    I was attempting to use this library against an HDFS cluster that has the hadoop.rpc.protection setting in core-site.xml set to privacy, as well as dfs.encrypt.data.transfer set to true in hdfs-site.xml. I believe those apply to the protobuf RPC interface.

    The error message I received was the following: no available namenodes: SASL handshake: wrong Token ID. Expected 0504, was 6030

    After some debugging, I think it occurs here: https://github.com/colinmarc/hdfs/blob/master/internal/rpc/kerberos.go#L67. I suspect the namenode is replying with an encrypted message, while doKerberosHandshake() expects otherwise.

    On first look, the library just sets the default value for the dfs.encrypt.data.transfer property to false (https://github.com/colinmarc/hdfs/blob/f87e1d64bc48c85b07cab32d23c97788e885b31b/internal/protocol/hadoop_hdfs/hdfs.proto#L402), and there is no way of creating a client with that property set to true (https://github.com/colinmarc/hdfs/blob/f87e1d64bc48c85b07cab32d23c97788e885b31b/client.go#L122).

    There is a fetchDefaults() function, but it's only invoked by file_writer, not by file_reader (e.g. the Stat(), Readdir(), and Read() methods).

    Can you comment on whether I'm digging in the right place, and whether the encrypted part of the protocol applies to the read functionality?

    Here are the relevant properties from core-site.xml:

      <property>
        <name>hadoop.security.authentication</name>
        <value>kerberos</value>
      </property>
      <property>
        <name>hadoop.security.authorization</name>
        <value>true</value>
      </property>
      <property>
        <name>hadoop.rpc.protection</name>
        <value>privacy</value>
      </property>
    

    and from hdfs-site.xml:

      <property>
        <name>dfs.encrypt.data.transfer.algorithm</name>
        <value>3des</value>
      </property>
      <property>
        <name>dfs.encrypt.data.transfer.cipher.suites</name>
        <value>AES/CTR/NoPadding</value>
      </property>
      <property>
        <name>dfs.encrypt.data.transfer.cipher.key.bitlength</name>
        <value>256</value>
      </property>
      <property>
        <name>dfs.namenode.acls.enabled</name>
        <value>true</value>
      </property>
    
    opened by mxk1235 17
  • "stat: /someDir: unexpected sequence number"

    opened by netrc 16
  • failed go get -u github.com/colinmarc/hdfs due to v2?

    chris:hdfs chris$ go get -u github.com/colinmarc/hdfs
    package github.com/colinmarc/hdfs/v2/hadoopconf: cannot find package "github.com/colinmarc/hdfs/v2/hadoopconf" in any of:
    	/usr/local/go/src/github.com/colinmarc/hdfs/v2/hadoopconf (from $GOROOT)
    	/Users/chris/dev/gopath/src/github.com/colinmarc/hdfs/v2/hadoopconf (from $GOPATH)
    package github.com/colinmarc/hdfs/v2/internal/protocol/hadoop_hdfs: cannot find package "github.com/colinmarc/hdfs/v2/internal/protocol/hadoop_hdfs" in any of:
    	/usr/local/go/src/github.com/colinmarc/hdfs/v2/internal/protocol/hadoop_hdfs (from $GOROOT)
    	/Users/chris/dev/gopath/src/github.com/colinmarc/hdfs/v2/internal/protocol/hadoop_hdfs (from $GOPATH)
    package github.com/colinmarc/hdfs/v2/internal/rpc: cannot find package "github.com/colinmarc/hdfs/v2/internal/rpc" in any of:
    	/usr/local/go/src/github.com/colinmarc/hdfs/v2/internal/rpc (from $GOROOT)
    	/Users/chris/dev/gopath/src/github.com/colinmarc/hdfs/v2/internal/rpc (from $GOPATH)

    opened by chrislusf 11
  • Kerberos support

    This PR contains basic kerberos support, based on the hard work by @Shastick and @staticmukesh in #99.

    I'd love feedback from people who actually use kerberos, especially on the API. The command line client uses the MIT kerberos defaults (and env variables) for krb5.conf and the credential cache; I have no idea if that's idiomatic. It also doesn't support a keytab file. Please speak up if you have an opinion on how this should work!

    opened by colinmarc 11
  • connection string credentials

    I am having trouble finding documentation on how to format the connection string to include a username and password for the host. Could someone please show me an example, or point me to more complete documentation for this package? There are very few examples of how to use it.

    opened by mrasnake 9
  • Clobber in rename: provide overwrite option in Rename

    Right now, the rename API cannot overwrite an existing file; see https://github.com/colinmarc/hdfs/blob/master/rename.go#L14.

    This patch allows users to decide whether to overwrite files that already exist.

    opened by junjieqian 9
  • Implement SASL reader

    I added RPC writing and reading interfaces and implemented a SASL reader, which is used when a GSSAPI server responds with TOKEN auth and the QOP is auth-conf or auth-int.

    I tried all of the following QOP configurations in core-site.xml, and all of them work fine with this change.

    • hadoop.rpc.protection : authentication
    • hadoop.rpc.protection : integrity
    • hadoop.rpc.protection : privacy

    I didn't implement SaslRpcWriter, as it requires the client to choose a QOP, and a lot of changes around the command line tool would be necessary. I will start implementing it once this PR gets merged.

    Solves: #144

    opened by dtaniwaki 8
  • While appending, use the same generation stamp sent by the namenode

    While trying out the append functionality, I ran into an issue where the datanode complained about a generation stamp (GS) mismatch; it looks like the GS must be the same one sent by the namenode. With this change, append works fine.

    opened by muralirvce 8
  • Put support

    Hey,

    Great library! It looks very promising. I can see there is a CreateEmptyFile method, but I can't see any other methods to write files to HDFS. Are there plans to add support for that, and also for the put/copyFromLocal/moveFromLocal/etc. CLI commands?

    Sorry if I missed the note about that.

    Thanks

    opened by grobie 8
  • CompleteRequestProto Not Set FileId Cause FileWriter.Close Error

    Here is the failing stack (screenshot omitted).

    Under these circumstances, "hdfs dfs -ls" shows the file size as zero, but "hdfs dfs -head" displays the file content normally, and with "hdfs dfs -copyToLocal" the downloaded file is fine.

    Referring to the 'hdfs dfs' call sequence: before calling 'complete', I call 'getFileInfo' to get the 'FileId' and set it on the CompleteRequestProto, like this:

    func (f *FileWriter) Close() error {
    	var lastBlock *hdfs.ExtendedBlockProto
    	if f.blockWriter != nil {
    		lastBlock = f.blockWriter.Block.GetB()
    
    		// Close the blockWriter, flushing any buffered packets.
    		err := f.finalizeBlock()
    		if err != nil {
    			return err
    		}
    	}
    
    	var err error
    	getFileInfoReq := &hdfs.GetFileInfoRequestProto{Src: proto.String(f.name)}
    	getFileInfoResp := &hdfs.GetFileInfoResponseProto{}
    
    	err = f.client.namenode.Execute("getFileInfo", getFileInfoReq, getFileInfoResp)
    	if err != nil {
    		return &os.PathError{"getFileInfo", f.name, err}
    	}
    
    	completeReq := &hdfs.CompleteRequestProto{
    		Src:        proto.String(f.name),
    		ClientName: proto.String(f.client.namenode.ClientName),
    		Last:       lastBlock,
    		FileId:     getFileInfoResp.Fs.FileId,
    	}
    	completeResp := &hdfs.CompleteResponseProto{}
    
    	err = f.client.namenode.Execute("complete", completeReq, completeResp)
    	if err != nil {
    		return &os.PathError{"create", f.name, err}
    	} else if !completeResp.GetResult() {
    		return &os.PathError{"create", f.name, ErrReplicating}
    	}
    
    	return nil
    }
    

    After changing the code this way and trying again, the FileWriter.Close error is fixed.

    Since I don't know much about HDFS, please check the underlying cause of the problem.

    opened by 982945902 1
  • CopyToRemote return error: 'proto: cannot parse invalid wire-format data'

    When I use the CopyToRemote method, it returns the error 'copy err: proto: cannot parse invalid wire-format data'. The file copytoremote.txt is created in Hadoop, but it has no content. I have no idea what's wrong.

    Here is my code:

    package main
    
    import (
            "fmt"
            "os"
    
            "github.com/colinmarc/hdfs/v2"
    )
    
    func main() {
            options := hdfs.ClientOptions{
                    Addresses: []string{"127.0.0.1:9000"},
                    User:      "hadoop",
            }
            client, err := hdfs.NewClient(options)
            if err != nil {
                    fmt.Println("new err:", err)
                    return
            }
    
            var mode = 0777 | os.ModeDir
            err = client.MkdirAll("/_test", mode)
            if err != nil {
                    fmt.Println("mk err:", err)
                    return
            }
    
            err = client.CopyToRemote("./testdata/mobydick.txt", "/_test/copytoremote.txt")
            if err != nil {
                    fmt.Println("cp err:", err)
                    return
            }
            fmt.Println("ok")
    }
    
    opened by baiweiguo 0
  • added a count command

    This adds basic support for the count command:

    https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/FileSystemShell.html#count

    Currently no command-line options are supported (other than -h), but I find this adds value to my own use cases as is.

    opened by gardenia 0
  • Setting ClientOptions.User causes kerberos realm to be left unset

    Here is my code:

    package main
    
    import (
    	"flag"
    	"io/ioutil"
    	"log"
    	"strings"
    
    	chdfs "github.com/colinmarc/hdfs/v2"        // v2.2.0
    	krb "github.com/jcmturner/gokrb5/v8/client" // v8.4.1
    	"github.com/jcmturner/gokrb5/v8/config"
    	"github.com/jcmturner/gokrb5/v8/keytab"
    )
    
    func main() {
    	username := flag.String("u", "", "user name")
    	principal := flag.String("p", "", "principal")
    	dtp := flag.String("d", "", "data transfer protection")
    	realm := flag.String("r", "", "realm")
    	keyTabFilePath := flag.String("t", "", "key tab file path")
    	krb5CfgFilePath := flag.String("c", "", "krb5 conf file path")
    	nn := flag.String("n", "", "name node list")
    	flag.Parse()
    
    	opts := chdfs.ClientOptions{
    		User:      *username,
    		Addresses: strings.Split(*nn, ","),
    	}
    	if *keyTabFilePath != "" {
    		// Load the keytab
    		dat, err := ioutil.ReadFile(*keyTabFilePath)
    		if err != nil {
    			log.Fatalf("open %s file error: %v", *keyTabFilePath, err)
    		}
    		kt := keytab.New()
    		if err := kt.Unmarshal(dat); err != nil {
    			log.Fatalf("could not load client keytab: %v", err)
    		}
    		log.Printf("Keytab_file info:\n%s", kt.String())
    		// Load the client krb5 config
    		krb5Cfg, err := ioutil.ReadFile(*krb5CfgFilePath)
    		if err != nil {
    			log.Fatalf("open %s file error: %v", *krb5CfgFilePaht, err)
    		}
    		conf, err := config.NewFromString(string(krb5Cfg))
    		if err != nil {
    			log.Fatalf("could not load krb5.conf %s: %v", string(krb5Cfg), err)
    		}
    		log.Printf("krb5_config file:\n%s", string(krb5Cfg))
    		// Create the client with the keytab
    		cl := krb.NewWithKeytab(*username, *realm, kt, conf)
    		if err := cl.Login(); err != nil {
    			log.Fatalf("login with user %s error: %v", *username, err)
    		}
    		opts.KerberosServicePrincipleName = *principal
    		opts.KerberosClient = cl
    		opts.DataTransferProtection = *dtp
    	}
    
    	c, err := chdfs.NewClient(opts)
    	if err != nil {
    		log.Fatalf("new hdfs client error: %v", err)
    	}
    	log.Printf("New client %v ok", c)
    	defer func() {
    		if err := c.Close(); err != nil {
    			log.Fatalf("close conn %+v error: %v", opts, err)
    		}
    		log.Fatalf("Close client %v ok", c)
    	}()
    	if _, err := c.ReadDir("/"); err != nil {
    		log.Printf("read / error: %v", err)
    	}
    	if err := c.CreateEmptyFile("/tmp/test123"); err != nil {
    		log.Printf("create empry file /tmp/test123 error: %v", err)
    	}
    	if err := c.Remove("/tmp/test123"); err != nil {
    		log.Printf("remove file /tmp/test123 error: %v", err)
    	}
    }
    
    

    Build and execute the above code:

    go build -o k k.go
    
    ./k -c ./krb5.conf -r TCE.COM -t ./hdfs_hadoop-client.keytab  -n hdfs-10-29-2-18.tcs.internal:9000,hdfs-10-29-2-152.tcs.internal:9000 -p hdfs/_HOST   -d integrity -u hdfs/hadoop-client
    

    Then I got the following EOF errors:

    2022/03/23 17:53:22 read / error: open /: EOF
    2022/03/23 17:53:22 create empty file /tmp/test123 error: create /tmp/test123: EOF
    2022/03/23 17:53:22 remove file /tmp/test123 error: remove /tmp/test123: EOF
    

    I have no idea what's wrong. It looks like I have been successfully authenticated and authorized, and connected to the namenodes, but I cannot access the DFS. The same code works fine in an environment without kerberos; any tips to help with further debugging would be appreciated.

    opened by yuchengwu 6
Releases(v2.3.0)
  • v2.3.0(Feb 11, 2022)

    This release, along with several bugfixes, contains new functionality:

    • Client.Truncate (#73, https://github.com/colinmarc/hdfs/commit/2f114063eda00d5847ff83c07afb9ad04e90f7b4) (thanks @junjieqian!)
    • Client.ServerDefaults (https://github.com/colinmarc/hdfs/commit/039ab59c24316d509005755f772a9a8dd7a27b5d)

    It contains one potentially dangerous behaviour change, in https://github.com/colinmarc/hdfs/commit/b02ab581bd500863b60a6d6718b48854c26acd23: FileWriter.Close will now correctly propagate errors in the situation where the namenode has not yet received all acks from the datanodes. Close returns a specific error in that case, ErrReplicating, which the client can either ignore or use in a retry loop. See also IsErrReplicating, for checking this case.
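
    As a sketch, a retry loop over Close might look like this (assuming file is an open *hdfs.FileWriter; the fixed one-second backoff is an illustrative assumption, not part of the library):

    err := file.Close()
    for hdfs.IsErrReplicating(err) {
    	// The namenode hasn't received all acks from the datanodes yet;
    	// wait a moment and try to complete the file again.
    	time.Sleep(time.Second)
    	err = file.Close()
    }
    if err != nil {
    	log.Fatal(err)
    }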

    gohdfs-v2.3.0-linux-amd64.tar.gz (5.27 MB)
  • v2.2.1(Feb 5, 2022)

  • v2.2.0(Dec 30, 2020)

    This overdue release contains a spate of bugfixes, as well as two major features contributed by others:

    • #236: Data transfer encryption support, for fully kerberized clusters.
    • #238: Extended attributes ("XAttr") support
  • v2.1.1(Dec 6, 2019)

  • v2.1.0(Nov 24, 2019)

    This is a small release. In addition to some bug fixes, it includes a few useful features:

    • #153, #154, #208, 574b0ba: Automatic renewal of file leases and heartbeats for write streams. Both features are necessary for writes to open files over a long period.
    • #144, #170, 1c841f7: Support for SASL-wrapped RPC communication with the namenode.
    • #205: Support for snapshots.
    • 6f7e441: A new method, RemoveAll, and a fix such that Remove is not recursive. Please be careful to check your usage of said function, as this is a fairly major change to behavior (see the sketch below).

    It also officially includes support for CDH6, although it probably worked before.
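
    As a sketch of the new split (assuming an existing *hdfs.Client; the path is an example):

    // Remove now fails on a non-empty directory, like os.Remove.
    if err := client.Remove("/tmp/scratch"); err != nil {
    	log.Println(err)
    }

    // RemoveAll deletes the directory and everything under it, like os.RemoveAll.
    if err := client.RemoveAll("/tmp/scratch"); err != nil {
    	log.Println(err)
    }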

    gohdfs-v2.1.0-linux-amd64.tar.gz (6.03 MB)
  • v2.0.0(Aug 5, 2018)

    This is a major release, including multiple breaking interface changes and new features.

    The library is now structured as a Go module. To use it, use the import path github.com/colinmarc/hdfs/v2.

    Kerberos support

    Added in #133, with lots of help from @Shastick and @staticmukesh. This adds basic kerberos authentication to the library and the command line client - to the latter with support for ccaches. This much-requested feature should be ready for production use, but I would love your feedback and/or bug reports.

    Timeouts and other useful configuration

    Client now has two new options, NamenodeDialFunc and DatanodeDialFunc, which can be used to replace net.Dial and to set timeouts, keepalives, and other useful things. You can also use SetDeadline on FileReader and FileWriter to enforce i/o timeouts. See #139 for more information.
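
    For instance, dial timeouts and keepalives might be wired up like this (a sketch; the address, durations, and file path are arbitrary examples):

    dialer := &net.Dialer{
    	Timeout:   5 * time.Second,
    	KeepAlive: 30 * time.Second,
    }
    client, err := hdfs.NewClient(hdfs.ClientOptions{
    	Addresses:        []string{"namenode:8020"},
    	NamenodeDialFunc: dialer.DialContext,
    	DatanodeDialFunc: dialer.DialContext,
    })
    if err != nil {
    	log.Fatal(err)
    }

    file, err := client.Open("/mobydick.txt")
    if err != nil {
    	log.Fatal(err)
    }
    // Enforce an i/o deadline on subsequent reads from this file.
    file.SetDeadline(time.Now().Add(10 * time.Second))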

    Your hdfs.Client can be made to respect your Hadoop configuration with the hdfs.ClientOptionsFromConf method. This looks for relevant options from the configuration and tries to configure the client to match. While this doesn't do that much right now, it may be expanded to other things in the future.
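
    A sketch of loading options from the system Hadoop configuration (the fallback address is an example):

    conf, err := hadoopconf.LoadFromEnvironment()
    if err != nil {
    	log.Fatal(err)
    }
    options := hdfs.ClientOptionsFromConf(conf)
    if options.Addresses == nil {
    	// Nothing found in HADOOP_HOME/HADOOP_CONF_DIR; fall back to an explicit address.
    	options.Addresses = []string{"namenode:8020"}
    }
    client, err := hdfs.NewClient(options)
    if err != nil {
    	log.Fatal(err)
    }
    defer client.Close()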

    Incompatible interface changes

    The rpc package is now internal; keeping the interface backwards compatible was too difficult, and nothing in that subpackage was really useful externally anyway. rpc.NamenodeError, which was possibly the only useful export, now implements an interface, hdfs.Error.

    The configuration parsing/loading code and the HadoopConf type have been moved into their own package, hadoopconf. The interfaces are also slightly tweaked.

    Finally, all deprecated methods, such as hdfs.NewForUser, have been removed.

    gohdfs-v2.0.0-darwin-amd64.tar.gz (5.16 MB)
    gohdfs-v2.0.0-linux-amd64.tar.gz (5.22 MB)
  • v1.1.3(Jun 22, 2018)

  • v1.1.2(Jun 22, 2018)

    This is a minor bugfix release, but contains one possibly breaking change.

    As of #123 (change proposed by @hollow), the client will now use hostnames to connect to datanodes (when available), rather than IP addresses. If you experience problems with that, you may have an issue with your DNS configuration.

  • v1.1.1(May 31, 2018)

    This (long-overdue) release contains a host of minor bug fixes, as well as a few new features:

    • #98 - hdfs put now reads from stdin. (submitted by @Shastick)
    • #101 - the client has a new method, Walk, analogous to filepath.Walk; see the sketch after this list. (submitted by @Shastick)
    • #107 (fixed in 84dbd09) - FileWriter now exposes Flush, for syncing data to disk.
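
    A sketch of Walk, which takes a filepath.WalkFunc (assuming an existing *hdfs.Client; the root path is an example):

    err := client.Walk("/", func(path string, info os.FileInfo, err error) error {
    	if err != nil {
    		return err
    	}
    	fmt.Println(path, info.Size())
    	return nil
    })
    if err != nil {
    	log.Fatal(err)
    }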

    The other notable interface change is the deprecation of NewForUser and NewForConnection in 0f30457.

    gohdfs-v1.1.1-linux-amd64.tar.gz (1.78 MB)
  • v1.1.0(Oct 26, 2017)

  • v1.0.4(Aug 23, 2017)

  • v1.0.3(Mar 13, 2017)

  • v1.0.2(Feb 25, 2017)

  • v1.0.1(Jan 23, 2017)

  • v1.0.0(Jan 23, 2017)

    This is a much-belated major release which includes lots of fixes and changes.

    Major feature additions:

    • #12, #43, #45 and #53 collectively add write support, including Create and Append functions, a FileWriter type, an hdfs put command, and more; see the sketch after this list
    • #41 adds LoadHadoopConf and other tools for accessing the system Hadoop configuration
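
    A sketch of the write API added here (assuming an existing *hdfs.Client; the path and contents are examples):

    file, err := client.Create("/tmp/hello.txt")
    if err != nil {
    	log.Fatal(err)
    }
    if _, err := file.Write([]byte("hello, hdfs\n")); err != nil {
    	log.Fatal(err)
    }
    if err := file.Close(); err != nil {
    	log.Fatal(err)
    }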

    Minor fixes and changes:

    • #16 corrects MkdirAll such that it doesn't return an error when the directory already exists
    • #20 fixes ReadAt and Tail to work correctly with short files
    • #55 updates Rename and hdfs mv to overwrite by default
    • #18 fixes the semantics around picking a default user; the env variable HADOOP_USER_NAME is used if present and overrides the OS user.
  • v0.1.4(Mar 1, 2015)

    This is a small release, adding a few new features and fixes:

    • 93e320c - Tail prints to stdout now, instead of stderr
    • 7306883 - Directories created as part of a get or getmerge will be created as 0755, instead of 0644
    • 5ced062 - Adds GetContentSummary, which is an efficient way to get the total disk usage for a directory tree (among other things). This made the du command much faster.
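
    A sketch of GetContentSummary as it exists in the current API (assuming an existing *hdfs.Client; the path is an example):

    summary, err := client.GetContentSummary("/user/colin")
    if err != nil {
    	log.Fatal(err)
    }
    // Total size in bytes, plus file and directory counts for the whole tree.
    fmt.Println(summary.Size(), summary.FileCount(), summary.DirectoryCount())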
  • v0.1.3(Jan 6, 2015)

    This is a minor release, with a few bug fixes and new features:

    • the addition of ls -h (efbd8f6) and du (0b6bc39)
    • a fix for a minor formatting issue with ls -l (738b7cc)
    • a fix for a bug that could cause ReadDir and friends to loop forever (d4eafeb)
  • v0.1.2(Dec 19, 2014)

    This release, besides containing a few minor fixes, adds the FileReader.Checksum method, as well as a corresponding subcommand to the command-line client (see 3caac15a68e61fa296987daf2f70fdcf6a2edfcd).

  • v0.1.1(Nov 21, 2014)

    This release includes a few small bug fixes, and, more notably, the ability for the client to fail over between datanodes (implemented in 74e36cc).

  • v0.1.0(Nov 4, 2014)

Owner
Colin Marc