Making it easy to write shell-like scripts in Go

Overview


import github.com/bitfield/script

What is script?

script is a Go library for doing the kind of tasks that shell scripts are good at: reading files, executing subprocesses, counting lines, matching strings, and so on.

Why shouldn't it be as easy to write system administration programs in Go as it is in a typical shell? script aims to make it just that easy.

Shell scripts often compose a sequence of operations on a stream of data (a pipeline). This is how script works, too.

What can I do with it?

Let's see a simple example. Suppose you want to read the contents of a file as a string:

contents, err := script.File("test.txt").String()

That looks straightforward enough, but suppose you now want to count the lines in that file.

numLines, err := script.File("test.txt").CountLines()

For something a bit more challenging, let's try counting the number of lines in the file which match the string "Error":

numErrors, err := script.File("test.txt").Match("Error").CountLines()

But what if, instead of reading a specific file, we want to simply pipe input into this program, and have it output only matching lines (like grep)?

script.Stdin().Match("Error").Stdout()

That was almost too easy! So let's pass in a list of files on the command line, and have our program read them all in sequence and output the matching lines:

script.Args().Concat().Match("Error").Stdout()

Maybe we're only interested in the first 10 matches. No problem:

script.Args().Concat().Match("Error").First(10).Stdout()

What's that? You want to append that output to a file instead of printing it to the terminal? You've got some attitude, mister.

script.Args().Concat().Match("Error").First(10).AppendFile("/var/log/errors.txt")

You can find more example programs (mostly replicating traditional Unix utilities) in the examples/ directory.

Who wrote this?

As well as maintaining this library, John Arundel of Bitfield Consulting is a highly experienced Go trainer and mentor who can teach you Go from scratch, take you beyond the basics, or even help you reach complete mastery of the Go programming language. See Learn Go with Bitfield for details, or email [email protected] to find out more.

John is also the author of the popular For the Love of Go series of ebooks for Go beginners and improvers. Priced at a cheerful $4.99 each, they'll introduce you to the key concepts of Go in a friendly and patient way, and encourage you to start writing useful Go code right from the first page.

How does it work?

Those chained function calls look a bit weird. What's going on there?

One of the neat things about the Unix shell, and its many imitators, is the way you can compose operations into a pipeline:

cat test.txt | grep Error | wc -l

The output from each stage of the pipeline feeds into the next, and you can think of each stage as a filter which passes on only certain parts of its input to its output.

By comparison, writing shell-like scripts in raw Go is much less convenient, because everything you do returns a different data type, and you must (or at least should) check errors following every operation.

In scripts for system administration we often want to compose different operations like this in a quick and convenient way. If an error occurs somewhere along the pipeline, we would like to check this just once at the end, rather than after every operation.

Everything is a pipe

The script library allows us to do this because everything is a pipe (specifically, a script.Pipe). To create a pipe, start with a source like File():

var p script.Pipe
p = script.File("test.txt")

You might expect File() to return an error if there is a problem opening the file, but it doesn't. We will want to call a chain of methods on the result of File(), and it's inconvenient to do that if it also returns an error.

Instead, you can check the error status of the pipe at any time by calling its Error() method:

p = script.File("test.txt")
if p.Error() != nil {
    log.Fatalf("oh no: %v", p.Error())
}

What use is a pipe?

Now, what can you do with this pipe? You can call a method on it:

var q script.Pipe
q = p.Match("Error")

Note that the result of calling a method on a pipe is another pipe. You can do this in one step, for convenience:

var q script.Pipe
q = script.File("test.txt").Match("Error")

Handling errors

Woah, woah! Just a minute! What if there was an error opening the file in the first place? Won't Match blow up if it tries to read from a non-existent file?

No, it won't. As soon as an error status is set on a pipe, all operations on the pipe become no-ops. Any operation which would normally return a new pipe just returns the old pipe unchanged. So you can run as long a pipeline as you want to, and if an error occurs at any stage, nothing will crash, and you can check the error status of the pipe at the end.

(Seasoned Gophers will recognise this as the errWriter pattern described by Rob Pike in the blog post Errors are values.)
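A minimal sketch of that pattern, using a hypothetical miniPipe type rather than the library's actual implementation, looks like this:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// miniPipe sketches the errWriter pattern: once err is set, every
// subsequent operation becomes a no-op that passes the pipe through.
type miniPipe struct {
	data string
	err  error
}

func (p *miniPipe) Upper() *miniPipe {
	if p.err != nil {
		return p // error already set: do nothing
	}
	p.data = strings.ToUpper(p.data)
	return p
}

// Fail simulates a stage that encounters an error.
func (p *miniPipe) Fail() *miniPipe {
	if p.err == nil {
		p.err = errors.New("something broke")
	}
	return p
}

// String is the sink: it reports the error, if any, just once.
func (p *miniPipe) String() (string, error) {
	if p.err != nil {
		return "", p.err
	}
	return p.data, nil
}

func main() {
	// The error occurs mid-pipeline; later stages are skipped, and we
	// check the error exactly once, at the end.
	_, err := (&miniPipe{data: "hello"}).Fail().Upper().String()
	fmt.Println(err)
}
```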

Getting output

A pipe is useless if we can't get some output from it. To do this, you can use a sink, such as String():

result, err := q.String()
if err != nil {
    log.Fatalf("oh no: %v", err)
}
fmt.Println(result)

Errors

Note that sinks return an error value in addition to the data. This is the same value you would get by calling p.Error(). If the pipe had an error in any operation along the pipeline, the pipe's error status will be set, and a sink operation which gets output will return the zero value, plus the error.

numLines, err := script.File("doesnt_exist.txt").CountLines()
fmt.Println(numLines)
// Output: 0
fmt.Println(err)
// Output: open doesnt_exist.txt: no such file or directory

CountLines() is another useful sink, which simply returns the number of lines read from the pipe.

Closing pipes

If you've dealt with files in Go before, you'll know that you need to close the file once you've finished with it. Otherwise, the program will retain what's called a file handle (the kernel data structure which represents an open file). There is a limit to the total number of open file handles for a given program, and for the system as a whole, so a program which leaks file handles will eventually crash, and will waste resources in the meantime.

Files aren't the only things which need to be closed after reading: so do network connections, HTTP response bodies, and so on.

How does script handle this? Simple. The data source associated with a pipe will be automatically closed once it is read completely. Therefore, calling any sink method which reads the pipe to completion (such as String()) will close its data source. The only case in which you need to call Close() on a pipe is when you don't read from it, or you don't read it to completion.

If the pipe was created from something that doesn't need to be closed, such as a string, then calling Close() simply does nothing.

This is implemented using a type called ReadAutoCloser, which takes an io.Reader and wraps it so that:

  1. it is always safe to close (if it's not a closable resource, it will be wrapped in an ioutil.NopCloser to make it one), and
  2. it is closed automatically once read to completion (specifically, once the Read() call on it returns io.EOF).

It is your responsibility to close a pipe if you do not read it to completion.

Why not just use shell?

It's a fair question. Shell scripts and one-liners are perfectly adequate for building one-off tasks, initialization scripts, and the kind of 'glue code' that holds the internet together. I speak as someone who's spent at least thirty years doing this for a living. But in many ways they're not ideal for important, non-trivial programs:

  • Trying to build portable shell scripts is a nightmare. The exact syntax and options of Unix commands varies from one distribution to another. Although in theory POSIX is a workable common subset of functionality, in practice it's usually precisely the non-POSIX behaviour that you need.

  • Shell scripts are hard to test (though test frameworks have been written, and if you're seriously putting mission-critical shell scripts into production, you should be using them, or reconsidering your technology choices).

  • Shell scripts don't scale. Because there are very limited facilities for logic and abstraction, and because any successful program tends to grow remorselessly over time, shell scripts can become an unreadable mess of special cases and spaghetti code. We've all seen it, if not, indeed, done it.

  • Shell syntax is awkward: quoting, whitespace, and brackets can require a lot of fiddling to get right, and so many characters are magic to the shell (*, ?, > and so on) that this can lead to subtle bugs. Scripts can work fine for years until you suddenly encounter a file whose name contains whitespace, and then everything breaks horribly.

  • Deploying shell scripts obviously requires at least a (sizable) shell binary in addition to the source code, but it usually also requires an unknown and variable number of extra userland programs (cut, grep, head, and friends). If you're building container images, for example, you effectively need to include a whole Unix distribution with your program, which runs to hundreds of megabytes, and is not at all in the spirit of containers.

To be fair to the shell, this kind of thing is not what it was ever intended for. Shell is an interactive job control tool for launching programs, connecting programs together, and to a limited extent, manipulating text. It's not for building portable, scalable, reliable, and elegant programs. That's what Go is for.

Go has a superb testing framework built right into the standard library. It has a superb standard library, and thousands of high-quality third-party packages for just about any functionality you can imagine. It is compiled, so it's fast, and statically typed, so it's reliable. It's efficient and memory-safe. Go programs can be distributed as a single binary. Go scales to enormous projects (Kubernetes, for example).

The script library is implemented entirely in Go, and does not require any userland programs (or any other dependencies) to be present. Thus you can build your script program as a container image containing a single (very small) binary, which is quick to build, quick to upload, quick to deploy, quick to run, and economical with resources.

If you've ever struggled to get a shell script containing a simple if statement to work (and who hasn't?), then the script library is dedicated to you.

A real-world example

Let's use script to write a program which system administrators might actually need. One thing I often find myself doing is counting the most frequent visitors to a website over a given period of time. Given an Apache log in the Common Log Format like this:

212.205.21.11 - - [30/Jun/2019:17:06:15 +0000] "GET / HTTP/1.1" 200 2028 "https://example.com/" "Mozilla/5.0 (Linux; Android 8.0.0; FIG-LX1 Build/HUAWEIFIG-LX1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.156 Mobile Safari/537.36"

we would like to extract the visitor's IP address (the first column in the logfile), and count the number of times this IP address occurs in the file. Finally, we might like to list the top 10 visitors by frequency. In a shell script we might do something like:

cut -d' ' -f 1 access.log |sort |uniq -c |sort -rn |head

There's a lot going on there, and it's pleasing to find that the equivalent script program is quite brief:

package main

import (
	"github.com/bitfield/script"
)

func main() {
	script.Stdin().Column(1).Freq().First(10).Stdout()
}

(Thanks to Lucas Bremgartner for suggesting this example. You can find the complete program, along with a sample logfile, in the examples/visitors/ directory.)

Quick start: Unix equivalents

If you're already familiar with shell scripting and the Unix toolset, here is a rough guide to the equivalent script operation for each listed Unix command.

Unix / shell          script equivalent
(any program name)    Exec()
[ -f FILE ]           IfExists()
>                     WriteFile()
>>                    AppendFile()
$*                    Args()
basename              Basename()
cat                   File() / Concat()
cut                   Column()
dirname               Dirname()
echo                  Echo()
find -type f          FindFiles()
grep                  Match() / MatchRegexp()
grep -v               Reject() / RejectRegexp()
head                  First()
ls                    ListFiles()
sed                   Replace() / ReplaceRegexp()
sha256sum             SHA256Sum() / SHA256Sums()
tail                  Last()
uniq -c               Freq()
wc -l                 CountLines()
xargs                 ExecForEach()

Sources, filters, and sinks

script provides three types of pipe operations: sources, filters, and sinks.

  1. Sources create pipes from input in some way (for example, File() opens a file).
  2. Filters read from a pipe and filter the data in some way (for example Match() passes on only lines which contain a given string).
  3. Sinks get the output from a pipeline in some useful form (for example String() returns the contents of the pipe as a string), along with any error status.

Let's look at the source, filter, and sink options that script provides.

Sources

These are operations which create a pipe.

Args

Args() creates a pipe containing the program's command-line arguments, one per line.

p := script.Args()
output, err := p.String()
fmt.Println(output)
// Output: command-line arguments

Echo

Echo() creates a pipe containing a given string:

p := script.Echo("Hello, world!")
output, err := p.String()
fmt.Println(output)
// Output: Hello, world!

Exec

Exec() runs a given command and creates a pipe containing its combined output (stdout and stderr). If there was an error running the command, the pipe's error status will be set.

p := script.Exec("bash -c 'echo hello'")
output, err := p.String()
fmt.Println(output)
// Output: hello

Note that Exec() can also be used as a filter, in which case the given command will read from the pipe as its standard input.

Exit status

If the command returns a non-zero exit status, the pipe's error status will be set to the string "exit status X", where X is the integer exit status.

p := script.Exec("ls doesntexist")
output, err := p.String()
fmt.Println(err)
// Output: exit status 1

For convenience, you can get this value directly as an integer by calling ExitStatus() on the pipe:

p := script.Exec("ls doesntexist")
var exit int = p.ExitStatus()
fmt.Println(exit)
// Output: 1

The value of ExitStatus() will be zero unless the pipe's error status matches the string "exit status X", where X is a non-zero integer.
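That parsing rule can be sketched with a small stand-in helper (hypothetical, not the library's code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// exitStatus extracts N from an error message of the exact form
// "exit status N", returning 0 for any other message -- mirroring
// the rule ExitStatus() documents.
func exitStatus(errMsg string) int {
	rest := strings.TrimPrefix(errMsg, "exit status ")
	if rest == errMsg {
		return 0 // no "exit status " prefix: not an exit-status error
	}
	n, err := strconv.Atoi(rest)
	if err != nil {
		return 0
	}
	return n
}

func main() {
	fmt.Println(exitStatus("exit status 1"))
	fmt.Println(exitStatus("file not found"))
}
```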

Error output

Even in the event of a non-zero exit status, the command's output will still be available in the pipe. This is often helpful for debugging. However, because String() is a no-op if the pipe's error status is set, if you want output you will need to reset the error status before calling String():

p := script.Exec("man bogus")
p.SetError(nil)
output, err := p.String()
fmt.Println(output)
// Output: No manual entry for bogus

File

File() creates a pipe that reads from a file.

p := script.File("test.txt")
output, err := p.String()
fmt.Println(output)
// Output: contents of file

IfExists

IfExists() tests whether the specified file exists. If so, the returned pipe will have no error status. If it doesn't exist, the returned pipe will have an appropriate error set.

p := script.IfExists("doesntexist.txt")
output, err := p.String()
fmt.Println(err)
// Output: stat doesntexist.txt: no such file or directory

This can be used to create pipes which take some action only if a certain file exists:

script.IfExists("/foo/bar").Exec("/usr/bin/yada")

FindFiles

FindFiles() lists all files in a directory and its subdirectories recursively, like Unix find -type f.

script.FindFiles("/tmp").Stdout()
// lists all files in /tmp and its subtrees

ListFiles

ListFiles() lists files, like Unix ls. It creates a pipe containing all files and directories matching the supplied path specification, one per line. This can be the name of a directory (/path/to/dir), the name of a file (/path/to/file), or a glob (wildcard expression) conforming to the syntax accepted by filepath.Match() (/path/to/*).

p := script.ListFiles("/tmp/*.php")
files, err := p.String()
if err != nil {
	log.Fatal(err)
}
fmt.Println("found suspicious PHP files in /tmp:")
fmt.Println(files)

Slice

Slice() creates a pipe from a slice of strings, one per line.

p := script.Slice([]string{"1", "2", "3"})
output, err := p.String()
fmt.Println(output)
// Output:
// 1
// 2
// 3

Stdin

Stdin() creates a pipe which reads from the program's standard input.

p := script.Stdin()
output, err := p.String()
fmt.Println(output)
// Output: [contents of standard input]

Filters

Filters are operations on an existing pipe that also return a pipe, allowing you to chain filters indefinitely.

Basename

Basename() reads a list of filepaths from the pipe, one per line, and removes any leading directory components from each line (so, for example, /usr/local/bin/foo would become just foo). This is the complement of Dirname.

If a line is empty, Basename() will produce a single dot (.). Trailing slashes are removed.

Examples:

Input               Basename output
(empty line)        .
/                   .
/root               root
/tmp/example.php    example.php
/var/tmp/           tmp
./src/filters       filters
C:/Program Files    Program Files

Column

Column() reads input tabulated by whitespace, and outputs only the Nth column of each input line (like Unix cut). Lines containing fewer than N columns will be ignored.

For example, given this input:

  PID   TT  STAT      TIME COMMAND
    1   ??  Ss   873:17.62 /sbin/launchd
   50   ??  Ss    13:18.13 /usr/libexec/UserEventAgent (System)
   51   ??  Ss    22:56.75 /usr/sbin/syslogd

and this program:

script.Stdin().Column(1).Stdout()

this will be the output:

PID
1
50
51
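The per-line logic can be sketched like this (a simplified stand-in for the filter, using the standard library's whitespace splitting):

```go
package main

import (
	"fmt"
	"strings"
)

// column returns the n-th whitespace-separated field of line (1-based),
// or ok=false when the line has fewer than n fields -- such lines are
// ignored, as Column() documents.
func column(line string, n int) (field string, ok bool) {
	fields := strings.Fields(line)
	if len(fields) < n {
		return "", false
	}
	return fields[n-1], true
}

func main() {
	for _, line := range []string{"  PID   TT  STAT", "    1   ??  Ss"} {
		if f, ok := column(line, 1); ok {
			fmt.Println(f)
		}
	}
}
```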

Concat

Concat() reads a list of filenames from the pipe, one per line, and creates a pipe which concatenates the contents of those files. For example, if you have files a, b, and c:

output, err := script.Echo("a\nb\nc\n").Concat().String()
fmt.Println(output)
// Output: contents of a, followed by contents of b, followed
// by contents of c

This makes it convenient to write programs which take a list of input files on the command line, for example:

func main() {
	script.Args().Concat().Stdout()
}

The list of files could also come from a file:

// Read all files in filelist.txt
p := script.File("filelist.txt").Concat()

...or from the output of a command:

// Print all config files to the terminal.
script.Exec("ls /var/app/config/").Concat().Stdout()

Each input file will be closed once it has been fully read. If any of the files can't be opened or read, Concat() will simply skip these and carry on, without setting the pipe's error status. This mimics the behaviour of Unix cat.

Dirname

Dirname() reads a list of pathnames from the pipe, one per line, and returns a pipe which contains only the parent directories of each pathname (so, for example, /usr/local/bin/foo would become just /usr/local/bin). This is the complement of Basename.

If a line is empty, Dirname() will convert it to a single dot (.), matching the behaviour of Unix dirname and the Go standard library's filepath.Dir.

Trailing slashes are removed, unless Dirname() returns the root folder.

Examples:

Input               Dirname output
(empty line)        .
/                   /
/root               /
/tmp/example.php    /tmp
/var/tmp/           /var
./src/filters       ./src
C:/Program Files    C:

EachLine

EachLine() lets you create custom filters. You provide a function, and it will be called once for each line of input. If you want to produce output, your function can write to a supplied strings.Builder. The return value from EachLine is a pipe containing your output.

p := script.File("test.txt")
q := p.EachLine(func(line string, out *strings.Builder) {
	out.WriteString("> " + line + "\n")
})
output, err := q.String()
fmt.Println(output)

Exec

Exec() runs a given command, which will read from the pipe as its standard input, and returns a pipe containing the command's combined output (stdout and stderr). If there was an error running the command, the pipe's error status will be set.

Apart from connecting the pipe to the command's standard input, the behaviour of an Exec() filter is the same as that of an Exec() source.

// `cat` copies its standard input to its standard output.
p := script.Echo("hello world").Exec("cat")
output, err := p.String()
fmt.Println(output)
// Output: hello world

ExecForEach

ExecForEach runs the supplied command once for each line of input, and returns a pipe containing the output, like Unix xargs.

The command string is interpreted as a Go template, so {{.}} will be replaced with the input value, for example.

The first command which results in an error will set the pipe's error status accordingly, and no subsequent commands will be run.

// Execute all PHP files in current directory and print output
script.ListFiles("*.php").ExecForEach("php {{.}}").Stdout()

First

First() reads its input and passes on the first N lines of it (like Unix head):

script.Stdin().First(10).Stdout()

Freq

Freq() counts the frequencies of input lines, and outputs only the unique lines in the input, each prefixed with a count of its frequency, in descending order of frequency (that is, most frequent lines first). Lines with the same frequency will be sorted alphabetically. For example, given this input:

banana
apple
orange
apple
banana

and a program like:

script.Stdin().Freq().Stdout()

the output will be:

2 apple
2 banana
1 orange

This is a common pattern in shell scripts to find the most frequently-occurring lines in a file:

sort testdata/freq.input.txt |uniq -c |sort -rn

Freq()'s behaviour is like the combination of Unix sort, uniq -c, and sort -rn used here. You can use Freq() in combination with First() to get, for example, the ten most common lines in a file:

script.Stdin().Freq().First(10).Stdout()

Like uniq -c, Freq() left-pads its count values if necessary to make them easier to read:

10 apple
 4 banana
 2 orange
 1 kumquat
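The core of this ordering can be sketched with a map and a sort. This is a simplified stand-in (it omits the count padding shown above), not the library's implementation:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// freq returns the unique input lines prefixed with their counts,
// most frequent first, ties broken alphabetically -- the ordering
// Freq() documents.
func freq(lines []string) []string {
	counts := map[string]int{}
	for _, l := range lines {
		counts[l]++
	}
	unique := make([]string, 0, len(counts))
	for l := range counts {
		unique = append(unique, l)
	}
	sort.Slice(unique, func(i, j int) bool {
		if counts[unique[i]] != counts[unique[j]] {
			return counts[unique[i]] > counts[unique[j]]
		}
		return unique[i] < unique[j] // same count: alphabetical
	})
	out := make([]string, len(unique))
	for i, l := range unique {
		out[i] = fmt.Sprintf("%d %s", counts[l], l)
	}
	return out
}

func main() {
	lines := strings.Split("banana\napple\norange\napple\nbanana", "\n")
	fmt.Println(strings.Join(freq(lines), "\n"))
}
```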

Join

Join() reads its input and replaces newlines with spaces, preserving a terminating newline if there is one.

p := script.Echo("hello\nworld\n").Join()
output, err := p.String()
fmt.Println(output)
// Output: hello world\n
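The replacement rule, including the preserved trailing newline, can be sketched as (a stand-in helper, not the library's code):

```go
package main

import (
	"fmt"
	"strings"
)

// join replaces newlines with spaces, preserving a terminating
// newline if the input had one -- the behaviour Join() documents.
func join(s string) string {
	hadNewline := strings.HasSuffix(s, "\n")
	out := strings.ReplaceAll(strings.TrimSuffix(s, "\n"), "\n", " ")
	if hadNewline {
		out += "\n"
	}
	return out
}

func main() {
	fmt.Print(join("hello\nworld\n"))
}
```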

Last

Last() reads its input and passes on the last N lines of it (like Unix tail):

script.Stdin().Last(10).Stdout()

Match

Match() returns a pipe containing only the input lines which match the supplied string:

p := script.File("test.txt").Match("Error")

MatchRegexp

MatchRegexp() is like Match(), but takes a compiled regular expression instead of a string.

p := script.File("test.txt").MatchRegexp(regexp.MustCompile(`E.*r`))

Reject

Reject() is the inverse of Match(). Its pipe produces only lines which don't contain the given string:

p := script.File("test.txt").Match("Error").Reject("false alarm")

RejectRegexp

RejectRegexp() is like Reject(), but takes a compiled regular expression instead of a string.

p := script.File("test.txt").Match("Error").RejectRegexp(regexp.MustCompile(`false|bogus`))

Replace

Replace() returns a pipe which filters its input by replacing all occurrences of one string with another, like Unix sed:

p := script.File("test.txt").Replace("old", "new")

ReplaceRegexp

ReplaceRegexp() returns a pipe which filters its input by replacing all matches of a compiled regular expression with a supplied replacement string, like Unix sed:

p := script.File("test.txt").ReplaceRegexp(regexp.MustCompile("Gol[a-z]{1}ng"), "Go")

SHA256Sums

SHA256Sums() reads a list of file paths from the pipe, one per line, and returns a pipe which contains the SHA-256 checksum of each file. If there are any errors (for example, non-existent files), the pipe's error status will be set to the first error encountered, but execution will continue.

Examples:

Input                               SHA256Sums output
testdata/sha256Sum.input.txt        1870478d23b0b4db37735d917f4f0ff9393dd3e52d8b0efa852ab85536ddad8e
testdata/multiple_files/1.txt       e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
testdata/multiple_files/2.txt       e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
testdata/multiple_files/3.tar.gz    e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

Sinks

Sinks are operations which return some data from a pipe, ending the pipeline.

AppendFile

AppendFile() is like WriteFile(), but appends to the destination file instead of overwriting it. It returns the number of bytes written, or an error:

var wrote int
wrote, err := script.Echo("Got this far!").AppendFile("logfile.txt")

Bytes

Bytes() returns the contents of the pipe as a slice of byte, plus an error:

var data []byte
data, err := script.File("test.bin").Bytes()

CountLines

CountLines(), as the name suggests, counts lines in its input, and returns the number of lines as an integer, plus an error:

var numLines int
numLines, err := script.File("test.txt").CountLines()

Read

Read() behaves just like the standard Read() method on any io.Reader:

buf := make([]byte, 256)
n, err := p.Read(buf)

Because a Pipe is an io.Reader, you can use it anywhere you would use a file, network connection, and so on. You can pass it to ioutil.ReadAll, io.Copy, json.NewDecoder, and anything else which takes an io.Reader.

Unlike most sinks, Read() does not read the whole contents of the pipe (unless the supplied buffer is big enough to hold them).

SHA256Sum

SHA256Sum(), as the name suggests, returns the SHA-256 checksum of the file as a hexadecimal string, plus an error:

var sha256Sum string
sha256Sum, err := script.File("test.txt").SHA256Sum()

Why not MD5?

MD5 is insecure.

Slice

Slice() returns the contents of the pipe as a slice of strings, one element per line, plus an error. An empty pipe will produce an empty slice. A pipe containing a single empty line (that is, a single \n character) will produce a slice of one element which is the empty string.

args, err := script.Args().Slice()
for _, a := range args {
	fmt.Println(a)
}
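The splitting rule described above (empty pipe gives an empty slice; a single newline gives one empty string) can be sketched as:

```go
package main

import (
	"fmt"
	"strings"
)

// toSlice mimics Slice()'s documented rule: empty input produces an
// empty slice, and a lone newline produces a single empty string.
// (A simplified stand-in, not the library's implementation.)
func toSlice(s string) []string {
	if s == "" {
		return []string{}
	}
	// Drop the final newline, then split on the rest.
	return strings.Split(strings.TrimSuffix(s, "\n"), "\n")
}

func main() {
	fmt.Println(len(toSlice("")))   // empty pipe: empty slice
	fmt.Println(len(toSlice("\n"))) // single newline: one empty string
	fmt.Println(toSlice("foo\nbar\n"))
}
```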

Stdout

Stdout() writes the contents of the pipe to the program's standard output. It returns the number of bytes written, or an error:

p := script.Echo("hello world")
wrote, err := p.Stdout()

In conjunction with Stdin(), Stdout() is useful for writing programs which filter input. For example, here is a program which simply copies its input to its output, like cat:

func main() {
	script.Stdin().Stdout()
}

To filter only lines matching a string:

func main() {
	script.Stdin().Match("hello").Stdout()
}

String

String() returns the contents of the pipe as a string, plus an error:

contents, err := script.File("test.txt").String()

Note that String(), like all sinks, consumes the complete output of the pipe, which closes the input reader automatically. Therefore, calling String() (or any other sink method) again on the same pipe will return an error:

p := script.File("test.txt")
_, _ = p.String()
_, err := p.String()
fmt.Println(err)
// Output: read test.txt: file already closed

WriteFile

WriteFile() writes the contents of the pipe to a named file. It returns the number of bytes written, or an error:

var wrote int
wrote, err := script.File("source.txt").WriteFile("destination.txt")

Examples

Since script is designed to help you write system administration programs, a few simple examples of such programs are included in the examples directory.

More examples would be welcome!

If you use script for real work (or, for that matter, real play), I'm always very interested to hear about it. Drop me a line to [email protected] and tell me how you're using script and what you think of it!

Video tutorial

The script library was covered in a recent episode of Go Code Club, where each week John and friends read and discuss a different Go package and explain it to one another.

Watch the episode here: Code Club: Script

How can I contribute?

See the contributor's guide for some helpful tips.

Comments
  • Get user input

    Get user input

    It's common when writing scripts that install or configure things to need input from the user (at a minimum, something like 'Press Enter to continue'; at a maximum, to be able to prompt the user for input, with an optional default value, and return that value).

    Let's use this issue to design how that would look, and I invite suggestions!

    enhancement help wanted good first issue 
    opened by bitfield 21
  • Try Yaegi

    Try Yaegi

    This could be a good companion for script: https://github.com/containous/yaegi

    If it works, add some documentation to the README on how to use it, with examples.

    help wanted good first issue 
    opened by bitfield 18
  • Filters sort and uniq

    Filters sort and uniq

    First of all, this is a really interesting package, thanks for the effort.

    In my scripts I often use the shell commands sort and uniq so I feel these two would be a good additions for this package.

    opened by breml 18
  • Nice way to handle env variable?

    Nice way to handle env variable?

    Some methods to set environment varialbes for single command, instead of this:

    os.Setenv("XB_AUTHOR_DATE", dateStr)
    os.Setenv("XB_COMMITTER_DATE", dateStr)
    

    Maybe in old shell way:

    XB_AUTHOR_DATE="2021-04-05" XB_COMMITTER_DATE="2021-04-05" xb_push xxxx
    
    opened by Prizrako 15
  • Execute with an argument list

    Execute with an argument list

    What is Exec doing under the hood? Is it invoking a shell, or is it parsing the string itself? And how can we use an explicit argument list and not a string?

    Generally, I think a major shortcoming of the readme is that it does not clarify the security considerations of using this library. Shell is very insecure, and one of the major reasons not to use it is to have better security.

    Update: I now see this is kind of a duplicate: https://github.com/bitfield/script/issues/32

    opened by NightMachinery 15
  • [Feature Request] Tee

    [Feature Request] Tee

    It would be cool to have a way to write a buffer to multiple sinks, in cases where you want to intercept parts of the incoming data but still pipe the rest for other matches...

    script.Stdin().Match("error").Tee(script.File("all_errors")).Match("foo").Stdout()
    
    cat logs | grep error | tee all_errors | grep foo 
    

    In this scenario Tee writes to the File whilst still piping the content to the next item on the pipe Match that eventually pipes to stdout or any other sink.

    @bitfield WDYT?

    Something like:

    fun (*p Pipe) Tee(io.Writer) *p { ... }
    // or 
    fun (*p Pipe) Tee(p Pipe) *p { ... }
    
    opened by marceloboeira 14
  • Add a jq filter

    Add a jq filter

    Modern shell scripts interact often with a REST API or a command line program producing json (kubectl for example). Slicing and dicing the data returned by those APIs is often done with https://stedolan.github.io/jq/.

    https://github.com/itchyny/gojq provides a go native implementation of jq.

    I propose the following filter:

    func (p *Pipe) JQ(query string) *Pipe
    

    An example script:

    script.Echo('{"foo": 128}'').JQ(".foo").Stdout()
    

    should return "128" on stdout.

    Or to show the IP addresses of the local interface lo a shell user will write:

    ip -j a show  | jq '.[] | select(.ifname=="lo") | .addr_info[].local'
    

    With an jq filter in script we can write:

     script.Exec("ip -j a show").JQ('.[] | select(.ifname=="lo") | .addr_info[].local').Stdout)
    

    The same slicing and dicing of data can be done in pure go code, but the jq language provides a well documented and known DSL for this problem.

    opened by hikhvar 13
  • Provide HTTP sink and source

    Provide HTTP sink and source

    In todays scripting we often have to interact with HTTP APIs. Many APIs provide native Go SDKs, but there are lesser known APIs around without a proper SDK. We should provide a facility to provide similar HTTP capabilities to curl into the go scripting environment. Rolling a own HTTP client in go is not that hard, but it requires a lot of repetitive error checking. Today I wrote such a small script with github.com/bitfield/script and I missed build in HTTP capabilities alot.

    I have seen https://github.com/bitfield/script/pull/3. This PR only implements a GET function. Interacting with even the simplest REST APIs requires other methods like POST or PUT. Many APIs also require us to set custom headers.

    Therefore I propose the following sources:

    // HTTPClient is an interface that allows the user to plug alternative HTTP clients into the source.
    // The HTTPClient interface is a subset of the methods provided by http.Client.
    // We use our own interface with a minimal surface to make it easy to implement customized clients.
    // Customized clients are needed for features like OAuth2 or TLS client certificate authentication.
    type HTTPClient interface {
    	Do(r *http.Request) (*http.Response, error)
    }
    
    // HTTP executes the given HTTP request with the default client from the http package. The response
    // is processed by `process`. If process is nil, the default process function copies the body from the response to the pipe.
    func HTTP(req *http.Request, process func(*http.Response) (io.Reader, error)) *Pipe {
    	return HTTPWithClient(http.DefaultClient, req, process)
    }
    
    // HTTPWithClient executes the given HTTP request with the given HTTPClient. The response
    // is processed by `process`. If process is nil, the default process function copies the body from the response to the pipe.
    func HTTPWithClient(client HTTPClient, req *http.Request, process func(*http.Response) (io.Reader, error)) *Pipe {
    }
    

    And the following sink:

    // HTTPRequest creates a new HTTP request with the body set to the content of the pipe.
    func (p *Pipe) HTTPRequest(method string, url string) (*http.Request, error) 
    

    This allows us to simply post the content of files to a remote URL like this:

    func main() {
    	req, err := script.Args().Concat().HTTPRequest(http.MethodPost, "https://httpbin.org/post")
    	if err != nil {
    		log.Fatal(err)
    	}
    	script.HTTP(req, nil).Stdout()
    }
    

    This is the equivalent of posting the contents of files via curl to a POST endpoint.

    I explored this design in PR #72. I'm open to feedback.

    opened by hikhvar 13
  • calc SHA-256 hash of file

    calc SHA-256 hash of file

    Add a CheckSum() sink function that returns the checksum of a file (https://github.com/bitfield/script/issues/39).

    The checksum is a SHA-256 hash of the file, returned as a hex string for readability.

    opened by thomaspoignant 12
  • Stream pipes

    Stream pipes

    I really like the idea of this library; it could make a lot of scripting tasks easy!

    The current implementation reads all the output of each command into memory, then passes it to the next command, and so forth.

    For example:

    It would be nice to avoid that when it's not necessary: let each reader read as much as needed from the previous stage and output whatever is necessary to the next stage. This is also how pipes in bash work.

    opened by posener 12
  • Execute a single command where single quotes are needed.

    Execute a single command where single quotes are needed.

    This is in response to issue #32, where the user wanted to run a command specifying the shell.

    e.g. bash -c 'echo testing'.

    This fix will not handle a command with a trailing && at this time, but after looking at the README it seems that may not be the proper use case anyway.

    e.g. bash -c 'echo testing' && echo again

    opened by jrswab 12
  • String method returns a trailing newline after Exec

    String method returns a trailing newline after Exec

    Calling String on an Exec pipe returns a string ending with a newline (i.e. \n).

    Example:

    out, err := script.Exec("echo hello").String()
    if err != nil {
    	log.Fatal(err)
    }
    
    fmt.Printf("%q\n", out)
    

    It prints: "hello\n"

    Is this intended, or shouldn't the \n be there?

    If I'm not mistaken, the issue is that os/exec.Command returns it. Run this example:

    package main

    import (
    	"bytes"
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	out := &bytes.Buffer{}

    	cmd := exec.Command("echo", "hello")
    	cmd.Stdout = out

    	if err := cmd.Start(); err != nil {
    		log.Fatal(err)
    	}
    	if err := cmd.Wait(); err != nil {
    		log.Fatal(err)
    	}

    	fmt.Printf("%q\n", out.String())
    }
    

    And you see that it prints: "hello\n"

    opened by ifraixedes 2
  • Exec as argument of next Exec calls

    Exec as argument of next Exec calls

    While writing #159, I thought it would be good to have a way to use the stdout of a command executed with Exec as a parameter of a following Exec call.

    To be even more useful, it could support several Exec calls. And to go even further (sorry, I'm blowing my own mind), the outputs could also be used as values to substitute into a format string.

    NOTE: I'm not familiar with the script implementation, so I don't know whether this is feasible, or how hard it would be to implement.

    Let me write some examples of a hypothetical API for each of the above wacky ideas.

    Right now, script allows us to do this:

    script.Exec("echo hello world").Exec("cut -d ' ' -f 1").Output()
    

    This translates to the shell command:

    echo hello world | cut -d ' ' -f 1
    

    Below are shell examples that I don't think can be translated to a script pipeline as above; each numbered item relates, in order, to the three ideas at the beginning of this issue. To keep things concise, they are illustrative examples rather than a specific use case of mine.

    1.  ls $(lsb_release -cs)
      
    2.  ls $(lsb_release -cs) $(dpkg --print-architecture)
      
    3.  echo  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
      

    This isn't currently possible in script without executing each command separately. I'll show how script could offer it with some methods that don't exist yet, without claiming that the names are good or that it's even a good idea to have them.

    1. There should be some way to indicate that the output of the previous Exec is passed as an argument to the following Exec call:
    script.Exec("lsb_release -cs").AsArg().Exec("ls").Output()
    
    2. The same as 1, but with the possibility to chain several Exec calls:
    script.Exec("lsb_release -cs").AsArg().Exec("dpkg --print-architecture").AsArg().Exec("ls").Output()
    
    3. There should be some way to indicate that the outputs are kept as internal values that can be referenced in a format string:
    script.Exec("lsb_release -cs").AsFmt().Exec("dpkg --print-architecture").AsFmt().Exec(`echo "deb [arch=@{1} signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian @{2} stable"`).Output()
    

    In this case I used echo inside an Exec, but it could be extended to every source function that receives a string parameter, hence it could also be done as:

    script.Exec("lsb_release -cs").AsFmt().Exec("dpkg --print-architecture").AsFmt().Echo("deb [arch=@{1} signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian @{2} stable").Output()
    
    opened by ifraixedes 2
  • Allow formatted strings to all the sources functions that receive a string

    Allow formatted strings to all the sources functions that receive a string

    I wondered if all the source functions that receive a string parameter could accept a format string and variadic parameters, as the standard fmt package does.

    My use case was the following

    echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    

    To do that with script, you have to execute each subcommand and do the string interpolation manually. NOTE: to keep it concise, this example doesn't illustrate thorough error handling.

    arch, err := script.Exec("dpkg --print-architecture").String()
    if err != nil {
    	log.Fatal(err)
    }
    
    versionName, err := script.Exec("lsb_release -cs").String()
    if err != nil {
    	log.Fatal(err)
    }
    
    _, err = script.Echo(fmt.Sprintf("deb [arch=%s signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian %s stable", arch, versionName)).Exec("sudo tee /etc/apt/sources.list.d/docker.list").String()
    if err != nil {
    	log.Fatal(err)
    }
    

    If Echo accepted parameters the way fmt.Sprintf does, the last pipeline would be:

    _, err = script.Echo("deb [arch=%s signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian %s stable", arch, versionName).Exec("sudo tee /etc/apt/sources.list.d/docker.list").String()
    if err != nil {
    	log.Fatal(err)
    }
    

    It doesn't reduce the code that much, but it shouldn't hurt, and in the end the variadic parameters are optional.

    My use case only involved the Echo function, but I thought the other source functions could benefit from the same too.

    opened by ifraixedes 2
  • MustString, MustStdout, etc

    MustString, MustStdout, etc

    While using script, I think it would make things easier if we had MustString(), MustStdout(), and similar constructs. In prototyping, my team and I are using a simple must.OK() / must.OkOne() package to keep the code terse. Currently, we end up writing a lot of:

    goVersion := must.OkOne(script.Get("https://go.dev/VERSION?m=text").String())
    

    Which would be much more elegant as:

    goVersion := script.Get("https://go.dev/VERSION?m=text").MustString()
    

    The same applies to the other sinks.

    opened by oderwat 3
  • Unable to automatically enter password through pipeline

    Unable to automatically enter password through pipeline

    $ cat demos.go
    package main

    import (
    	"gitee.com/liumou_site/logger"

    	"github.com/bitfield/script"
    )

    func main() {
    	// file := "/tts"
    	pd := "1"
    	c := "sudo -S rm -f /testing"
    	res, err := script.Args().Echo(pd).ExecForEach(c).Stdout()
    	logger.Debug("Exit code: ", res)
    	if err != nil {
    		logger.Error("Delete failed: ", err)
    	} else {
    		logger.Info("Deletion succeeded")
    	}
    }

    $ go run demos.go
    请输入密码 (please enter password)
    sudo: no password was provided
    exit status 1
    2022-12-09 00:10:59 [DEBG] [/home/liumou/LinuxData/git/golang/dmeo/demos.go:13] Exit code: 61
    2022-12-09 00:10:59 [INFO] [/home/liumou/LinuxData/git/golang/dmeo/demos.go:17] Deletion succeeded

    $ cat demo.sh
    #!/bin/bash
    file = "/tts"
    password = "demo"
    cmd = "rm -f ${file}"
    echo $password | sudo -S $cmd

    How can I replicate what the shell script does, i.e. feed the password to sudo through the pipeline?

    opened by liumou-site 1