You had one job, or more than one, which can be done in steps

Overview

Leprechaun


Leprechaun is a tool for scheduling recurring tasks to be performed over and over.

In Leprechaun, tasks are called recipes. Let's look at a simple recipe file, written in YAML syntax.

Recipe files live in the recipes directory, which can be specified in the configs.ini configuration file. For all possible settings, take a look here

By definition there are three types of recipes: ones that are scheduled, ones that are hooked, and ones that use a cron pattern for scheduling jobs. They are similar regarding steps but differ slightly in definition.

First we will talk about scheduled recipes, which are defined like this:

name: job1 # name of the recipe
definition: schedule # recipe type
schedule:
	min: 0 # every minute
	hour: 0 # every hour
	day: 0 # every day
steps: # steps are performed from first to last
	- touch ./test.txt
	- echo "Is this working?" > ./test.txt
	- mv ./test.txt ./imwondering.txt

If we set something like this:

schedule:
	min: 10 # every 10 minutes
	hour: 2 # every 2 hours
	day: 2 # every 2 days

The task will run every 2 days, 2 hours, and 10 minutes; if we set just days to 0, it will run every 2 hours and 10 minutes.

Hooked recipes are defined like this:

name: job2 # name of the recipe
definition: hook # recipe type
id: 45DE2239F # id used to find the recipe
steps:
	- echo "Hooked!" > ./hook.txt

A hooked recipe is run by sending a request to {host}:{port}/hook?id={id_of_recipe}, where the Leprechaun server is listening; for example, localhost:11400/hook?id=45DE2239F.
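
Using the example values above, triggering the hook could be sketched like this in a shell (the host, port, and recipe id are the illustrative ones from this document; the curl call is commented out since it assumes a running server):

```shell
# Build the hook URL from the recipe id; host and port are the example values.
HOST="localhost"
PORT="11400"
RECIPE_ID="45DE2239F"
URL="http://${HOST}:${PORT}/hook?id=${RECIPE_ID}"
echo "$URL"
# Trigger the hook (uncomment once a Leprechaun server is running):
# curl "$URL"
```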

Recipes that use a cron pattern to schedule tasks are defined like this:

name: job3 # name of the recipe
definition: cron # recipe type
pattern: "* * * * *"
steps: # steps are performed from first to last
	- touch ./test.txt
	- echo "Is this working?" > ./test.txt
	- mv ./test.txt ./imwondering.txt
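
The five fields of the pattern are minute, hour, day of month, month, and day of week. For instance, a recipe that runs every five minutes might look like this (a sketch; the recipe name and step are illustrative, not from the source):

```yaml
name: job4                 # illustrative name
definition: cron
pattern: "*/5 * * * *"     # minute hour day-of-month month day-of-week
steps:
	- echo "runs every five minutes" >> ./cron.txt
```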

Steps also support variables, with the syntax $variable; these are environment variables, e.g. $LOGNAME, which is available in steps as $LOGNAME. We can now rewrite our job file to look something like this:

name: job1 # name of the recipe
definition: schedule
schedule:
	min: 0 # every minute
	hour: 0 # every hour
	day: 0 # every day
steps: # steps are performed from first to last
	- echo "Is this working?" > $LOGNAME
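
The effect is analogous to ordinary shell variable expansion (a minimal sketch; the value assigned to $LOGNAME here is illustrative, for demonstration only):

```shell
# The step `echo "Is this working?" > $LOGNAME` expands $LOGNAME
# before running, just as a shell would:
LOGNAME="./testfile.txt"          # illustrative value for demonstration
echo "Is this working?" > "$LOGNAME"
cat "$LOGNAME"
```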

Usage is very straightforward: just start the client, and it will run the recipes you defined previously.

Steps can also be marked as async tasks with ->. Keep in mind that steps in a recipe are performed on a linear path, so a step that is not async can block the ones after it. Take this as an example:

steps:
- -> ping google.com
- echo "I don't need to wait for the step above; it is async, so I start immediately"

In the next case, however, the first step blocks execution, and every other step hangs waiting for it to finish:

steps:
- ping google.com
- -> echo "I had to wait for the step above to finish; now I can do my stuff"
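
The sync/async behaviour is analogous to backgrounding a command in a shell, with & standing in for -> (a sketch; sleep stands in for the ping step):

```shell
# An "async" step backgrounded with &; the next command starts immediately.
sleep 1 &                         # stands in for `-> ping google.com`
MSG="started without waiting for the async step"
echo "$MSG"
wait                              # reap the background step before exiting
```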

Step Pipe

Output from one step can be passed to input of next step:

name: job1 # name of the recipe
definition: schedule
schedule:
	min: 0 # every minute
	hour: 0 # every hour
	day: 0 # every day
steps: # steps are performed from first to last
	- echo "Pipe this to next step" }>
	- cat > piped.txt

As you can see, the first step ends with the }> syntax, which tells Leprechaun to pass this command's output to the next command's input. You can chain steps like this as much as you want.
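
The }> chaining behaves like a shell pipe; the same two steps expressed as a single shell pipeline (shown for illustration) would be:

```shell
# stdout of the first command becomes stdin of the second,
# which writes it to a file.
echo "Pipe this to next step" | cat > piped.txt
cat piped.txt
```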

Step Failure

Since steps are executed linearly, workers don't care if some command fails; they continue with execution, though you get notifications if you have configured them. If you want workers to stop executing the remaining steps when a command fails, mark it with ! as in this example:

name: job1 # name of the recipe
definition: schedule
schedule:
	min: 0 # every minute
	hour: 0 # every hour
	day: 0 # every day
steps: # steps are performed from first to last
	- ! echo "Pipe this to next step" }>
	- cat > piped.txt

If the first step fails, the recipe fails and none of the remaining steps are executed.
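
The fail-fast behaviour resembles chaining shell commands with && (a sketch; false stands in for a failing step):

```shell
# When a step fails, the rest of the chain is short-circuited,
# as `!` does for the remaining steps of a recipe.
false && echo "this step is never reached"
CHAIN_STATUS=$?                   # 1: the chain stopped at the failing step
echo "chain stopped with status $CHAIN_STATUS"
```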

Remote step execution

Steps are handled by your local machine when using the regular syntax. If you need a specific step to be executed by some remote machine, you can specify that in the step, as in the example below. The syntax is rmt:some_host; Leprechaun will communicate with the remote service configured on that host and run the command there.

name: job1 # name of the recipe
definition: schedule
schedule:
	min: 0 # every minute
	hour: 0 # every hour
	day: 0 # every day
steps: # steps are performed from first to last
	- rmt:some_host echo "Pipe this to next step"

Note that, like a regular step, a remote step can also pipe its output to the next step, so something like this is possible as well:

steps: # steps are performed from first to last
	- rmt:some_host echo "Pipe this to next step" }>
	- rmt:some_other_host grep -a "Pipe" }>
	- cat > stored.txt
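
Conceptually this is like piping the output of a command run over ssh into a local command (a sketch; some_host is a placeholder from the examples above, so the remote hop is simulated locally here):

```shell
# With a real host, the first step would run remotely, e.g.:
#   ssh some_host 'echo "Pipe this to next step"' | grep "Pipe" > stored.txt
# Simulated locally, since some_host is hypothetical:
echo "Pipe this to next step" | grep "Pipe" > stored.txt
cat stored.txt
```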

Installation

Go to the leprechaun directory and run make install; you will need sudo privileges for this. This installs the scheduler, cron, and webhook services.

To install the remote service, run make install-remote-service; this creates the leprechaunrmt binary.

Build

Go to the leprechaun directory and run make build. This builds the scheduler, cron, and webhook services.

To build the remote service, run make build-remote-service; this creates the leprechaunrmt binary.

Starting/Stopping services

To start Leprechaun, simply run it in the background like this: leprechaun &

For more available commands run leprechaun --help

Lepretools

For CLI tools, take a look here

Testing

To run tests with coverage, run make test. To run tests and generate reports, run make test-with-report; files are generated in the coverprofile directory. To test a specific package, run make test-package package=[name]

Issues
  • Very good design, but can you give us a more detailed document?


    @kilgaloon is there any detailed document for a quick start? I think the reason so few people have starred it is that Leprechaun is not friendly in teaching users how to build a scheduling system with it; the README is far from enough. I expect to see the documentation as soon as possible, because I like Leprechaun.

    opened by zppro 5
  • add systemd unit file


    I have added a very simple systemd unit file; please advise where you would like it located in the source tree, as I have put it directly into the root for now.

    pc:~ # systemctl start leprechaun.service 
    pc:~ # systemctl status leprechaun.service 
    ● leprechaun.service - Leprechaun service
       Loaded: loaded (/etc/systemd/system/leprechaun.service; disabled; vendor preset: disabled)
       Active: active (running) since Tue 2018-10-30 19:16:02 CET; 1s ago
     Main PID: 21163 (leprechaun)
        Tasks: 11 (limit: 4915)
       CGroup: /system.slice/leprechaun.service
               └─21163 /usr/bin/leprechaun
    
    Oct 30 19:16:02 pc.djule.org systemd[1]: Started Leprechaun service.
    pc:~ # 
    pc:~ # 
    
    opened by bmanojlovic 5
  • Create help command to return available leprechaun commands and explanations


    socket.Registrator now has a Command method where we specify commands for our app; extend command definition to include a description of how to use each specific command.

    This will be used to display each command and its description when the help command is passed

    enhancement help wanted good first issue 1.0.0-alpha client/cli 
    opened by kilgaloon 2
  • Reportcard issues


    Resolve the issues reported by the report card. Race conditions were detected; consider changing the tests to go test -race ./client -coverprofile=./client/coverage.txt -covermode=atomic

    opened by cassiobotaro 1
  • Task priority


    Prioritize tasks with the following specification.

    If all workers are occupied and a high-priority task is waiting in the queue, stop the lowest-priority running task, move it back to the queue, and wait for an available worker slot to resume it.

    A started task can't be paused (needs more investigation); it can be killed and restarted.

    enhancement help wanted 
    opened by kilgaloon 1
  • Recipe lock file should hold information about that recipe


    Use YAML format inside the *.lock file.

    Information stored in the file:

    • [ ] Started
    • [ ] Current step
    • [ ] Errors
    • [ ] ErrorMessage (empty if Errors != true)
    • [ ] Finished (If finished properly)

    If we implement this, we will know whether a recipe finished properly; next time, we will know which recipes to push to the queue and which not.

    This may require changing the recipe-locking functionality.

    Issues that can be affected: #30 #31

    enhancement not backwards compatible client/recipes 
    opened by kilgaloon 1
  • Introduce new step syntax for failed command


    Possible new marker for this should be "!"

    This can only apply to sync steps.

    ex:

    steps:
        - ! cmd // this cmd will fail for some reason
        - another_cmd // this won't be executed because first one failed
    
    enhancement good first issue 
    opened by kilgaloon 0
  • api.Resolver should be more dynamic


    api.Resolver should build the endpoint from the information available in the command.

    Currently the resolver matches available commands against methods, which is not extensible; the new resolver should be able to build the endpoint on its own from the provided command and handle it properly.

    enhancement 
    opened by kilgaloon 0
  • Steps can be executed by remote hosts


    The general idea is that different hosts can execute different steps, sync or async. If some very heavy calculation is needed, you can run that command on a remote host.

    Steps may look something like this:

    • remote("calculate something and return me output") }>
    • "i take previous output and do something" }>
    • anotherRemote("Do something again") }>
    • "save that here"

    }> is newly introduced syntax for piping output from one step to another

    enhancement branch:development 
    opened by kilgaloon 0
  • When ini_path flag isn't default for every --cmd you need to provide --ini_path


    We need to open a default port on which the agent will listen and provide basic information about the running process through that port.

    When the user runs the binary and passes the --cmd flag, check whether the agent's default port is open; if it is, get info about the agent and execute the command, otherwise print an error saying the agent needs to be started first.

    enhancement nice to have core 
    opened by kilgaloon 0
  • Using Cobra/Viper


    Hi... I was reading the project and realized that the command-line parsing and configuration code lives in the daemon/main.go file. What do you think about separating the daemon from the project bootstrapping logic and using packages like github.com/spf13/{cobra,viper}? Thanks

    enhancement 2.0.0 
    opened by Hu13er 1
  • Command to read error log and tail error log


    Introduce a new command that reads/tails the error/info log

    Possible read commands:

    • leprechaun error-log
    • leprechaun info-log

    Possible tail commands:

    • leprechaun tail-error-log
    • leprechaun tail-info-log

    These commands will try to read the global error and info logs if they are specified as global. Every service can have its own error and info log, so we should include default commands in the agent for this:

    • leprechaun --cmd="{agent} error-log"
    • leprechaun --cmd="{agent} info-log"
    • leprechaun --cmd="{agent} tail-error-log"
    • leprechaun --cmd="{agent} tail-info-log"
    enhancement help wanted good first issue hacktoberfest 
    opened by kilgaloon 3
Releases: 1.5.1
Owner: Strahinja