LiveKit - Open source, distributed video/audio rooms over WebRTC


LiveKit is an open source project that provides scalable, multi-user conferencing over WebRTC. It's designed to give you everything you need to build real-time video/audio capabilities into your applications.


  • Horizontally scalable WebRTC Selective Forwarding Unit (SFU)
  • Modern, full-featured client SDKs for JS, iOS, Android
  • Built for production - JWT authentication and server APIs
  • Robust networking & connectivity, over UDP & TCP
  • Easy to deploy - pure Go & single binary
  • Advanced features - speaker detection, simulcasting, selective subscription, moderation APIs

Documentation & Guides

Docs and guides are available on the LiveKit documentation site.

Try it live

Head to our playground and give it a spin. Build a Zoom-like conferencing app in under 100 lines of code!

SDKs & Tools

Client SDKs:

Server SDKs:



From source


Pre-requisites:

  • Go 1.15+ is installed
  • GOPATH/bin is in your PATH
  • protoc is installed and in your PATH

Then run

git clone https://github.com/livekit/livekit-server
cd livekit-server


Docker

LiveKit is published to Docker Hub under livekit/livekit-server.


Creating API keys

LiveKit uses JWT-based access tokens for authentication to all of its APIs. Because of this, the server needs a list of valid API keys and secrets in order to validate the provided tokens. For more, see the Access Tokens guide.

Generate API key/secret pairs with:

./bin/livekit-server generate-keys


or, with Docker:

docker run --rm livekit/livekit-server generate-keys

Store the generated keys in a YAML file like:

APIwLeah7g4fuLYDYAJeaKsSE: 8nTlwISkb-63DPP7OH4e.nw.J44JjicvZDiz8J59EoQ+

Starting the server

In development mode, LiveKit has no external dependencies. You can start LiveKit by passing it the keys it should use via the LIVEKIT_KEYS environment variable. LiveKit can also read its configuration from a config file (--config) or from the LIVEKIT_CONFIG environment variable.

LIVEKIT_KEYS="<api-key>: <api-secret>" ./bin/livekit-server --dev
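
For example, the same key/secret pair shown earlier could live in a config file passed via --config. This is a minimal sketch; consult the configuration reference for the full set of fields, as field names beyond port and keys may differ:

```yaml
# livekit.yaml - minimal development sketch (field names to be verified)
port: 7880
keys:
  APIwLeah7g4fuLYDYAJeaKsSE: 8nTlwISkb-63DPP7OH4e.nw.J44JjicvZDiz8J59EoQ+
```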


docker run --rm \
  -p 7880:7880 \
  -p 7881:7881 \
  -p 7882:7882/udp \
  -e LIVEKIT_KEYS="<api-key>: <api-secret>" \
  livekit/livekit-server \
  --dev \
  --node-ip=<machine-ip>
When running with docker, --node-ip needs to be set to your machine's local IP address.

The --dev flag increases log verbosity to make local debugging and development easier.

Creating a JWT token

To create a join token for clients, livekit-server provides a convenient subcommand to create a development token. This token has an expiration of a month, which is useful for development & testing, but not appropriate for production use.

./bin/livekit-server --key-file <path/to/keyfile> create-join-token --room "myroom" --identity "myidentity"

Sample client

To test your server, you can use our example web client (built with our React component)

Enter the generated access token and you're connected to a room!

Deploying for production

LiveKit is deployable to any environment that supports docker, including Kubernetes and Amazon ECS.

See the deployment docs for details.


Contributing

We welcome your contributions to make LiveKit better! Please join us on Slack to discuss your ideas and/or submit PRs.


License

LiveKit server is licensed under Apache License v2.0.

  • Client is missing published tracks in certain conditions

Describe the bug: @bekriebel reported this in Slack. When users are joining the room at the same time from different regions (where latency is a bigger concern), clients sometimes report TrackSubscriptionFailure events with the following logs:

    could not find published track PA_8qxvzPbj3G3R TR_QKFBfQyMBW6Q
    addSubscribedMediaTrack @ RemoteParticipant.js?f400:71
    eval @ RemoteParticipant.js?f400:78
    setTimeout (async)
    addSubscribedMediaTrack @ RemoteParticipant.js?f400:77
    eval @ RemoteParticipant.js?f400:78
    setTimeout (async)
    addSubscribedMediaTrack @ RemoteParticipant.js?f400:77
    eval @ RemoteParticipant.js?f400:78

According to @bekriebel, he's able to reproduce this if the server instance is located on a node far away from him (Frankfurt to Seattle).


    Server:
    • Version: 0.13.6

    Client:
    • SDK: JS
    • Version: 0.13.6
    opened by davidzhao 31
  • Data messages not sending in subscribe-only mode

    In my case, I wanted users to be able to join only as participants, without sharing a microphone or webcam - a subscribe-only/listen-only mode. But in that mode, data messages aren't sent. I'm getting this error in my JS client:

    Uncaught (in promise) DOMException: An attempt was made to use an object that is not, or is no longer, usable

    Am I misunderstanding something? If that user connects with a webcam, then messages are sent as expected.

    opened by jibon57 15
  • Support TURN server without TLS cert

    Hi, we're trying to understand why the TLS cert is required for the TURN server.

    We're considering deploying to DigitalOcean with k8s and their load balancer. The load balancer will terminate SSL before traffic reaches the cluster.

    However, going by the Helm chart and the server source code, it seems the TLS cert is required. Is there any way to run TURN without the cert?

    opened by alvinthen 15
  • High activity rooms cause `"error": "channel is full"` errors


    We had a live call with over 90 people. Things were going fine early on, but then we noticed users weren't able to publish video/audio, and suddenly some users who did publish would come through as a black screen. The error we were seeing over and over in the server looks like this:

    Apr 2, 2021 @ 14:27:11  2021-04-02T18:27:10.859Z  ERROR  routing/redisrouter.go:311  error processing signal message  {"error": "channel is full"}
        (*RedisRouter).redisWorker
            /workspace/pkg/routing/redisrouter.go:311
    (the same error repeats many times per second)

    It looks like the channel used for the socket is filling up; once that happens, the connected user doesn't receive any new events and can't publish any new events either.
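
    The failure mode described here is the classic non-blocking send on a full buffered channel. A minimal sketch of the pattern, using hypothetical names rather than the actual router code:

```go
package main

import (
	"errors"
	"fmt"
)

// errChannelFull mirrors the error string seen in the logs.
var errChannelFull = errors.New("channel is full")

// writeMessage mimics a router's non-blocking send: rather than blocking
// the worker when a per-connection buffer is full, the message is dropped
// and an error is surfaced - which is what the logs show.
func writeMessage(ch chan string, msg string) error {
	select {
	case ch <- msg:
		return nil
	default:
		return errChannelFull
	}
}

func main() {
	ch := make(chan string, 2) // deliberately tiny buffer for illustration
	for i := 0; i < 4; i++ {
		if err := writeMessage(ch, fmt.Sprintf("signal-%d", i)); err != nil {
			fmt.Printf("error processing signal message {\"error\": %q}\n", err)
		}
	}
}
```

    Once the consumer stops draining the channel, every subsequent send hits the default branch, so the participant silently stops receiving events unless the buffer is drained or enlarged.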

    opened by atbe 12
  • Quickly muting and unmuting an audio track adds a 1-2 second delay to the audio

    Describe the bug: When an audio track is disabled and re-enabled quickly, it will often cause a delay to be introduced into the audio track. The delay is a noticeable 1-2 seconds, and it causes the audio track to be out of sync with the video track if both are published. If multiple clients are connected, the delay often arises in all of the connected clients, but not always with the same amount of delay. If either the publisher or subscriber disconnects and reconnects, the delay is fixed.


    Server:
    • Version: 0.14.2
    • Environment: Docker image on

    Client:
    • SDK: js
    • Version: 0.14.3

    To Reproduce: Steps to reproduce the behavior:

    1. Connect two clients to a room using the client-js sample app. This is easiest seen in two windows on one machine, but reproduces across multiple machines as well. It is best to use a low-latency server so normal audio has minimal delay.
    2. Mute the audio of one client so you are only testing from a single source, referred to from here on out as the sender.
    3. Notice that the receiver has minimal delay and the audio is in sync with the video stream (I tested by snapping my fingers for a good indicator).
    4. Use the Disable Audio button to mute the track on the sender, wait a few seconds, and re-enable the audio. Note that no delay is introduced.
    5. Disable the audio track and then quickly enable it again (double tap the button)
    6. Check the audio delay on the receiver; often a 1-2 second delay will be introduced.

    It may take multiple tries of step 5 to reproduce the issue. If I tap the button 10 times in quick succession (5 mute/unmute cycles), it reproduces for me almost every time. It's possible that the issue also reproduces when not muting and unmuting quickly, but I was not able to observe this.

    Expected behavior: Audio delay should not be introduced by disabling/enabling the audio track. Audio and video should remain in sync.

    Screenshots: N/A

    Additional context: It's possible this is a client-js bug and not livekit-server, but I don't have a good setup to test this at the moment. I did not notice anything relevant in the client- or server-side logs.

    opened by bekriebel 11
  • Lingering connections even after a client socket is disconnected

    I'm not sure what to include for this when it comes to logs, because I can't pin it down to a specific error.

    When there is a flood of new connections, some of the participants never get cleaned up when their sockets disconnect.

    I ran this command to see how many connections were open on the server:

    # cat /proc/net/tcp | wc -l

    One of the rooms I'm currently in reports 926 participants, and the redis records for all the disconnected users are still there.

    This causes a miscount of how many participants are actually in the call, which cannot be corrected without manually clearing redis or starting a new room.

    Please let me know if there's something I can include to make this bug report more clear. I may be able to reproduce it on the livekit sample if I spam the server with lots of connections all at once.

    The only thing I see in the logs repeatedly is

    2021-04-05T03:51:51.950Z  ERROR  routing/redisrouter.go:317  error processing signal message  {"error": "channel is full"}  (*RedisRouter).redisWorker
    2021-04-05T03:51:51.950Z  ERROR  routing/redisrouter.go:317  error processing signal message  {"error": "channel is full"}  (*RedisRouter).redisWorker

    when this occurred

    opened by atbe 11
  • Sometimes clients will be stuck on the `connecting to wss://SERVER_URL/rtc?access_token=` message

    I'm not sure what logs to provide for this because it happens even on a fresh server boot. Here are the logs:

    2021-02-18T10:17:13.589Z        INFO    server/main.go:145      configured key provider {"num_keys": 1}
    2021-02-18T10:17:13.711Z        INFO    server/main.go:178      using multi-node routing via redis     {"address": "livespot-lk-redis-production:6379"}
    2021-02-18T10:17:13.714Z        INFO    service/roommanager.go:88       deleting room state     {"room": "2c035db5-8c1e-47df-903f-03e245b496fc"}
    2021-02-18T10:17:13.716Z        INFO    service/roommanager.go:88       deleting room state     {"room": "07fb1453-21ca-4cd8-ba51-28c5775ec56d"}
    2021-02-18T10:17:13.718Z        DEBUG   routing/redisrouter.go:288      starting redisWorker    {"node": "ND_uduvfI4D"}
    2021-02-18T10:17:13.718Z        INFO    service/server.go:110   starting LiveKit server {"address": ":7880", "nodeId": "ND_uduvfI4D", "version": "0.5.1"}
    2021-02-18T13:29:59.805Z        INFO    service/rtcservice.go:113       new client WS connected {"connectionId": "CO_YdqJFkYLMt8M", "room": "RM_XHgJhgiy8eFS", "roomName": "8fb5bc1e-1918-4240-9e05-b1c01420152c", "name": "0xFFC80bd2A413f37E125D39C281Cc85B88dcebF20"}
    2021-02-18T13:30:11.927Z        INFO    service/rtcservice.go:97        WS connection closed    {"participant": "0xFFC80bd2A413f37E125D39C281Cc85B88dcebF20", "connectionId": "CO_YdqJFkYLMt8M"}
    2021-02-18T13:30:15.610Z        INFO    service/rtcservice.go:113       new client WS connected {"connectionId": "CO_C2rQZ7M839Yn", "room": "RM_XHgJhgiy8eFS", "roomName": "8fb5bc1e-1918-4240-9e05-b1c01420152c", "name": "0xFFC80bd2A413f37E125D39C281Cc85B88dcebF20"}

    On the client side all we see is:

    connecting to wss://SERVER_URL/rtc?access_token=

    And in the network tab: (screenshot not preserved)

    The only solution we've found is to reboot the livekit-server.

    opened by atbe 10
  • Runtime error: Use of closed socket

    Another one of these popped up today:

    2021-02-15T02:37:49.607Z        ERROR   service/rtcservice.go:131       source closed connection       {"participant": "0x46efc301B793a0d8C2999B11d8Bad43D1b4c4E8F"}*RTCService).ServeHTTP.func2
    2021-02-15T02:37:49.607Z        ERROR   service/rtcservice.go:157       error reading from websocket   {"error": "read tcp> use of closed network connection"}*RTCService).ServeHTTP
            /go/pkg/mod/[email protected]/negroni.go:46
            /go/pkg/mod/[email protected]/negroni.go:29
            /go/pkg/mod/[email protected]/negroni.go:38
            /go/pkg/mod/[email protected]/negroni.go:38*Recovery).ServeHTTP
            /go/pkg/mod/[email protected]/recovery.go:193
            /go/pkg/mod/[email protected]/negroni.go:38*Negroni).ServeHTTP
            /go/pkg/mod/[email protected]/negroni.go:96
    2021-02-15T02:37:49.607Z        INFO    service/rtcservice.go:95        WS connection closed    {"participant": "0x46efc301B793a0d8C2999B11d8Bad43D1b4c4E8F"}
    opened by atbe 10
  • Cannot connect to insecure Websocket

    I am trying to set up a server to test this and use your frontend example before I build my own, but every time I try to connect I get an error saying it cannot connect to an insecure websocket from https. I then set up a reverse proxy and secured the backend, but it still gives me the same issue! Any advice?

    opened by rjahrj 9
  • [Feature Request] metadata for rooms


    When reading through the docs I came across this section in "Working with data":

    LiveKit participants have a metadata field that you could use to store application-specific data. For example, this could be the seat number around a table, or other state.

    Maybe I misunderstood the use case here, but wouldn't something like a seat number rather be data that only needs to be updated on one entity, namely the room itself, instead of on each of the participants?

    I have a need to display state to the user that is room-specific, or rather exactly the same for all participants of a room, for example the current number of participants in other rooms. Updating room-specific state on each participant seems highly redundant, so I was wondering if there could be a (server-)API for changing a metadata property on the whole room? Something like updateRoom in addition to the already existing updateParticipant.


    opened by lukasIO 9
  • Room stuck with `packetio.Buffer full, discarding write`

    Describe the bug: A user reported a freeze during a session. The following logs were printed.

    Feb 03 14:24:36 livekit[663]: 2022-02-03T14:24:36.375+0100        ERROR        livekit.mux        mux/mux.go:122        mux: ending readLoop dispatch error packetio.Buffer is full, discarding write
    Feb 03 14:24:36 livekit[663]:         {"room": "73741964-bd91-4652-803e-e1b8b7a293ef", "roomID": "RM_tNbUBZLRCCE2", "participant": "XYZ", "pID": "PA_vQ83QWbB3HVP"}
    Feb 03 14:24:36 livekit[663]:*Mux).readLoop
    Feb 03 14:24:36 livekit[663]:         /XYZ/go/pkg/mod/[email protected]/internal/mux/mux.go:122

    At that point, the subscriber could no longer receive video from XYZ, while XYZ could continue to subscribe to the other person's stream.

    This seems related to what we had seen on Firefox when it rejects packets due to SRTPReplayProtection. Perhaps that would be a good way to reproduce the underlying buffer issue.


    Server:
    • Version: 0.15.3

    Client:
    • SDK: JS / Chrome
    opened by davidzhao 7
  • Feature request: Option to reject outdated clients

    It would be nice to have a server-side option to reject clients that are out of date. Though it'd be great if this could be keyed on the client version itself, I imagine it would need to be done by protocol version instead, to avoid having to track each individual client SDK's version.

    Perhaps something like a MinClientSdk option that could be set and would return a connection error to any client that attempted to connect with an older version?

    opened by bekriebel 3
  • Current speakers not sent to new participants

    When a new participant joins while there are currently active speakers, it will not get those updates.

    Since speaker updates are sent as deltas, we should be updating new participants with the list of current active speakers.

    opened by davidzhao 0
  • Lots and lots of race conditions

    Describe the bug: Building and running the server with -race reveals a lot of race conditions.


    Server:
    • Version: 0.15.6
    • Environment: local dev

    Client:
    • SDK: flutter
    • Version: 0.5.6

    To Reproduce: Steps to reproduce the behavior:

    1. Build the server with -race
    2. Connect two clients to a room (one can be the Go server SDK)
    3. See errors

    Expected behavior: No race conditions should occur.

    Screenshots: Lots. E.g.

    Write at 0x00c005dda910 by goroutine 114:*MediaTrack).ToProto()
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/mediatrack.go:103 +0xa6*UpTrackManager).ToProto()
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/uptrackmanager.go:76 +0x1c1*ParticipantImpl).ToProto()
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/participant.go:360 +0x698
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/utils.go:51 +0xdd*Room).broadcastParticipantState()
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/room.go:705 +0x93*Room).onTrackPublished()
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/room.go:592 +0x84*Room).onTrackPublished-fm()
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/room.go:590 +0x6d*ParticipantImpl).handleTrackPublished()
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/participant.go:1405 +0xa3*ParticipantImpl).mediaTrackReceived()
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/participant.go:1393 +0x484*ParticipantImpl).onMediaTrack()
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/participant.go:920 +0xce*ParticipantImpl).onMediaTrack-fm()
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/participant.go:910 +0x4d*PeerConnection).onTrack·dwrap·70()
    /home/matthew/go/pkg/mod/[email protected]/peerconnection.go:459 +0x58
    Previous read at 0x00c005dda910 by goroutine 111:
    /nix/store/j8zd71jnc6r7lhh45jwk9ywygr4w68c9-go-1.17.8/share/go/src/reflect/value.go:285 +0x51
    /home/matthew/go/pkg/mod/[email protected]/internal/impl/message_reflect_field.go:286 +0x28f*messageState).Range()
    /home/matthew/go/pkg/mod/[email protected]/internal/impl/message_reflect_gen.go:48 +0x21e
    /home/matthew/go/pkg/mod/[email protected]/internal/order/range.go:50 +0x21a
    /home/matthew/go/pkg/mod/[email protected]/encoding/protojson/encode.go:223 +0x452
    /home/matthew/go/pkg/mod/[email protected]/encoding/protojson/encode.go:304 +0x6c8
    /home/matthew/go/pkg/mod/[email protected]/encoding/protojson/encode.go:248 +0x18f
    /home/matthew/go/pkg/mod/[email protected]/encoding/protojson/encode.go:232 +0x213
    /home/matthew/go/pkg/mod/[email protected]/internal/order/range.go:60 +0x3d9
    /home/matthew/go/pkg/mod/[email protected]/encoding/protojson/encode.go:223 +0x452
    /home/matthew/go/pkg/mod/[email protected]/encoding/protojson/encode.go:136 +0x1cb
    /home/matthew/go/pkg/mod/[email protected]/encoding/protojson/encode.go:110 +0xa4
    /home/matthew/go/pkg/mod/[email protected]/encoding/protojson/encode.go:39 +0xa5*notifier).Notify()
    /home/matthew/go/pkg/mod/[email protected]/webhook/notifier.go:50 +0x73*telemetryServiceInternal).notifyEvent.func1()
    /home/matthew/go/pkg/mod/[email protected]/pkg/telemetry/telemetryserviceinternalevents.go:278 +0x8f
    /home/matthew/go/pkg/mod/[email protected]/workerpool.go:243 +0x34·dwrap·6()
    /home/matthew/go/pkg/mod/[email protected]/workerpool.go:234 +0x39

    In general, there seem to be a lot involving how protobuf messages are constructed.

    Write at 0x00c005dda928 by goroutine 114:*MediaTrack).ToProto()
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/mediatrack.go:111 +0x217*MediaTrackSubscriptions).AddSubscriber()
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/mediatracksubscriptions.go:270 +0x153c*MediaTrackReceiver).AddSubscriber()
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/mediatrackreceiver.go:198 +0x3d4*MediaTrack).AddSubscriber()
    <autogenerated>:1 +0x77*UpTrackManager).AddSubscriber()
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/uptrackmanager.go:127 +0x67b*ParticipantImpl).AddSubscriber()
    <autogenerated>:1 +0xb9*Room).onTrackPublished()
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/room.go:615 +0x745*Room).onTrackPublished-fm()
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/room.go:590 +0x6d*ParticipantImpl).handleTrackPublished()
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/participant.go:1405 +0xa3*ParticipantImpl).mediaTrackReceived()
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/participant.go:1393 +0x484*ParticipantImpl).onMediaTrack()
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/participant.go:920 +0xce*ParticipantImpl).onMediaTrack-fm()
    /home/matthew/go/pkg/mod/[email protected]/pkg/rtc/participant.go:910 +0x4d*PeerConnection).onTrack·dwrap·70()
    /home/matthew/go/pkg/mod/[email protected]/peerconnection.go:459 +0x58
    Previous read at 0x00c005dda928 by goroutine 129:
    /home/matthew/go/pkg/mod/[email protected]/internal/impl/pointer_unsafe.go:119 +0x3f7*MessageInfo).marshalAppendPointer()
    /home/matthew/go/pkg/mod/[email protected]/internal/impl/encode.go:136 +0x3a9
    /home/matthew/go/pkg/mod/[email protected]/internal/impl/codec_field.go:485 +0x20e*MessageInfo).marshalAppendPointer()
    /home/matthew/go/pkg/mod/[email protected]/internal/impl/encode.go:139 +0x482
    /home/matthew/go/pkg/mod/[email protected]/internal/impl/codec_field.go:485 +0x20e*MessageInfo).marshalAppendPointer()
    /home/matthew/go/pkg/mod/[email protected]/internal/impl/encode.go:139 +0x482
    /home/matthew/go/pkg/mod/[email protected]/internal/impl/codec_field.go:238 +0x190*MessageInfo).initOneofFieldCoders.func4()
    /home/matthew/go/pkg/mod/[email protected]/internal/impl/codec_field.go:96 +0x105*MessageInfo).marshalAppendPointer()*MessageInfo).marshal()
    /home/matthew/go/pkg/mod/[email protected]/internal/impl/encode.go:107 +0xd0*MessageInfo).marshal-fm()
    /home/matthew/go/pkg/mod/[email protected]/internal/impl/encode.go:100 +0xd4
    /home/matthew/go/pkg/mod/[email protected]/proto/encode.go:163 +0x3b9
    /home/matthew/go/pkg/mod/[email protected]/proto/encode.go:79 +0x59*WSSignalConnection).WriteResponse()
    /home/matthew/go/pkg/mod/[email protected]/pkg/service/wsprotocol.go:84 +0x105*RTCService).ServeHTTP.func2()
    /home/matthew/go/pkg/mod/[email protected]/pkg/service/rtcservice.go:230 +0x4d4

    There's no shortage of them.

    Data races can lead to unexpected behaviour and very hard-to-debug problems. These should all be solved, most likely by judiciously adding mutexes or by using atomics.
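
    One common shape of fix, sketched below with hypothetical names rather than the actual livekit-server types: guard the shared struct with a mutex so that concurrent ToProto-style reads and publish-side writes are serialized, which is exactly the pattern `go build -race` would stop flagging.

```go
package main

import (
	"fmt"
	"sync"
)

// TrackInfo stands in for a proto-like struct that one goroutine writes
// (e.g. on publish) while another reads it (e.g. while marshaling).
// Every access goes through the mutex, eliminating the data race.
type TrackInfo struct {
	mu    sync.Mutex
	Muted bool
}

func (t *TrackInfo) SetMuted(m bool) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.Muted = m
}

func (t *TrackInfo) Snapshot() bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	return t.Muted
}

func main() {
	ti := &TrackInfo{}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(2)
		go func() { defer wg.Done(); ti.SetMuted(true) }() // writer
		go func() { defer wg.Done(); _ = ti.Snapshot() }() // reader
	}
	wg.Wait()
	fmt.Println("muted:", ti.Snapshot())
}
```

    For hot paths where a whole mutex is too heavy, the same fields can often be replaced with sync/atomic values instead.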

    opened by msackman 3
  • Some errors from production environment.

    @davidzhao @boks1971

    I have kept all the logs this time; please check the attachment. Here are some of the errors I've highlighted. The binary I'm running was compiled from the latest source code.

    Thanks again to the LiveKit team for providing such a great product.

    2022-04-07T13:15:15.015+0800 ERROR livekit rtc/signalhandler.go:24 could not handle answer {"room": "random_newl_2_1649308512137", "roomID": "RM_4yvUv4YjLvs6", "participant": "14986346", "pID": "PA_48uPq8yM9kvh", "error": "could not set remote description: InvalidModificationError: invalid proposed signaling state transition: stable->SetRemote(answer)->stable", "errorVerbose": "InvalidModificationError: invalid proposed signaling state transition: stable->SetRemote(answer)->stable\ncould not set remote description\*ParticipantImpl).HandleAnswer\n\t/root/livekit-server/pkg/rtc/participant.go:508\\n\t/root/livekit-server/pkg/rtc/signalhandler.go:23\*RoomManager).rtcSessionWorker\n\t/root/livekit-server/pkg/service/roommanager.go:424\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1571"} /root/livekit-server/pkg/rtc/signalhandler.go:24*RoomManager).rtcSessionWorker /root/livekit-server/pkg/service/roommanager.go:424 2022-04-07T13:15:15.016+0800 INFO livekit service/rtcservice.go:225 source closed connection {"room": "random_newl_2_1649308512137", "participant": "14986346", "connID": "CO_Dyb7KRKVgoHi"}

    2022-04-06T23:10:18.447+0800 ERROR livekit.mux mux/mux.go:121 mux: ending readLoop dispatch error io: read/write on closed pipe {"room": "random_newl_2_1649257798784", "roomID": "RM_Puc9WfPrKE4n", "participant": "14956419", "pID": "PA_xa4imABDP6uT", "transport": "PUBLISHER"}*Mux).readLoop /root/godev/pkg/mod/[email protected]/internal/mux/mux.go:121

    2022-04-07T02:17:17.038+0800 ERROR livekit logger/logger.go:34 could not add down track {"participant": "14999873", "pID": "PA_5pDHbgHW2YPi", "error": "DownTrack already exist"} /root/godev/pkg/mod/[email protected]/logger/logger.go:34*MediaTrackReceiver).AddSubscriber /root/livekit-server/pkg/rtc/mediatrackreceiver.go:209*UpTrackManager).AddSubscriber /root/livekit-server/pkg/rtc/uptrackmanager.go:126*Room).onTrackPublished /root/livekit-server/pkg/rtc/room.go:616*ParticipantImpl).handleTrackPublished /root/livekit-server/pkg/rtc/participant.go:1441*ParticipantImpl).mediaTrackReceived /root/livekit-server/pkg/rtc/participant.go:1429*ParticipantImpl).onMediaTrack /root/livekit-server/pkg/rtc/participant.go:925

    2022-04-07T05:04:32.750+0800 ERROR livekit rtc/transport.go:358 could not set local description {"room": "random_newl_2_1649279054705", "roomID": "RM_JDMFLq9Hhmw7", "participant": "14988207", "pID": "PA_seQeypW9vamq", "transport": "SUBSCRIBER", "error": "InvalidStateError: connection closed"}*PCTransport).createAndSendOffer /root/livekit-server/pkg/rtc/transport.go:358*PCTransport).CreateAndSendOffer /root/livekit-server/pkg/rtc/transport.go:297*PCTransport).Negotiate.func1 /root/livekit-server/pkg/rtc/transport.go:288 2022-04-07T05:04:32.750+0800 ERROR livekit rtc/transport.go:289 could not negotiate {"room": "random_newl_2_1649279054705", "roomID": "RM_JDMFLq9Hhmw7", "participant": "14988207", "pID": "PA_seQeypW9vamq", "transport": "SUBSCRIBER", "error": "InvalidStateError: connection closed"}*PCTransport).Negotiate.func1 /root/livekit-server/pkg/rtc/transport.go:289

    2022-04-07T06:08:05.528+0800 ERROR livekit.pc [email protected]/peerconnection.go:1582 Incoming unhandled RTP ssrc(2731095130), OnTrack will not be fired. incoming SSRC failed Simulcast probing {"room": "random_newl_2_1649282864935", "roomID": "RM_embtrPdkqKjF", "participant": "14974334", "pID": "PA_ZRUNQot4VdEd", "transport": "PUBLISHER"}*PeerConnection).undeclaredMediaProcessor.func1.1 /root/godev/pkg/mod/[email protected]/peerconnection.go:1582

    2022-04-07T06:39:35.790+0800 ERROR livekit.sctp [email protected]/association.go:2427 [0xc002ec9180] retransmission failure: T1-init {"room": "random_newl_2_1649284430786", "roomID": "RM_wKVjGU9kvq84", "participant": "14732048", "pID": "PA_YpHsJfdx7fTu", "transport": "SUBSCRIBER"}*Association).onRetransmissionFailure /root/godev/pkg/mod/[email protected]/association.go:2427*rtxTimer).start.func1 /root/godev/pkg/mod/[email protected]/rtx_timer.go:158

    2022-04-07T00:22:52.557+0800 ERROR livekit rtc/mediatracksubscriptions.go:407 could not write RTCP {"room": "random_newl_2_1649262155533", "roomID": "RM_tvXi7pjd2Dtr", "participant": "14994548", "pID": "PA_MUCz9nMyTHgR", "trackID": "TR_VCLqHaSgq2cp7k", "error": "io: read/write on closed pipe"}


    opened by CensorKao 10
  • v0.15.7 (May 3, 2022)


    New features:

    • Supports IPv6 networks by default #571
    • NodeSelector to support sort options #599 (thanks @bekriebel)
    • Supports adaptiveStream flag - starts stream in a paused state for adaptive stream capable clients #623 #631


    Improvements:

    • Disallow identity that is an empty string #580
    • Returns Participant.region to clients for multi-region deployments #585
    • TrackIDs indicates the type and source of track #586
    • Reduce contention during session starts #614
    • Improved docker connectivity by using srflx candidates #624
    • Exposes Participant.isPublisher to indicate publisher vs subscriber #643
    • Reduced memory usage of internal stats accounting #645
    • Callback improvements #655 #652 #651


    Fixed:

    • Improved available layer tracking #575
    • Avoid locking in callback #588
    • Prevent negative timestamp difference #595
    • Avoid locking when flushing DownTrack #594
    • Fixes server locking up sometimes with TCP connections #606
    • Fixed dynacast settings lost after ICE restart #620
    • Increase sizes of message queues to ensure delivery reliability #638 #641
    • Fixed connections silently disconnecting due to aggressive nomination #642 #644
    • Fixed memory leaks in MessageChannel #646
    • Correctly determine number of CPUs in a non-linux environment #653
    • Fixed node-ip parameter being ignored, leading to connectivity issues in local env #661
  • v0.15.6 (Mar 29, 2022)


    New features:

    • Enable the ability to filter out certain network interfaces to avoid duplicate candidates #502
    • Support for Redis TLS connections #482 (thanks @alexbeattie42)
    • Client configuration system for detecting device specific issues/limitations #452
    • Supports TrackPublished and TrackUnpublished webhooks, along with other webhooks improvements #535
    • Unpublish tracks automatically when publish permissions are revoked for a participant #545
    • Support for upcoming Egress service


    Improvements:

    • Quality improvements to congestion controller: more stable stream allocations #532 #544 #549 #551 #557
    • Congestion controller now defaults to not pausing video by default #554
    • Passes serverRegion back when a participant is joining #479
    • Improved handling of simulcasted screenshares #503
    • Speaker events are now only emitted for audio level changes on microphone tracks #553 (thanks @sibis)
    • Dynacast now throttles downgrade events to reduce unnecessary changes #556 #558
    • Enforce size limits on room & participant metadata #566


    • Fixed potential race condition when creating RTC room #485 (thanks @b20132367)
    • Fixed panic when writing to closed RTCP channel #495
    • Fixed RTCP worker stopped due to nil packets #504
    • Prevent StreamTracker from declaring base layer video to be stopped incorrectly #530
    • Fixed connection stall when non-primary peer connection becomes disconnected #537 #548
    • Fixed timestamp jump upon layer switch #543
    • Fixed deadlocks within Pion mux with 3.1.27 #555
    • Compatibility with Go 1.18
    • Fixed connectivity with Firefox when no tracks are subscribed #565
    • Always re-issue token to prevent client disconnecting before refresh interval #569
    Source code(tar.gz)
    Source code(zip)
  • v0.15.5(Mar 2, 2022)

    Changes in 0.15.5

    • Improved default speaker detection sensitivity #427
    • Improved handling of network congestion #429 #433
    • Use padding to probe instead of higher layers #434
    • Throttle retransmissions to prevent RTX storm #440
    • Include NACK ratio in congestion detection #443
    • Fixed stream update sending incorrect publisher ID #432
    • Fixed issue where screensharing would pause with Chrome 97+ #437
    • Fixed allocation retry in TURN #445
    • Avoid deadlocks in room close #451
    • Close websocket instead of hang on connection failure #458
    • Disable default node limits #472
    Source code(tar.gz)
    Source code(zip)
  • v0.15.4(Feb 9, 2022)

    Bring your own TURN servers #409

    You can now use any custom TURN server with LiveKit, including third-party TURN services. By setting rtc.turn_servers in the config, LiveKit will configure all connected clients to use the specified TURN servers.
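    A config fragment along these lines would point clients at a custom server (the field names under rtc.turn_servers are an assumption based on LiveKit's config conventions; check your version's config reference):

    ```yaml
    rtc:
      turn_servers:
        - host: turn.example.com   # hypothetical third-party TURN host
          port: 3478
          protocol: udp
          username: user
          credential: secret
    ```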

    Bugfixes and improvements

    • Fixed deadlock causing underlying buffer to become full #413
    • Disabled SRTP replay protection to improve Firefox compatibility #394
    • Improve connection reliability over links with longer latency #405
    • Lower DTLS retransmission interval to improve initial connection speed #414
    • Disable use of ICELite by default #397 #408
    • Smoother dynamic broadcast transition #389
    • Thread safety with map traversal #388
    • Use a single buffered channel for RTCP messages #418
    • Use message versions to better prevent race #399
    • Simplification/improvement of sfu.Buffer #398 #402
    • Improved context with logging #391 #416
    Source code(tar.gz)
    Source code(zip)
  • v0.15.3(Jan 29, 2022)

    Ability to disable room auto-creation #361

    In some cases, you may want to prevent rooms from being created automatically (e.g., a streamer has ended a session, so viewers should not be able to join).

    It's possible to disable the auto-creation behavior via configuration.
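    A minimal config sketch for this, assuming the key is room.auto_create (treat the key name as an assumption if your version differs):

    ```yaml
    room:
      auto_create: false   # clients can no longer create rooms implicitly by joining
    ```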

    Automatic token refresh #365

    For long-running sessions, the session may outlive the client's connection token. livekit-server now automatically sends clients refreshed tokens, so clients always hold a valid token to reconnect with.
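    The core of such a refresh scheme is deciding, ahead of expiry, when a token must be reissued. A self-contained sketch of that check (illustrative only, not LiveKit's actual implementation; the margin value is an assumption):

    ```go
    package main

    import (
    	"fmt"
    	"time"
    )

    // needsRefresh reports whether an access token expiring at exp should be
    // reissued, using a safety margin so clients never hold an expired token.
    func needsRefresh(exp, now time.Time, margin time.Duration) bool {
    	return now.Add(margin).After(exp)
    }

    func main() {
    	now := time.Unix(1_000_000, 0)
    	exp := now.Add(10 * time.Minute)
    	fmt.Println(needsRefresh(exp, now, 1*time.Minute))  // plenty of time left
    	fmt.Println(needsRefresh(exp, now, 15*time.Minute)) // inside the margin: reissue
    }
    ```
    
    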

    RoomService returns only after operation is complete #362

    Previously, RoomService would return a response before the operation had actually completed, leading to synchronization challenges for clients. In v0.15.3, RoomService behaves as you would expect: the operation completes before it returns.

    Other Changes

    • Use ICE-Lite to let clients take controlling role #322
    • Code refactoring for improved re-use
    • Simulate scenarios to allow client tests #330
    • Prevent multiple resume notifications for track changes #334
    • Enable CORS for RoomService #335
    • Integrated logging with Pion (#341)
    • Fixed missing tracks during long latency links #346
    • Fixed race condition when the room is closing when another participant is joining at the same time #370
    • Improved transceiver-reuse, avoid sending potentially un-decodable frames to clients #382
    • Honor auto-subscribe for participants who are granted subscribe permissions after joining #381
    Source code(tar.gz)
    Source code(zip)
  • v0.15.2(Jan 6, 2022)

    Changes in 0.15.2


    Ability to dynamically publish only the layers that are being subscribed to, significantly reducing resource consumption on publishing clients. #295

    Scoped speaker updates

    Speaker updates will only be sent to subscribers. Other participants in the room will not receive updates. #280 #301

    List rooms by name(s)

    The ability to list rooms that match a particular set of names #290

    Webhook event uuid and timestamp

    Webhook callbacks now include a unique ID as well as the timestamp of the event. This enables idempotent processing of events on the listener side. #291
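    With a unique event ID, a webhook receiver can drop redelivered events. A self-contained sketch of such a deduplicator (the TTL and the handler wiring are assumptions, not part of LiveKit's webhook payload):

    ```go
    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    // seenEvents remembers recently processed webhook event IDs so that
    // redelivered events are handled at most once.
    type seenEvents struct {
    	mu  sync.Mutex
    	ids map[string]time.Time
    	ttl time.Duration
    }

    func newSeenEvents(ttl time.Duration) *seenEvents {
    	return &seenEvents{ids: make(map[string]time.Time), ttl: ttl}
    }

    // FirstTime reports whether this event ID has not been seen within the TTL,
    // recording it as seen. Expired entries are pruned on each call.
    func (s *seenEvents) FirstTime(id string, now time.Time) bool {
    	s.mu.Lock()
    	defer s.mu.Unlock()
    	for k, t := range s.ids {
    		if now.Sub(t) > s.ttl {
    			delete(s.ids, k)
    		}
    	}
    	if _, ok := s.ids[id]; ok {
    		return false
    	}
    	s.ids[id] = now
    	return true
    }

    func main() {
    	seen := newSeenEvents(time.Minute)
    	now := time.Now()
    	fmt.Println(seen.FirstTime("evt_1", now)) // first delivery: process it
    	fmt.Println(seen.FirstTime("evt_1", now)) // duplicate delivery: skip
    }
    ```
    
    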

    Track MIME type

    TrackInfo now includes a MIME type field that identifies the codec used (e.g. video/h264 or video/vp8) #292

    Participant name

    Ability to attach a participant name in addition to identity. This should be set inside the JWT token #293

    Configurable congestion control

    The ability to disable congestion control #305. This option can be set in configuration.
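    A hypothetical config fragment for this (the key names below are an assumption for illustration; consult your version's config reference for the actual setting introduced in #305):

    ```yaml
    rtc:
      congestion_control:
        enabled: false   # hypothetical key: turn off congestion control entirely
    ```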


    • Close RTCP channel after published tracks are fully closed #286
    • Fix rare deadlock when waiting on a participant that stopped publishing #288
    • Handle IP resolution failure instead of silently failing #289
    • Fixed recording service requests for specific URL 7b0db1f3446c7bfa13ed7080d5a5b7435ad58110
    Source code(tar.gz)
    Source code(zip)
  • v0.15.1(Dec 22, 2021)

    Downstream congestion control

    We are introducing a significant improvement to the core SFU. It now monitors each subscriber's connection for congestion and, when congestion is detected, controls bandwidth consumption for that subscriber by switching to lower simulcast layers at reduced bitrates. If congestion worsens, it prioritizes audio and pauses certain video tracks.

    The addition of this feature enables LiveKit to work within highly congested networks while delivering an acceptable user experience.

    Publisher-only mode

    When a participant connects without subscribe permission, the server will use the publisher PeerConnection as the default #198

    Improved connection quality updates

    Connection quality updates now support audio-only participants, with MOS-style scoring.

    Other changes

    • bugfix: participant is always present when webhook is triggered #200
    • Room.numParticipants now reflects the number of participants in a room #199
    • configurable limit for max number of tracks before a node marks itself unavailable #197
    • bugfix: send correct simulcast information in TrackInfo #218
    • docker image uses Go 1.17 #223
    • support for updated recorder protocol (to match livekit-recorder v0.3.12)
    • support for custom simulcast layers #238
    • cleaned up logging to include context #252
    • external TURN/TLS termination #168
    • improve video quality in simulcast layer selection #283
    Source code(tar.gz)
    Source code(zip)
  • v0.14.2(Nov 19, 2021)

    Bugfix release

    Lots of bugs squished and packed with improvements in the core SFU.

    • Improved health checks to avoid sending traffic to dead nodes #177 #183
    • Fixed compatibility with arm64 #178
    • For transceiver re-use, fixed retained frames from previous track #179
    • Fixed edge cases with forwarding incorrect picture id #180
    • Fixed deadlocks when multiple (>20) participants join at the exact same time #189 90f3c43dc583f5dbd244842bd3df6143e74deac7
    • Fixed connection quality updates not utilizing publisher loss stats #186
    • Fixed Room API breakage #190
    • Improved audio level indicator with Opus DTX #159
    • Supports both H.264 and VP8 by default, including mixing tracks from the same participant #128
    Source code(tar.gz)
    Source code(zip)
  • v0.14.0(Nov 5, 2021)

    Connection Quality Updates

    v0.14 introduces server-side connection quality detection. The SFU gathers metrics such as packet loss and publish/subscribe success rates to determine the quality of each participant's connection. #167

    By performing this check on the server side, all LiveKit clients will receive quality information with minimal effort.

    Connection quality information is sent to the participant itself, as well as any other participants it's subscribed to.
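    The idea of a MOS-style score derived from loss statistics can be sketched as follows. The mapping below is a toy model with assumed thresholds, not LiveKit's actual scoring formula:

    ```go
    package main

    import "fmt"

    // lossToMOS maps a packet loss percentage to a rough 1-5 MOS-style score.
    // The coefficient here is an illustrative assumption, not LiveKit's model.
    func lossToMOS(lossPct float64) float64 {
    	score := 5.0 - lossPct*0.4 // each 1% loss costs 0.4 points in this toy model
    	if score < 1.0 {
    		score = 1.0 // MOS is bounded below by 1
    	}
    	return score
    }

    func main() {
    	fmt.Println(lossToMOS(0))  // lossless: best score
    	fmt.Println(lossToMOS(5))  // moderate loss
    	fmt.Println(lossToMOS(20)) // heavy loss: floors at 1
    }
    ```
    
    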

    JS SDK v0.14.0 supports this feature, with Android, iOS, and Flutter to follow suit next week.

    Source code(tar.gz)
    Source code(zip)
  • v0.13.7(Nov 1, 2021)

    What's Changed

    • Fixes missing tracks when >3 participants join at the same time #163 (thanks to @bekriebel for reporting & verifying)

    Full Changelog:

    Source code(tar.gz)
    Source code(zip)
  • v0.13.5(Oct 20, 2021)

    Bugfix release

    • Update to pion v3.1.5, fixed handling of mixing simulcast/non-simulcast tracks 43079866a289bd8ca52cc26225958363e74ee711
    • Improve bandwidth estimation by sending abs-send-time #149
    • Correctly forward Track.Source #150
    Source code(tar.gz)
    Source code(zip)
  • v0.13.4(Oct 15, 2021)

  • v0.13.3(Oct 14, 2021)

    Region aware routing

    Introducing region-aware routing. When configured, LiveKit can route traffic to nodes that are closer to the end user. See multi-region support #135 #141 (thanks @bekriebel)

    Recording revamp

    We've revamped our recording capabilities to bring them close to a GA release. Notable changes include RTMP simulcast support and moving the pipelines from FFmpeg to GStreamer. Requires livekit-recorder v0.3.1 or higher #137

    Opus DTX support

    Opus DTX is enabled by default in this version, significantly reducing audio bandwidth utilization.

    Other change & bugfixes

    • Added routing metrics to be exposed via Prometheus #139
    • Enable Opus FEC for publishers when subscribers in the room are experiencing high loss #142
    • Transceiver re-use: with clients supporting protocol 4, livekit re-uses existing transceivers to keep their number from ballooning. #145
    • Support for Source attribute in TrackInfo #146
    Source code(tar.gz)
    Source code(zip)
  • v0.13.1(Oct 5, 2021)

    • Fixed NACK handling when simulcast is enabled 797d2607c45e6986a74ef23c98ab26183687aa6a
    • Fixed client rejoining in single-node mode cdb04248fb0af7c167d7c168142597d5d831d524
    • Upgraded to Pion v3.1.1
    • Room Metadata support #126
    Source code(tar.gz)
    Source code(zip)
  • v0.13.0(Sep 22, 2021)


    Protocol 3

    Support for protocol 3, where the subscriber connection becomes the primary one. This speeds up session establishment for participants that aren't publishing.

    Graceful termination

    When running in multi-node mode, the server now terminates gracefully, allowing remaining rooms on the node to drain. #116

    Other changes

    • Fixed mute/unmute on simulcasted tracks with less than 3 layers #114
    • Support incremental speaker updates #120
    • Webhooks for when recording is finished #125
    Source code(tar.gz)
    Source code(zip)
  • v0.12.5(Sep 6, 2021)

    Changes since v0.12.0

    • option to disable server-initiated mute/unmute (supported with JS SDK) #107
    • fixed mute/unmute loop when JS client changes mute states quickly #107
    • support for load aware node selection #94
    • support for sendData room API #88
    • windows development support #101
    • fixed panic when simulcasting low res video #102
    • recorder to use message bus #108
    • various interface updates #97 #103 #104 #105 #106
    Source code(tar.gz)
    Source code(zip)
  • v0.12.0(Aug 10, 2021)

    Feature release v0.12.0


    LiveKit can now notify your server when room events take place. See webhooks docs for configuration and details. The following server SDK versions include support for receiving webhooks:

    • server-sdk-go v0.6.0
    • server-sdk-js v0.5.1

    Recording support

    We've also included support for our upcoming recording feature. When released, it'll work with v0.12 and above.

    Source code(tar.gz)
    Source code(zip)
  • v0.11.4(Aug 1, 2021)

    Bugfix release


    • Default TLS port updated to match RFC 5766 #68
    • STUN servers are sent to clients automatically #69
    • Preparing for recording mode #70
    • Fixed external IP discovery #72
    • Fixed case where subscriber could be placed on an unavailable layer upon joining b8e1cbe4f57ebbae6676d3c744f0ae0e3eb64965
    Source code(tar.gz)
    Source code(zip)
  • v0.11.1(Jul 23, 2021)


    • Fixed force_tcp flag, correctly suppress UDP candidates when enabled #62
    • Fixed participant actions with Room API in single node configuration #67
    • Fixed participants kicked out of the room sometimes when adaptive simulcast is used (f3a17a151f8641fe00851c2d047776f72d67677e)


    Huge shoutout to @hn8 for the contributions!

    • TURN/UDP support for improved connectivity #61
    • Updated logger, consistent field names #57 #60
    • Ability to have invisible participants (preparing for recorder) #65
    Source code(tar.gz)
    Source code(zip)
  • v0.11.0(Jul 16, 2021)

    We are introducing a new feature in v0.11 that significantly improves LiveKit's handling of simulcast, particularly on the publisher side. (#51) With v0.11, publishers can indicate which layers they are actively publishing, enabling the SFU to place subscribers on currently active layers. Publishers may stop publishing a particular layer due to bandwidth or CPU constraints.

    JS SDK > v0.10.x supports adaptive publishing

    Also in this release:

    • Improved connection error messaging with validate API
    • Fixed Safari compatibility for H.264 rooms
    • Use protobuf for initial roomJoin message #52
    Source code(tar.gz)
    Source code(zip)
  • v0.10.6(Jul 13, 2021)


    • Fixes down track resync on unmute


    • Exposes /debug/rooms endpoint when running in dev mode, which displays room and participant state along with down track stats
    Source code(tar.gz)
    Source code(zip)
  • v0.10.5(Jul 11, 2021)


    • Fix glitch during layer switch with H.264 simulcast
    • Handle client reconnect after server has been restarted (#43)
    • Enhancements to active-speaker detection (#44)
    • Improves handling of Node IP in container environments (#48)


    • When ports are not explicitly configured and the --dev flag is used, single-port mode will be used to make it easier to map ports via Docker.
    Source code(tar.gz)
    Source code(zip)
  • v0.10.4(Jul 7, 2021)


    • Use multi-port mode by default (#40)
    • Optimized SFU send-loop to fully utilize all CPUs
    • Embedded TURN/TLS for strict corporate firewalls.
    Source code(tar.gz)
    Source code(zip)
  • v0.9.9(Jun 11, 2021)

    Bugfixes and performance tuning.


    • Fixed potential deadlock when subscribers leave
    • Fixed deadlock during ICE restart
    • Improved test reliability


    • Single-port performance enhancement: increased read buffer size, with a warning if misconfigured


    • Ability to send data packets to specific participants (contributed by @FeepsDev)
    Source code(tar.gz)
    Source code(zip)
  • v0.9.6(Jun 4, 2021)

  • v0.9.4(May 23, 2021)


    • Handles ICE Restart much better than before
    • Upstream ion-sfu v1.10.3
    • Actually enforce CanSubscribe permissions


    • Ability to disable auto-subscribe behavior
    Source code(tar.gz)
    Source code(zip)
  • v0.9.3(May 16, 2021)


    • exports stats via Prometheus (when prometheus_port is set)
    • supports client-initiated ICE restart, especially when mobile networks change.


    • use_external_ip is not true by default - you should set this for production deploys.
    • updated to ion-sfu v1.10.0, pion v3.0.29
    Source code(tar.gz)
    Source code(zip)
  • v0.9.0(May 7, 2021)

    Simplified data channel usage with DataChannel V2. LiveKit now provides the ability to publish data payloads to the room without needing to use data tracks. Each client opens two data channels with the server:

    • reliable channel - delivery guaranteed, much like TCP
    • lossy channel - best-effort delivery, optimized for getting data to clients as soon as possible

    Clients now have a simple interface to publish data to the room via LocalParticipant.publishData

    Data tracks that were in place before v0.9.0 are now deprecated.
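    In WebRTC terms, the two channels differ in their ordering and retransmission options. A sketch of the parameters each kind would use (the values are illustrative of standard data channel semantics, not LiveKit's exact settings):

    ```go
    package main

    import "fmt"

    // channelParams captures the WebRTC data channel options that distinguish
    // a reliable channel from a lossy one.
    type channelParams struct {
    	Ordered        bool
    	MaxRetransmits int // -1 means retransmit until delivered
    }

    // paramsFor returns the options a client would use for each channel kind.
    func paramsFor(kind string) channelParams {
    	if kind == "reliable" {
    		// ordered, retransmitted until delivered: TCP-like semantics
    		return channelParams{Ordered: true, MaxRetransmits: -1}
    	}
    	// lossy: unordered, no retransmissions, lowest latency
    	return channelParams{Ordered: false, MaxRetransmits: 0}
    }

    func main() {
    	fmt.Println(paramsFor("reliable"))
    	fmt.Println(paramsFor("lossy"))
    }
    ```
    
    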

    Source code(tar.gz)
    Source code(zip)
  • v0.8.5(Apr 28, 2021)

    This release includes significant improvements to connectivity, with single port mode being the default. With this release, LiveKit requires only three ports with no limit* to the number of connected clients.

    • Primary API port (HTTP/WS)
    • UDP data port
    • TCP data port

    It also includes a couple of features and bugfixes:

    • Handle client-driven ICE restart - when a client's network conditions change, the server reconnects the client PeerConnection without losing session state
    • Bugfixes to single port mode (race conditions)
    • Participant.JoinedAt - indicates when participant joined the room
    • DeleteRoom now immediately terminates all connected clients (via Leave responses)
    Source code(tar.gz)
    Source code(zip)
  • v0.8.1(Apr 16, 2021)

    This release contains significant improvements to stability and connectivity.


    • Single-port mode: accepts ICE traffic over a single UDP port, demuxing to different participants
    • TCP mode: support for ICE/TCP in situations that UDP isn't available (like VPN, firewalls)
    • Plan B support: initial support for Plan B-based clients.


    • Fixed potential race condition during ICE candidate exchange

    NOTE: there has been a breaking change to the config file, specifically:

    • rtc.ice_tcp_port is now rtc.tcp_port
    • rtc.udp_port should be used to activate single port mode. port_range_start and port_range_end are no longer needed
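    A migrated config might look like this (assuming the new keys are rtc.udp_port and rtc.tcp_port; the port numbers are placeholder values):

    ```yaml
    rtc:
      udp_port: 7882   # activates single port mode; replaces port_range_start/end
      tcp_port: 7881   # formerly rtc.ice_tcp_port
    ```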
    Source code(tar.gz)
    Source code(zip)