SPIRE is a toolchain of APIs for establishing trust between software systems across a wide variety of hosting platforms

Overview

SPIRE (the SPIFFE Runtime Environment) is a toolchain of APIs for establishing trust between software systems across a wide variety of hosting platforms. SPIRE exposes the SPIFFE Workload API, which can attest running software systems and issue SPIFFE IDs and SVIDs to them. This in turn allows two workloads to establish trust between each other, for example by establishing an mTLS connection or by signing and verifying a JWT. SPIRE can also enable workloads to securely authenticate to a secret store, a database, or a cloud provider service.
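
For a concrete sense of what this enables from a workload's point of view, here is a minimal Go sketch using the go-spiffe v2 library to fetch SVIDs from the Workload API and build an mTLS client configuration. The agent socket path and the expected server SPIFFE ID are assumptions for illustration:

    package main

    import (
        "context"
        "log"
        "net/http"

        "github.com/spiffe/go-spiffe/v2/spiffeid"
        "github.com/spiffe/go-spiffe/v2/spiffetls/tlsconfig"
        "github.com/spiffe/go-spiffe/v2/workloadapi"
    )

    func main() {
        ctx := context.Background()

        // Fetch X509-SVIDs and trust bundles from the SPIFFE Workload API
        // exposed by the SPIRE Agent (the socket path is an assumption).
        source, err := workloadapi.NewX509Source(ctx, workloadapi.WithClientOptions(
            workloadapi.WithAddr("unix:///tmp/spire-agent/public/api.sock")))
        if err != nil {
            log.Fatal(err)
        }
        defer source.Close()

        // Only trust peers presenting an SVID for this (hypothetical) server ID.
        serverID := spiffeid.RequireFromString("spiffe://example.org/server")
        tlsConfig := tlsconfig.MTLSClientConfig(source, source, tlsconfig.AuthorizeID(serverID))

        // Any request made with this client is mutually authenticated over mTLS.
        client := &http.Client{Transport: &http.Transport{TLSClientConfig: tlsConfig}}
        _ = client
    }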

SPIRE is hosted by the Cloud Native Computing Foundation (CNCF) as an incubation-level project. If you are an organization that wants to help shape the evolution of technologies that are container-packaged, dynamically-scheduled and microservices-oriented, consider joining the CNCF.

Get SPIRE

Learn about SPIRE

  • Before trying SPIRE, it's a good idea to learn about its architecture and design goals.
  • Once ready to get started, see the Quickstart Guides for Kubernetes, Linux, and macOS.
  • There are several examples demonstrating SPIRE usage in the spire-examples and spire-tutorials repositories.
  • Check ADOPTERS.md for a list of production SPIRE adopters, a view of the ecosystem, and use cases.
  • See the SPIRE Roadmap for a list of planned features and enhancements.
  • Join the SPIFFE community on Slack. If you have any questions about how SPIRE works, or how to get it up and running, the best places to ask questions are the SPIFFE Slack channels.
  • Download the free book about SPIFFE and SPIRE, "Solving the Bottom Turtle."

Integrate with SPIRE

For supported integration versions, see Supported Integrations.

Contribute to SPIRE

The SPIFFE community maintains the SPIRE project. Information on the various SIGs and relevant standards can be found at https://github.com/spiffe/spiffe.

Further Reading

  • The Scaling SPIRE guide covers design guidelines, recommendations, and deployment models.
  • For an explanation of how SPIRE compares to related systems such as secret stores, identity providers, authorization policy engines, and service meshes, see comparisons.

Security

Security Assessments

A third-party security firm (Cure53) completed a security audit of SPIFFE and SPIRE in February 2021. Additionally, the CNCF Technical Advisory Group for Security conducted two assessments of SPIFFE and SPIRE, in 2018 and 2020. The reports and supporting material, including the threat model exercise results, can be found below.

Reporting Security Vulnerabilities

If you've found a vulnerability or a potential vulnerability in SPIRE, please let us know at [email protected]. We'll send a confirmation email to acknowledge your report, and a follow-up email once we've determined whether the issue is valid.

Issues
  • Propose Updated Protos for the SPIRE Server API

    The SPIRE registration API was originally written to allow manipulation of SPIRE registration entries. It has since become the de-facto API for all things administrative. This includes decidedly un-registration-y things such as generating join tokens, minting arbitrary SVIDs, agent management, and bundle management.

    One problem is simple naming: the "registration API" should not include things that are unrelated to registration. Another problem is the access control we eventually want to provide: we currently have an "admin" bit that grants carte blanche access, but its scope is far too wide. Finally, the registration API has grown organically over time, adding features such as ListBy* in a compatible manner; we now have a chance to take a new approach and support this functionality in a more idiomatic way.

    This issue is done when we have loose consensus on the proto(s) for the next iteration of SPIRE management APIs.

    opened by evan2645 39
  • spire-server too high CPU usage

    • Version: 1.2.0
    • Platform: Linux 5.13.0-30-generic - 20.04.1-Ubuntu SMP x86_64
    • Subsystem: server

    spire-server is running with no agents and without any load on it. CPU consumption stays under 1% for ~50-60 minutes, then it suddenly starts consuming ~150-160% CPU, and this persists until the spire-server is shut down. The same happens with all replicas. Different environments show different results, but CPU consumption always jumps much higher after a similar amount of time (~50-60 mins). I observed the above-mentioned ~150-160% on a kind cluster, 95-100% on minikube, 100-110% on kvm/qemu, and 45-50% on a non-virtual k8s environment.

    I tried the same with spire-server 1.0.1, 1.1.0, and 1.2.0 images and got the same results. I also created a configuration without the k8s-workload-registrar, but it did not help. Most probably it depends on the configuration I use, since I have not managed to reproduce it with the reference configuration.

    Can you please tell me what is wrong in this configuration? What is the culprit and how can I fix it?

    Thanks in advance, Szilard

    opened by szvincze 36
  • Enhance AWS node attestor server plugin to validate EC2 instances across multiple AWS accounts

    We are trying to set up the SPIRE infrastructure in our production environment, which spans multiple AWS accounts. We want to deploy a SPIRE server in one of the accounts and attest the EC2 hosts in all other AWS accounts using the aws_iid node attestor.

    When both the SPIRE server and the SPIRE agent run in the same AWS account, the server is able to attest the EC2 host. However, when the server and agent are in different AWS accounts, node attestation fails with this message: ERRO[0000] agent crashed error="failed to get SVID: error getting attestation response from SPIRE server: rpc error: code = Internal desc = failed to attest: rpc error: code = Unknown desc = aws-iid: attempted attestation but an error occurred querying AWS via describe-instances: InvalidInstanceID.NotFound:
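
    For context on why this fails: the server calls DescribeInstances with its own account's credentials, so instances owned by other accounts are never found. Below is a hedged sketch (AWS SDK for Go v1) of the cross-account lookup this enhancement implies, where the server first assumes a role in the agent's account; the role name is hypothetical and such a role would need to exist in every attested account:

    package main

    import (
        "fmt"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/credentials/stscreds"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/ec2"
    )

    // describeInstanceCrossAccount assumes a role in the account that owns the
    // instance (the account ID comes from the signed instance identity document)
    // before calling DescribeInstances.
    func describeInstanceCrossAccount(accountID, region, instanceID string) (*ec2.DescribeInstancesOutput, error) {
        sess := session.Must(session.NewSession(aws.NewConfig().WithRegion(region)))

        // "spire-server-attestation" is an illustrative role name; it must
        // trust the account the SPIRE server runs in.
        roleARN := fmt.Sprintf("arn:aws:iam::%s:role/spire-server-attestation", accountID)
        creds := stscreds.NewCredentials(sess, roleARN)

        client := ec2.New(sess, &aws.Config{Credentials: creds})
        return client.DescribeInstances(&ec2.DescribeInstancesInput{
            InstanceIds: []*string{aws.String(instanceID)},
        })
    }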

    opened by sushil-prasad 26
  • SPIRE Agent Windows Support

    The SPIRE Agent should be extended to support attestation of Windows workloads. Windows supports UDS natively as of Windows 10 build 1809. @azdagron has done a POC using named pipes.

    • What are the introspection capabilities through named pipes/windows UDS?
    • How much does supporting Windows versions before build 1809 matter?
    • User experience for OS-specific plugins, e.g. the UNIX workload attestor.
    • Version: n/a
    • Platform: Windows
    • Subsystem: Agent
    opened by colek42 24
  • Implement upstream authority plugin for GCP

    Signed-off-by: ramand.dragcp [email protected]

    Pull Request checklist

    • [x] Commit conforms to CONTRIBUTING.md?
    • [x] Proper tests/regressions included?
    • [x] Documentation updated?

    Affected functionality: Adds a new upstream authority plugin backed by the GCP Certificate Authority Service (CAS).

    Description of change: Thank you for reviewing PR #2039 by @Jonpez2. I am continuing that PR here. I believe I have incorporated all of the feedback.

    A few notes:

    • Changes to go.sum and go.mod happened automatically as part of "make build"
    • We pick all the CAs in GCP CAS that are in the enabled state and match the specified label key/value pair. The one with the earliest expiration is used to create and sign intermediate CAs.
    • I have tried to include relevant documentation links for most of the GCP CAS APIs and types.

    Which issue this PR fixes

    opened by dragcp 23
  • Safer DB migration strategy in spire server cluster env

    Currently, if a SPIRE Server with DB version X starts up against a DB with version < X, SPIRE will auto-migrate the DB to X. Backwards compatibility does not exist.

    Problems:

    • In a distributed environment this is fatal to the other SPIRE Servers: they are at a DB version < X and fail to start, since they cannot migrate backwards
    • The migration is uncontrolled; in such an environment we are forced to commit to upgrading every single Server and to hope that the new DB schema is good

    Some (non-mutually exclusive) Solutions:

    • A flag in the Server config to skip auto-migration; without other changes this is fatal to the newer Server, but it lets the older ones continue operating
    • Proper semantic versioning of the DB versions: when starting up, if the DB is at a later version than the Server expects and the new version does not indicate a breaking change, the old Server should be able to continue operating without performing a backwards migration (see the sketch after this list)
    • A method of rolling back the DB version
    • A method of a later Server being backwards compatible for older DB versions
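
    A minimal sketch of the version-gating idea behind the semantic-versioning solution, assuming the database records the minimum server version that can still read its schema (all names are illustrative, not SPIRE's actual migration code):

    package datastore

    import "fmt"

    // checkCompat decides whether this server may run against the database.
    // codeVersion is the schema version the binary was built for, dbVersion is
    // what the database reports, and dbMinCompat is the oldest code version the
    // schema declares itself compatible with.
    func checkCompat(codeVersion, dbVersion, dbMinCompat int, autoMigrate bool) error {
        switch {
        case dbVersion == codeVersion:
            return nil // nothing to do
        case dbVersion < codeVersion && autoMigrate:
            return nil // proceed with a controlled forward migration
        case dbVersion < codeVersion:
            return fmt.Errorf("schema %d is behind %d and auto-migration is disabled", dbVersion, codeVersion)
        case codeVersion >= dbMinCompat:
            return nil // newer schema, but declared compatible: keep running
        default:
            return fmt.Errorf("schema %d requires at least server version %d", dbVersion, dbMinCompat)
        }
    }
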
    opened by amoore877 21
  • Implement an Informer strategy for k8s-workload-registrar

    The registrar gains an option to use an informer instead of a webhook. In this mode, it watches the k8s API instead of listening for updates from a webhook.

    The controller code is extended so that when entries are added, any outdated entries for the same pod are removed. This means that label changes are now reflected in the registration entries.

    Update client-go version to kubernetes 1.15 to get past API changes. Regenerate mocks to match.
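
    A rough sketch of this strategy using client-go's shared informers; the entry-reconciliation calls are placeholders for the registrar's controller logic:

    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)

        // Watch pods through the k8s API instead of receiving webhook calls.
        factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
        podInformer := factory.Core().V1().Pods().Informer()
        podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                pod := obj.(*corev1.Pod)
                fmt.Printf("create entries for pod %s/%s\n", pod.Namespace, pod.Name)
            },
            UpdateFunc: func(_, newObj interface{}) {
                pod := newObj.(*corev1.Pod)
                // Label changes arrive here: remove outdated entries for the
                // same pod and add the replacements.
                fmt.Printf("reconcile entries for pod %s/%s\n", pod.Namespace, pod.Name)
            },
            DeleteFunc: func(obj interface{}) {
                if pod, ok := obj.(*corev1.Pod); ok {
                    fmt.Printf("delete entries for pod %s/%s\n", pod.Namespace, pod.Name)
                }
            },
        })

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)
        factory.WaitForCacheSync(stop)
        select {} // run until killed
    }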

    opened by asuffield 20
  • [RFC] Registrations RBAC

    Background: SPIRE implements the concept of "admin" workloads. A workload presenting an admin SVID is able to act broadly across the entire registration API. In an ecosystem where multiple registrants are creating and deleting registrations, this means registrants can affect registrations created by other registrants. Therefore, we need a way to limit a registrant's access through the registration APIs to a subset of SPIFFE IDs. More background discussion can be found in GitHub issue #716.

    Possible solutions:

    1. Plug-in-based approach (Registration authorizer): The registration authorizer plug-in would be invoked in the path of the create, update, and delete registration RPCs. This allows the plug-in to see the requestor's identity, the action to be performed, and the subject of the action. It returns a boolean response denoting whether the requestor identity is authorized to perform the action on the registration with the specific SPIFFE ID present in the RPC request. We currently have a "Notifier" plug-in which gets invoked after each bundle update call; the registration authorizer plug-in would behave in a similar way. Since these two plug-ins are intended for very different purposes, I'm proposing to create a new one instead of overloading the existing "Notifier" plug-in. This approach offers more flexibility, as users can implement their own authorization model for different use cases or integrate with any other external authorization solution. By default we will ship a Noop plug-in which always returns true, to ensure backward compatibility.

    2. OPA-based config: In this approach, we make changes to SPIRE core to include an OPA-based config which defines admin registration authorization. A basic implementation can be found here. This approach would make authorization a first-class citizen of SPIRE, which brings both advantages and challenges. All SPIRE users would get authorization by default, but it means arriving at consensus on a basic permission model that addresses most use cases. There will also be compatibility challenges when modifying the permission model while the feature is settling into production environments.

    Sample implementation of the Registration Authorizer plug-in: A sample permission model for registration RPCs in the plug-in could look like this:

    Actor : Role,
    Role : [{
        Actions: [],
        Resources: [],
        Namespaces: [],
    }, {
        Actions: [],
        Resources: [],
        Namespaces: [],
    }]

    Example:

    spiffe://example.ca/foo/workload : foobarRegistrant,
    foobarRegistrant : [{
        Actions: ["CREATE", "UPDATE", "DELETE"],
        Resources: ["REGISTRATIONS"],
        Namespaces: ["spiffe://example.ca/foo", "spiffe://example.ca/bar"]
    }]

    In the above example, the "Actor" is the spiffe://example.ca/foo/workload admin workload, which is mapped to the "Role" of the registrant. The foobarRegistrant role would allow "Actions" like create, update, and delete on specific namespaces. The "Namespaces" field holds prefixes of SPIFFE IDs on which the role allows the above Actions. During each lookup, the plug-in will look up the role and action, then verify the namespace of the requested identity. A role can be shared by different actors. Namespaces are prefixes of SPIFFE IDs. Resources are the set of RPCs to which a given role's authorization applies (initially just the registration RPCs; this field is added so that we can extend it to other RPCs in the future). The following RPCs will invoke the plug-in:

    • Create registration
    • Update registration
    • Delete registration

    The latency impact of the plug-in on registration RPCs would be proportional to the number of actors (registrants) and roles, but assuming the lookup can be performed in memory, overall RPC latency should not increase measurably. As a first step we can ship the Noop plug-in as the default; the details of the actual registration authorization plug-in implementation can be discussed separately.
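
    A minimal sketch of the prefix-based check described above, collapsing the Actor-to-Role indirection into one map for brevity (all names are illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // Role mirrors the sample permission model above.
    type Role struct {
        Actions    []string
        Resources  []string
        Namespaces []string // SPIFFE ID prefixes the role may act on
    }

    // roles maps an actor's SPIFFE ID to its role.
    var roles = map[string]Role{
        "spiffe://example.ca/foo/workload": {
            Actions:    []string{"CREATE", "UPDATE", "DELETE"},
            Resources:  []string{"REGISTRATIONS"},
            Namespaces: []string{"spiffe://example.ca/foo", "spiffe://example.ca/bar"},
        },
    }

    // authorized reports whether actor may perform action on the resource whose
    // SPIFFE ID is target: the action and resource must be listed in the role,
    // and target must fall under one of the role's namespace prefixes.
    func authorized(actor, action, resource, target string) bool {
        role, ok := roles[actor]
        if !ok {
            return false
        }
        if !contains(role.Actions, action) || !contains(role.Resources, resource) {
            return false
        }
        for _, ns := range role.Namespaces {
            if strings.HasPrefix(target, ns) {
                return true
            }
        }
        return false
    }

    func contains(list []string, s string) bool {
        for _, v := range list {
            if v == s {
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println(authorized("spiffe://example.ca/foo/workload",
            "CREATE", "REGISTRATIONS", "spiffe://example.ca/foo/db")) // true
    }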

    Request For Comments: This proposal is intended to gather feedback on the overall approach to authorizing admin workloads, as this is becoming an increasingly essential feature for hardening the SPIRE ecosystem in production.

    opened by prasadborole1 19
  • Add failOver state for datastore operations

    Background: When read replicas are enabled for SPIRE, all reads in the GetNodeSelectors and ListRegistrationEntries datastore operations are directed to read replicas with the tolerate-stale flag set to true, allowing some data staleness. This feature helps scale the control plane horizontally by moving read-heavy operations to read replicas.

    Problem: We recently conducted a few game-day experiments with SPIRE, especially around the availability of SPIRE during downtime of the SPIRE database. We observed certain availability bottlenecks in FetchX509SVID calls due to their dependency on the primary database, which could potentially be mitigated with read replicas.

    A summary of the experiment is added below.

    Scenario: Brief outage of the SPIRE DB write cluster. [Four screenshots: FetchX509SVID availability and database read metrics captured during the outage.]

    The first pic shows the availability drop in FetchX509SVID success. The second pic shows the corresponding drop in database reads: the yellow line represents reads on the write DB instance, and the orange and green lines represent reads on the read replicas.

    The third and fourth pics show the availability drop of the FetchAttestedNode and FetchBundle datastore operations during this experiment.

    FetchX509SVID internally calls the FetchAttestedNode and FetchBundle datastore operations, neither of which is allowed to go through read replicas. Agent SVIDs are validated by fetching the attested node from the write DB as the first step of FetchX509SVID. Hence, during the event, the availability of the FetchX509SVID API went down to zero, since all calls to the write DB instance started failing.

    Proposed Solution: To maintain availability of a critical API such as FetchX509SVID during a primary DB outage, it is preferable to tolerate some staleness in the datastore operations. To propagate the tolerated staleness across all necessary datastore operations, we could define a global boolean state flag within the datastore modules, such as readOnly/failOver (default: false), which most datastore read operations would use to direct all reads to read replicas when set to true. Clients can build their own circuit-breaking mechanism to switch to the failover state in the event of a failure.

    With failover logic like the above, an outage of the control plane will only impact new nodes and workloads; existing agents and workloads will see minimal impact and continue to be served by the read replicas.

    We could also discuss other potential failover alternatives to improve the fault tolerance and availability of the FetchX509SVID API.
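
    A minimal sketch of the proposed flag, assuming an atomic boolean flipped by a client-side circuit breaker (names are illustrative, not SPIRE's datastore code):

    package datastore

    import "sync/atomic"

    // failOver is the proposed global state flag; when a circuit breaker detects
    // a primary DB outage it flips the flag, and read operations start tolerating
    // staleness so they can be served by read replicas.
    var failOver atomic.Bool

    // SetFailOver is called by the client-side circuit-breaking mechanism.
    func SetFailOver(on bool) { failOver.Store(on) }

    // tolerateStale reports whether a read may go to a replica: either the caller
    // explicitly tolerates staleness (as GetNodeSelectors and
    // ListRegistrationEntries already do) or the failover state is active.
    func tolerateStale(explicit bool) bool {
        return explicit || failOver.Load()
    }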

    opened by kirutthika 18
  • Set CN and DNS SANs in registration API

    SPIFFE identities are going to be the primary identifier for applications in our network, but x509 certificates are useful for many other things.

    Most notably, a service may, in our environment, sit behind a load balancer (TCP/Layer 4). We'd like "simple" HTTPS clients to be compatible with them, so we want the ability to put our own DNS SANs on certificates. As well, we'd like to set subjects with information useful for identifying exactly which instance you're talking to.

    Given that the Registration API is, I believe, officially unstable, it seems low-risk to incorporate this into the API. The CLI could gate these features behind an experimental option, as federation is. I think the maintenance burden of this feature is fairly low, and it would make SPIRE much easier for us to adopt. We may be able to contribute it; I wanted to get feedback on whether this feature is acceptable first, though.

    opened by mcpherrinm 18
  • RFC: Delegated Identity API implementation

    This is a request-for-comments PR regarding the initial implementation of the new Delegated Identity API.

    Two RPCs are available in this new privileged API: SubscribeToX509SVIDs and SubscribeToX509Bundles.

    The SubscribeToX509SVIDs RPC allows a privileged client to get X509-SVIDs for a given workload. This method uses a unidirectional (server-to-client) gRPC stream.

    The request is used to subscribe a workload, and the response (a stream from server to client) is used to send SVID updates for the subscribed workload. The client subscription is based on the workload's selectors: X509-SVIDs for registration entries matching all of the passed selectors are returned. Federated trust domain bundles are also returned where applicable.

    The SubscribeToX509Bundles RPC streams the local bundle and all federated bundles.
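
    A hedged sketch of how a privileged client could consume the SubscribeToX509SVIDs stream; the import path and message fields follow the published spire-api-sdk layout and may differ from this PR, and the admin socket path is an assumption:

    package main

    import (
        "context"
        "io"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"

        delegatedidentityv1 "github.com/spiffe/spire-api-sdk/proto/spire/api/agent/delegatedidentity/v1"
        "github.com/spiffe/spire-api-sdk/proto/spire/api/types"
    )

    func main() {
        // The privileged client reaches the agent over its admin endpoint.
        conn, err := grpc.Dial("unix:///tmp/spire-agent/private/admin.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client := delegatedidentityv1.NewDelegatedIdentityClient(conn)

        // Subscribe on behalf of a workload identified by its selectors; SVIDs
        // for entries matching all passed selectors arrive on the stream.
        stream, err := client.SubscribeToX509SVIDs(context.Background(),
            &delegatedidentityv1.SubscribeToX509SVIDsRequest{
                Selectors: []*types.Selector{{Type: "unix", Value: "uid:1000"}},
            })
        if err != nil {
            log.Fatal(err)
        }
        for {
            resp, err := stream.Recv()
            if err == io.EOF {
                return
            }
            if err != nil {
                log.Fatal(err)
            }
            log.Printf("received %d X509-SVID(s)", len(resp.X509Svids))
        }
    }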

    Two recorded demos are available, one using the SubscribeToX509SVIDs RPC and one using SubscribeToX509Bundles; for the setup I used the federation environment tutorial.

    We would love to receive any feedback from the community! Special thanks to Mauricio for the major work.

    ref #2246

    Co-authored-by: Raphael Campos [email protected] Co-authored-by: Thiago Navarro [email protected] Signed-off-by: Mauricio Vásquez [email protected]

    opened by rscampos 17
  • Windows support: review permissions in directories created

    SPIRE creates some directories that store sensitive data, with permissions set to restrict access (e.g. the agent and server data directories). On Windows, those directories are created with the CreateDirectory function using a NULL security descriptor. As a result, each directory gets a default security descriptor, and its ACLs are inherited from the parent directory.

    This means that the end user must take the ACLs of the directory where SPIRE is deployed into account in order to properly secure the sensitive data stored by SPIRE.

    I think a better security posture (considering that we strive to keep a "secure by default" policy) would be to set a security descriptor on those directories that restricts access to the creator owner only.
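
    A minimal sketch of creator-owner-only directory creation with golang.org/x/sys/windows, using an SDDL string for a protected DACL; this illustrates the idea rather than SPIRE's implementation, and the SecurityAttributes layout assumes a recent x/sys/windows:

    //go:build windows

    package main

    import (
        "unsafe"

        "golang.org/x/sys/windows"
    )

    // createRestrictedDir creates a directory whose DACL grants full access to
    // the creator/owner only, instead of inheriting ACLs from the parent the
    // way a NULL security descriptor does.
    func createRestrictedDir(path string) error {
        // SDDL: protected DACL (P) with one ACE granting file-all (FA) access
        // to the creator/owner (OW).
        sd, err := windows.SecurityDescriptorFromString("D:P(A;;FA;;;OW)")
        if err != nil {
            return err
        }
        sa := &windows.SecurityAttributes{
            Length:             uint32(unsafe.Sizeof(windows.SecurityAttributes{})),
            SecurityDescriptor: sd,
        }
        p, err := windows.UTF16PtrFromString(path)
        if err != nil {
            return err
        }
        return windows.CreateDirectory(p, sa)
    }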

    priority/backlog 
    opened by amartinezfayo 0
  • Windows support: directories for UDS endpoints are created

    On POSIX platforms, we create directories for Unix domain socket endpoints. On Windows, named pipes are used instead of Unix domain sockets, but we still create the directories. Prevent the creation of those directories on Windows.

    priority/urgent 
    opened by amartinezfayo 0
  • SpiffeID CRD dnsNames no longer populated in SAN

    After updating from 1.3.0 to 1.3.1, I'm seeing that the dnsNames defined in a workload's SpiffeID CRD are no longer being populated in the certificate's X509v3 Subject Alternative Name.

    SpiffeID CRD:

    spec:
      dnsNames:
      - my-workload-75f5b67547-mnz29
      - my-workload.my-namespace.svc
      spiffeId: spiffe://example.org/ns/my-namespace/sa/my-workload 
    

    SAN when running with 1.3.0:

    X509v3 Subject Alternative Name:
                    DNS:my-workload-75f5b67547-mnz29, DNS:my-workload.my-namespace.svc, URI:spiffe://example.org/ns/my-namespace/sa/my-workload
    

    SAN when running with 1.3.1:

    X509v3 Subject Alternative Name:
                    URI:spiffe://example.org/ns/my-namespace/sa/my-workload
    
    triage/in-progress 
    opened by sjberman 2
  • Incompatibility WorkloadAttestation k8s and Kubeedge k8s

    We have a Kubernetes cluster that includes nodes connected via kubeedge (https://kubeedge.io/) and running cri-o. We use SPIRE in this system, and the kubeedge nodes also run SPIRE agents.

    Sadly, the k8s workload attestor was incompatible with our system, though we managed to patch it temporarily on SPIRE v1.0.0. The problem is that /proc/{PID}/cgroup looks different on our system than on a "normal" k8s node.

    "Normal" k8s node:

    12:hugetlb:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod169fbc47_41d0_4df5_be62_6b231010b6c4.slice/docker-bbf76635888294204963d9e32e4fae57dcbf8eddb7adeecbd55672cd5fbc2249.scope
    11:freezer:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod169fbc47_41d0_4df5_be62_6b231010b6c4.slice/docker-bbf76635888294204963d9e32e4fae57dcbf8eddb7adeecbd55672cd5fbc2249.scope
    10:net_cls,net_prio:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod169fbc47_41d0_4df5_be62_6b231010b6c4.slice/docker-bbf76635888294204963d9e32e4fae57dcbf8eddb7adeecbd55672cd5fbc2249.scope
    9:perf_event:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod169fbc47_41d0_4df5_be62_6b231010b6c4.slice/docker-bbf76635888294204963d9e32e4fae57dcbf8eddb7adeecbd55672cd5fbc2249.scope
    8:memory:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod169fbc47_41d0_4df5_be62_6b231010b6c4.slice/docker-bbf76635888294204963d9e32e4fae57dcbf8eddb7adeecbd55672cd5fbc2249.scope
    7:blkio:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod169fbc47_41d0_4df5_be62_6b231010b6c4.slice/docker-bbf76635888294204963d9e32e4fae57dcbf8eddb7adeecbd55672cd5fbc2249.scope
    6:pids:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod169fbc47_41d0_4df5_be62_6b231010b6c4.slice/docker-bbf76635888294204963d9e32e4fae57dcbf8eddb7adeecbd55672cd5fbc2249.scope
    5:cpu,cpuacct:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod169fbc47_41d0_4df5_be62_6b231010b6c4.slice/docker-bbf76635888294204963d9e32e4fae57dcbf8eddb7adeecbd55672cd5fbc2249.scope
    4:rdma:/
    3:cpuset:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod169fbc47_41d0_4df5_be62_6b231010b6c4.slice/docker-bbf76635888294204963d9e32e4fae57dcbf8eddb7adeecbd55672cd5fbc2249.scope
    2:devices:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod169fbc47_41d0_4df5_be62_6b231010b6c4.slice/docker-bbf76635888294204963d9e32e4fae57dcbf8eddb7adeecbd55672cd5fbc2249.scope
    1:name=systemd:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod169fbc47_41d0_4df5_be62_6b231010b6c4.slice/docker-bbf76635888294204963d9e32e4fae57dcbf8eddb7adeecbd55672cd5fbc2249.scope
    0::/
    

    our edge node:

    0::/../crio-45490e76e0878aaa4d9808f7d2eefba37f093c3efbba9838b6d8ab804d9bd814.scope
    

    As mentioned, patching the workload attestor in SPIRE 1.0.0 was possible by adapting the regex that extracts the container ID. But since https://github.com/spiffe/spire/pull/2600 (v1.1.1), the algorithm also checks for the PodUID. As this information is not available in the cgroup on our system, patching is no longer possible.

    A solution inside SPIRE would be highly appreciated, but hints on how to configure our edge node could also help.
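
    For illustration, here is a hedged sketch of the kind of patch described: a regex keyed only on the container-ID scope segment, which matches both cgroup layouts shown above (this is not SPIRE's actual attestor code):

    package main

    import (
        "fmt"
        "regexp"
    )

    // containerIDRe matches both the systemd-style paths on a "normal" k8s node
    // (docker-<64 hex chars>.scope) and the cgroup v2 style path on the
    // kubeedge cri-o node (crio-<64 hex chars>.scope).
    var containerIDRe = regexp.MustCompile(`(?:docker|crio)-([[:xdigit:]]{64})\.scope`)

    func extractContainerID(cgroupLine string) (string, bool) {
        m := containerIDRe.FindStringSubmatch(cgroupLine)
        if m == nil {
            return "", false
        }
        return m[1], true
    }

    func main() {
        fmt.Println(extractContainerID("0::/../crio-45490e76e0878aaa4d9808f7d2eefba37f093c3efbba9838b6d8ab804d9bd814.scope"))
    }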

    • Version: v1.2.3
    • Platform: Linux 1bdf700d-f0dd-4eb4-89eb-47eb8acfd704 5.18.4-1-default #1 SMP PREEMPT_DYNAMIC Wed Jun 15 06:00:33 UTC 2022 (ed6345d) x86_64 x86_64 x86_64 GNU/Linux, MicroOS Opensuse, cri-o, kubeedge
    • Subsystem: workloadattestor k8s
    help wanted priority/backlog 
    opened by goergch 1
  • Implement LRU cache for storing SVIDs in SPIRE Agent

    This updates the SPIRE agent SVID cache to be an LRU cache. The cache has experimental config fields such as MaxSvidCacheSize and SVIDCacheExpiryPeriod.

    1. The size limit of the SVID cache is a soft limit: if an SVID has a subscriber present, that SVID is never removed from the cache.
    2. Least recently used SVIDs are removed from the cache only after the cache expiry period has passed, to reduce overall cache churn.
    3. The last-access timestamp of an SVID cache entry is updated when a new subscriber is created.
    4. When a new subscriber is created and there is a cache miss, the subscriber needs to wait for the next SVID sync event to receive a WorkloadUpdate with the newly minted SVID (see the eviction sketch below).
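
    A minimal sketch of the eviction rules above (names loosely mirror the experimental config fields; this is illustrative, not the agent's actual cache code):

    package cache

    import "time"

    // svidEntry is an illustrative record in the agent's SVID cache.
    type svidEntry struct {
        subscribers int       // active subscribers pin the entry (soft limit)
        lastAccess  time.Time // refreshed when a new subscriber is created
    }

    // evictable reports whether an entry may be removed: it must have no
    // subscribers and must have been idle longer than the expiry period.
    func evictable(e svidEntry, expiry time.Duration, now time.Time) bool {
        return e.subscribers == 0 && now.Sub(e.lastAccess) > expiry
    }

    // pruneLRU walks entries from least to most recently used (assumed sorted
    // by lastAccess ascending) and drops evictable ones until the cache fits
    // the soft size limit.
    func pruneLRU(entries []svidEntry, maxSize int, expiry time.Duration, now time.Time) []svidEntry {
        kept := make([]svidEntry, 0, len(entries))
        for i, e := range entries {
            over := len(kept)+(len(entries)-i) > maxSize
            if over && evictable(e, expiry, now) {
                continue // evict
            }
            kept = append(kept, e)
        }
        return kept
    }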

    More context: #2940

    Testing:

    In addition to the new integration test, I also tested locally with 9k registrations per agent and validated:

    1. SVIDs are properly cached per the subscriber's selector set
    2. SVIDs are removed from the cache after expiry
    3. With a workload associated with 8k entries, the CLI command fails with the expected error codes (DeadlineExceeded or ResourceExhausted) due to the huge response size, but the SVID cache is still constructed properly
    opened by prasadborole1 1
  • Performance Issue with excessive Get Bundle Requests

    • Version: N/A
    • Platform: N/A
    • Subsystem: server

    Hello,

    We observed some performance issues during failover testing when the spire server received a large number of GetBundle requests. The performance hit comes from an excessive number of pulls from the underlying datastore to fetch the trust bundle. I was wondering what the community's opinion is, or whether there has been discussion about adding a caching layer over the datastore for retrieving trust bundles.

    Additionally, I've noticed in https://github.com/spiffe/spire/blob/main/pkg/server/cache/dscache/cache.go that there is already some logic implemented for a datastore caching mechanism for fetching/updating trust bundles. I am curious how that is utilized in a deployment setting.
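
    For discussion, a minimal sketch of the kind of TTL caching layer being suggested over the datastore's bundle fetch (all names are hypothetical):

    package main

    import (
        "sync"
        "time"
    )

    // bundleCache memoizes the trust bundle for a short TTL so that a burst of
    // GetBundle requests results in a single datastore fetch.
    type bundleCache struct {
        mu      sync.Mutex
        bundle  []byte
        expires time.Time
        ttl     time.Duration
        fetch   func() ([]byte, error) // the underlying datastore call
    }

    func (c *bundleCache) Get() ([]byte, error) {
        c.mu.Lock()
        defer c.mu.Unlock()
        if c.bundle != nil && time.Now().Before(c.expires) {
            return c.bundle, nil // served from cache, no datastore hit
        }
        b, err := c.fetch()
        if err != nil {
            return nil, err
        }
        c.bundle, c.expires = b, time.Now().Add(c.ttl)
        return b, nil
    }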

    Would love to hear your thoughts and opinions, thanks!

    triage/in-progress 
    opened by patricka3125 1