Connect, secure, control, and observe services.

Overview

Istio


An open platform to connect, manage, and secure microservices.

  • For in-depth information about how to use Istio, visit istio.io
  • To ask questions and get assistance from our community, visit discuss.istio.io
  • To learn how to participate in our overall community, visit our community page

In this README:

  • Introduction
  • Repositories
  • Issue management

In addition, you'll find many other useful documents on our Wiki.

Introduction

Istio is an open platform for providing a uniform way to integrate microservices, manage traffic flow across microservices, enforce policies and aggregate telemetry data. Istio's control plane provides an abstraction layer over the underlying cluster management platform, such as Kubernetes.

Istio is composed of these components:

  • Envoy - Sidecar proxies per microservice to handle ingress/egress traffic between services in the cluster and from a service to external services. The proxies form a secure microservice mesh providing a rich set of functions like discovery, rich layer-7 routing, circuit breakers, policy enforcement and telemetry recording/reporting functions.

    Note: The service mesh is not an overlay network. It simplifies and enhances how microservices in an application talk to each other over the network provided by the underlying platform.

  • Istiod - The Istio control plane. It provides service discovery, configuration and certificate management. It consists of the following sub-components:

    • Pilot - Responsible for configuring the proxies at runtime.

    • Citadel - Responsible for certificate issuance and rotation.

    • Galley - Responsible for validating, ingesting, aggregating, transforming and distributing config within Istio.

  • Operator - This component provides user-friendly options to operate the Istio service mesh (see the configuration sketch below).
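
The Operator is driven by the IstioOperator API. As an illustrative sketch (the resource name is a placeholder, not taken from this repository), a minimal IstioOperator that selects the default profile and enables the ingress gateway looks roughly like this:

    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      name: example-istiocontrolplane   # placeholder name
      namespace: istio-system
    spec:
      profile: default
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: true

A file like this is typically applied with istioctl install -f <file>, or reconciled by the in-cluster operator controller.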

Repositories

The Istio project is divided across a few GitHub repositories:

  • istio/api. This repository defines component-level APIs and common configuration formats for the Istio platform.

  • istio/community. This repository contains information on the Istio community, including the various documents that govern the Istio open source project.

  • istio/istio. This is the main code repository. It hosts Istio's core components, install artifacts, and sample programs. It includes:

    • istioctl. This directory contains code for the istioctl command line utility.

    • operator. This directory contains code for the Istio Operator.

    • pilot. This directory contains platform-specific code to populate the abstract service model, dynamically reconfigure the proxies when the application topology changes, and translate routing rules into proxy-specific configuration.

    • security. This directory contains security-related code, including Citadel (acting as the Certificate Authority), the Citadel agent, etc.

  • istio/proxy. The Istio proxy contains extensions to the Envoy proxy (in the form of Envoy filters) that support authentication, authorization, and telemetry collection.

Issue management

We use GitHub to track all of our bugs and feature requests. Each issue we track has a variety of metadata:

  • Epic. An epic represents a feature area for Istio as a whole. Epics are fairly broad in scope and are basically product-level things. Each issue is ultimately part of an epic.

  • Milestone. Each issue is assigned a milestone. This is 0.1, 0.2, ..., or 'Nebulous Future'. The milestone indicates when we think the issue should get addressed.

  • Priority. Each issue has a priority which is represented by the column in the Prioritization project. Priority can be one of P0, P1, P2, or >P2. The priority indicates how important it is to address the issue within the milestone. P0 says that the milestone cannot be considered achieved if the issue isn't resolved.

Issues
  • How to assign multiple labels to a VM?

    (This is used to request new product features; please visit https://discuss.istio.io for questions on using Istio.)

    Describe the feature request: assign multiple labels to a VM.

    Affected product area (please put an X in all that apply)

    [ ] Docs [X] Installation [ ] Networking [ ] Performance and Scalability [ ] Extensions and Telemetry [ ] Security [ ] Test and Release [ ] User Experience [ ] Developer Infrastructure

    Affected features (please put an X in all that apply)

    [ ] Multi Cluster [x] Virtual Machine [ ] Multi Control Plane

    Additional context

    docs: https://istio.io/latest/docs/setup/install/virtual-machine/

    Automatically generated configuration files:

    cluster.env

    CANONICAL_REVISION='latest'
    CANONICAL_SERVICE='vm1app1'
    ISTIO_INBOUND_PORTS='*'
    ISTIO_LOCAL_EXCLUDE_PORTS='15090,15021,15020'
    ISTIO_METAJSON_LABELS='{"app":"vm1app1","service.istio.io/canonical-name":"vm1app1","service.istio.io/canonical-version":"latest"}'
    ISTIO_META_AUTO_REGISTER_GROUP='vm1app1'
    ISTIO_META_CLUSTER_ID='Kubernetes'
    ISTIO_META_DNS_CAPTURE='true'
    ISTIO_META_MESH_ID=''
    ISTIO_META_NETWORK=''
    ISTIO_META_WORKLOAD_NAME='vm1app1'
    ISTIO_NAMESPACE='vm1'
    ISTIO_SERVICE='vm1app1.vm1'
    ISTIO_SERVICE_CIDR='*'
    POD_NAMESPACE='vm1'
    SERVICE_ACCOUNT='vm1-sa'
    TRUST_DOMAIN='cluster.local'
    

    mesh.yaml

    defaultConfig:
      discoveryAddress: istiod.istio-system.svc:15012
      envoyAccessLogService:
        address: skywalking-oap.istio-system:11800
      proxyMetadata:
        CANONICAL_REVISION: latest
        CANONICAL_SERVICE: vm1app1
        ISTIO_META_AUTO_REGISTER_GROUP: vm1app1
        ISTIO_META_CLUSTER_ID: Kubernetes
        ISTIO_META_DNS_CAPTURE: "true"
        ISTIO_META_MESH_ID: ""
        ISTIO_META_NETWORK: ""
        ISTIO_META_WORKLOAD_NAME: vm1app1
        ISTIO_METAJSON_LABELS: '{"app":"vm1app1","service.istio.io/canonical-name":"vm1app1","service.istio.io/canonical-version":"latest"}'
        POD_NAMESPACE: vm1
        SERVICE_ACCOUNT: vm1-sa
        TRUST_DOMAIN: cluster.local
      tracing:
        zipkin:
          address: zipkin.istio-system:9411
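
    For reference, labels for a VM workload are normally declared under spec.metadata.labels of the corresponding WorkloadGroup, so more than one label can be listed there. A minimal sketch reusing the names from the generated files above (the extra version label is hypothetical):

    apiVersion: networking.istio.io/v1alpha3
    kind: WorkloadGroup
    metadata:
      name: vm1app1
      namespace: vm1
    spec:
      metadata:
        labels:
          app: vm1app1
          version: latest   # hypothetical second label
      template:
        serviceAccount: vm1-sa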
    
    
    kind/enhancement area/environments feature/Virtual-machine 
    opened by cfanbo 0
  • istioctl install fails when adding a gateway service annotation that includes a forward slash

    Bug Description

    I need to add the service.beta.kubernetes.io/azure-load-balancer-resource-group service annotation to the ingress gateway, but I receive the following error:

    ✔ Istio core installed
    ✔ Istiod installed
    - Processing resources for Ingress gateways.                                                                                                                                                2021-09-22T23:21:49.577898Z      error   installer       failed to update resource with server-side apply for obj Service/istio-system/istio-ingressgateway: Service "istio-ingressgateway" is invalid: metadata.annotations: Invalid value: "service.beta.kubernetes.io\\azure-load-balancer-resource-group": name part must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName',  or 'my.name',  or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]')
    ✘ Ingress gateways encountered an error: failed to update resource with server-side apply for obj Service/istio-system/istio-ingressgateway: Service "istio-ingressgateway" is invalid: metadata.annotations: Invalid value: "service.beta.kubernetes.io\\azure-load-balancer-resource-group": name part must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName',  or 'my.name',  or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]')
    - Pruning removed resources                                                                                                                                                                 Error: failed to install manifests: errors occurred during operation
    

    I have tried multiple variations to determine the issue.

    This doesn't work:

    istioctl install `
        --set profile=default `
        --set "values.gateways.istio-ingressgateway.serviceAnnotations.service\.beta\.kubernetes\.io/azure-load-balancer-resource-group=test1" `
    

    If I remove azure-load-balancer-resource-group and the forward slash, I don't receive the error:

    istioctl install `
        --set profile=default `
        --set "values.gateways.istio-ingressgateway.serviceAnnotations.service\.beta\.kubernetes\.io=test1" `
        --skip-confirmation
    

    I even tried using the annotation from istioctl install --help:

    istioctl install `
        --set profile=default `
        --set "values.gateways.istio-ingressgateway.serviceAnnotations.container\.apparmor\.security\.beta\.kubernetes\.io/istio-proxy=runtime/default" `
        --skip-confirmation
    

    which results in the error:

    ✔ Istio core installed
    ✔ Istiod installed
    - Processing resources for Ingress gateways.                                                                                                                                                2021-09-22T23:35:32.529061Z      error   installer       failed to update resource with server-side apply for obj Service/istio-system/istio-ingressgateway: Service "istio-ingressgateway" is invalid: metadata.annotations: Invalid value: "container.apparmor.security.beta.kubernetes.io\\istio-proxy": name part must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName',  or 'my.name',  or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]')
    ✘ Ingress gateways encountered an error: failed to update resource with server-side apply for obj Service/istio-system/istio-ingressgateway: Service "istio-ingressgateway" is invalid: metadata.annotations: Invalid value: "container.apparmor.security.beta.kubernetes.io\\istio-proxy": name part must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName',  or 'my.name',  or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]')
    - Pruning removed resources                                                                                                                                                                 Error: failed to install manifests: errors occurred during operation
    

    If I use the manifest generate command, the generated k8s resources appear to be correct:

    
    istioctl profile dump default > "./operator.yaml"
    
    istioctl manifest generate `
        -f "./operator.yaml" `
        --set "components.ingressGateways[0].k8s.service.loadBalancerIP=1.2.3.4" `
        --set "components.ingressGateways[0].k8s.serviceAnnotations.service\.beta\.kubernetes\.io/azure-load-balancer-resource-group=rg-prod-wus2" 
    

    A side question: why does istioctl rewrite the annotation with a backslash instead of a forward slash?
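
    As an aside, the annotation can also be expressed in an IstioOperator overlay file, which sidesteps --set key escaping entirely; a minimal sketch (the file name and value are placeholders, and this is untested against the error above):

    # overlay.yaml (hypothetical)
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: true
          k8s:
            serviceAnnotations:
              service.beta.kubernetes.io/azure-load-balancer-resource-group: test1

    It would then be applied with istioctl install -f ./overlay.yaml --skip-confirmation.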

    Version

    client version: 1.11.2
    control plane version: 1.11.2
    data plane version: 1.11.2 (1 proxies)
    
    Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:16:05Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"windows/amd64"}
    Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"0b17c6315e806a66d507e77760a5d60ab5cccfd8", GitTreeState:"clean", BuildDate:"2021-08-30T01:42:22Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
    

    Additional Information

    No response

    kind/docs area/environments 
    opened by dahovey 1
  • Remove ClusterLocal structure from Service

    This was originally added to support a second set of hosts/VIPs for Kubernetes Multi-Cluster Services (MCS). We've since gone with a different approach, so this is no longer needed.

    This moves the hostname and cluster VIPs to the top level of Service.


    area/networking size/XXL cla: yes release-notes-none 
    opened by nmittler 2
  • Only apply Istio configs to config clusters

    Please provide a description of this PR:

    Fixes #35050

    size/S cla: yes release-notes-none 
    opened by ericvn 0
  • Clean-up VM telemetry tests to remove customization of environment

    As discussed in https://github.com/istio/istio/pull/35318#issuecomment-925196804, there is some redundancy in the test environments that may be leading to artificial results in testing. We should remove the customizations and validate that VM setup for telemetry works as expected with the workload suite of commands.

    [ X ] Extensions and Telemetry [ X ] Test and Release [ X ] Virtual Machine

    area/test and release area/extensions and telemetry feature/Virtual-machine 
    opened by douglas-reid 0
  • fix canonical revision label for workload groups

    An incorrect label was used in the workload command, leading to incorrectly labeled WorkloadGroups. This PR corrects the issue, first identified in #34395.

    • [ X ] Policies and Telemetry
    size/S cla: yes 
    opened by douglas-reid 6
  • Explicit protocol selection breaks DS HTTP egress

    Bug Description

    We are currently testing a dual-stack solution for Istio, loosely based on istio/istio#29076, i.e. by using :: ipv4_compat listeners serving both IP families for any/all wildcards (0.0.0.0 or ::). For testing we rebuilt pilot-agent with (only) the istio/istio#35310 changes to achieve IPv6 traffic interception and plugged it into the official istio/proxyv2:1.11.2 image.

    In the 1.10 release this had already been working quite well. Resuming the testing on 1.11, one direct HTTP egress case has started failing when explicit protocol selection is used (i.e. using name=http-port or appProtocol=http in the k8s service). Requests that would previously route to the correct endpoint now suddenly end up in BlackHoleCluster (with the REGISTRY_ONLY setting; otherwise Passthrough). What is more, when running an ip6tables-enabled 1.10-based client under the 1.11 control plane, it still manages to connect to the external HTTP service. Removing the port name/appProtocol fields from the k8s service immediately restores connectivity to the external service for the 1.11-based client.

    It appears that enabling explicit protocol selection removes the HttpProtocolOptions setting from the outbound service clusters and replaces the separate <SVC_IPv4>_80 / <SVC_IPv6>_80 listeners with a single wildcard 0.0.0.0_80 listener. But this behavior does not seem to be new and had previously worked when modifying the listener to :: ipv4_compat, as described.
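
    For context, explicit protocol selection can be triggered either by a protocol-prefixed port name or by the appProtocol field; a minimal sketch showing both forms on a hypothetical Service (names are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: httpbin-example   # placeholder
    spec:
      selector:
        app: httpbin
      ports:
      - name: http-port       # protocol inferred from the "http" name prefix
        port: 80
        targetPort: 80
      - name: alt             # protocol declared explicitly via appProtocol
        appProtocol: http
        port: 8080
        targetPort: 8080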

    Version

    $ istioctl version
    client version: 1.11.2
    control plane version: 1.11.2
    data plane version: 1.10.4 (1 proxies), 1.11.2 (1 proxies)
    $ kubectl version --short
    Client Version: v1.18.2
    Server Version: v1.21.2
    

    Additional Information

    DS cluster set up using kind and the following config file:

    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    networking:
      ipFamily: dual
    

    Istio deployed using Helm:

    $ helm install --namespace istio-system --create-namespace istio-base manifests/charts/base
    $ helm install --namespace istio-system istiod manifests/charts/istio-control/istio-discovery --set meshConfig.accessLogFile=/dev/stdout --set meshConfig.outboundTrafficPolicy.mode=REGISTRY_ONLY
    

    Test resources: an EnvoyFilter replacing the outbound 15001 listener (minimal solution, but we have actually replaced ALL wildcards with the same results):

    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: sidecar-outbound-ipv4compat-listener
    spec:
      configPatches:
      - applyTo: LISTENER
        match:
          context: SIDECAR_OUTBOUND
          listener:
            portNumber: 15001
        patch:
          operation: MERGE
          value:
            address:
              socket_address:
                address: "::"
                ipv4_compat: true
    

    Service, separately exposed via IPv4 and IPv6:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: httpbin
      name: httpbin
    spec:
      selector:
        matchLabels:
          app: httpbin
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "false"
          labels:
            app: httpbin
            sidecar.istio.io/inject: "false"
        spec:
          containers:
          - args:
            - -b
            - '[::]:80'
            - --access-logfile
            - '-'
            - httpbin:app
            command:
            - gunicorn
            image: kennethreitz/httpbin
            imagePullPolicy: Always
            name: httpbin
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: httpbin4
      name: httpbin4
    spec:
      ipFamilies:
      - IPv4
      ipFamilyPolicy: SingleStack
      ports:
      - name: http-port ## this works fine in the current test
        port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: httpbin
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: httpbin6
      name: httpbin6
    spec:
      ipFamilies:
      - IPv6
      ipFamilyPolicy: SingleStack
      ports:
      - name: http-port ## this causes the problem
        port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: httpbin
    

    ip6tables-enabled clients (1.10 and 1.11):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: client
      name: client
    spec:
      selector:
        matchLabels:
          app: client
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "true"
            sidecar.istio.io/proxyImage: <PRIVATE_REGISTRY>/proxyv2:1.11.2-h9554646
          labels:
            app: client
            sidecar.istio.io/inject: "true"
        spec:
          containers:
          - args:
            - 3650d
            command:
            - sleep
            image: curlimages/curl
            imagePullPolicy: Always
            name: curl
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: clientold
      name: clientold
    spec:
      selector:
        matchLabels:
          app: clientold
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "true"
            sidecar.istio.io/proxyImage: <PRIVATE_REGISTRY>/proxyv2:1.10.4-h50704d3
          labels:
            app: clientold
            sidecar.istio.io/inject: "true"
        spec:
          containers:
          - args:
            - 3650d
            command:
            - sleep
            image: curlimages/curl
            imagePullPolicy: Always
            name: curl
    

    Reproduction

    NAMESPACE=test
    CLIENT110=$(k get po -n "${NAMESPACE}" -l app=clientold -o name)
    CLIENT111=$(k get po -n "${NAMESPACE}" -l app=client -o name)
    
    k logs -n "${NAMESPACE}" "${CLIENT111}" -c istio-proxy --since=1s -f &
    k logs -n "${NAMESPACE}" "${CLIENT110}" -c istio-proxy --since=1s -f &
    
    k exec -n "${NAMESPACE}" "${CLIENT111}" -c curl -- curl -s http://httpbin4/status/418
        -=[ teapot ]=-
    ...
    [2021-09-22T15:31:35.170Z] "GET /status/418 HTTP/1.1" 418 - via_upstream - "-" 0 135 3 2 "-" "curl/7.78.0-DEV" "a7f4fd3f-7fa4-4530-aaf8-ad535728d38e" "httpbin4" "10.244.0.10:80" outbound|80||httpbin4.test.svc.cluster.local 10.244.0.18:40414 10.96.187.16:80 10.244.0.18:43840 - default
    k exec -n "${NAMESPACE}" "${CLIENT111}" -c curl -- curl -s http://httpbin6/status/418
    command terminated with exit code 56
    2021-09-22T15:57:40.796066Z	debug	envoy filter	original_dst: New connection accepted
    2021-09-22T15:57:40.796182Z	debug	envoy filter	[C8339] new tcp proxy session
    2021-09-22T15:57:40.796222Z	debug	envoy filter	[C8339] Creating connection to cluster BlackHoleCluster
    [2021-09-22T15:57:40.796Z] "- - -" 0 UH - - "-" 0 0 0 - "-" "-" "-" "-" "-" BlackHoleCluster - [fd00:10:96::c6a5]:80 [fd00:10:244::12]:35630 - -
    
    k exec -n "${NAMESPACE}" "${CLIENT110}" -c curl -- curl -s http://httpbin4/status/418
        -=[ teapot ]=-
    ...
    [2021-09-22T15:33:42.301Z] "GET /status/418 HTTP/1.1" 418 - via_upstream - "-" 0 135 1 1 "-" "curl/7.78.0-DEV" "b4618d63-dde1-44a9-a8db-cff2dc9dd25e" "httpbin4" "10.244.0.10:80" outbound|80||httpbin4.test.svc.cluster.local 10.244.0.19:33772 10.96.187.16:80 10.244.0.19:37750 - default
    k exec -n "${NAMESPACE}" "${CLIENT110}" -c curl -- curl -s http://httpbin6/status/418
        -=[ teapot ]=-
    ...
    [2021-09-22T15:33:49.741Z] "GET /status/418 HTTP/1.1" 418 - via_upstream - "-" 0 135 1 0 "-" "curl/7.78.0-DEV" "19c67766-b786-4b76-900a-01b00549b4b0" "httpbin6" "[fd00:10:244::a]:80" outbound|80||httpbin6.test.svc.cluster.local [fd00:10:244::13]:56000 [fd00:10:96::c6a5]:80 [fd00:10:244::13]:60424 - default
    
    k edit svc -n "${NAMESPACE}" httpbin4 # remove port name
    k edit svc -n "${NAMESPACE}" httpbin6 # both MUST be changed to see the problem!!
    
    k exec -n "${NAMESPACE}" "${CLIENT111}" -c curl -- curl -s http://httpbin4/status/418
        -=[ teapot ]=-
    ...
    [2021-09-22T15:36:53.316Z] "GET /status/418 HTTP/1.1" 418 - via_upstream - "-" 0 135 1 1 "-" "curl/7.78.0-DEV" "d9d49c3f-1fe1-4459-ade2-a62344c2ad98" "httpbin4" "10.244.0.10:80" outbound|80||httpbin4.test.svc.cluster.local 10.244.0.18:54818 10.96.187.16:80 10.244.0.18:58244 - default
    k exec -n "${NAMESPACE}" "${CLIENT111}" -c curl -- curl -s http://httpbin6/status/418
        -=[ teapot ]=-
    ...
    [2021-09-22T15:36:59.511Z] "GET /status/418 HTTP/1.1" 418 - via_upstream - "-" 0 135 3 2 "-" "curl/7.78.0-DEV" "4d1f75ea-ca9b-4da1-8887-ad7707fd2c8a" "httpbin6" "[fd00:10:244::a]:80" outbound|80||httpbin6.test.svc.cluster.local [fd00:10:244::12]:57726 [fd00:10:96::c6a5]:80 [fd00:10:244::12]:35988 - default
    
    
    k exec -n "${NAMESPACE}" "${CLIENT110}" -c curl -- curl -s http://httpbin4/status/418
        -=[ teapot ]=-
    ...
    [2021-09-22T15:37:05.464Z] "GET /status/418 HTTP/1.1" 418 - via_upstream - "-" 0 135 3 2 "-" "curl/7.78.0-DEV" "19a6f324-e791-4829-8cf3-eccc4a8b79f1" "httpbin4" "10.244.0.10:80" outbound|80||httpbin4.test.svc.cluster.local 10.244.0.19:42986 10.96.187.16:80 10.244.0.19:46964 - default
    k exec -n "${NAMESPACE}" "${CLIENT110}" -c curl -- curl -s http://httpbin6/status/418
        -=[ teapot ]=-
    ...
    [2021-09-22T15:37:10.349Z] "GET /status/418 HTTP/1.1" 418 - via_upstream - "-" 0 135 4 2 "-" "curl/7.78.0-DEV" "65a39f61-f58a-4257-9c84-7f57bfcb5985" "httpbin6" "[fd00:10:244::a]:80" outbound|80||httpbin6.test.svc.cluster.local [fd00:10:244::13]:36888 [fd00:10:96::c6a5]:80 [fd00:10:244::13]:41312 - default
    

    Proxy config dumps: named-port-config-1112.txt, unnamed-port-config-1104.txt, unnamed-port-config-1112.txt, named-port-config-1104.txt

    area/networking 
    opened by emike922 1
  • Unable to forward to external service via captureMode:NONE in v1.11

    Bug Description

    I'm attempting to set up forwarding of TCP traffic from a local port (e.g. 127.0.0.1:8080) to an external service using a Sidecar config with an egress rule configured with captureMode: NONE, along with a ServiceEntry for the service. This is basically following the 4th example in the Sidecar docs, where 127.0.0.1:3306 is forwarded to an external MySQL server.

    In an attempt to make this as easy to reproduce as possible, I set up a fresh kind cluster, installed Istio with the "demo" configuration, and deployed the "helloworld" service to start. I then found a service exposed online on a high port to test with, and used that for this example:

    apiVersion: v1
    kind: Service
    metadata:
      name: helloworld
      labels:
        app: helloworld
        service: helloworld
    spec:
      ports:
      - port: 5000
        name: http
      selector:
        app: helloworld
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: helloworld-v1
      labels:
        app: helloworld
        version: v1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: helloworld
          version: v1
      template:
        metadata:
          labels:
            app: helloworld
            version: v1
        spec:
          containers:
          - name: helloworld
            image: docker.io/istio/examples-helloworld-v1
            resources:
              requests:
                cpu: "100m"
            imagePullPolicy: IfNotPresent #Always
            ports:
            - containerPort: 5000
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: Sidecar
    metadata:
      name: default
      namespace: default
    spec:
      egress:
      - bind: 127.0.0.1
        captureMode: NONE
        port:
          number: 8080
          protocol: TCP
          name: web
        hosts:
        - "./www.asnt.org"
      - captureMode: IPTABLES
        hosts:
        - "*/*"
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: ServiceEntry
    metadata:
      name: external-svc
      namespace: default
    spec:
      resolution: DNS
      hosts:
      - www.asnt.org
      addresses:
      - 127.0.0.1
      exportTo: [.]
      location: MESH_EXTERNAL
      ports:
      - number: 8080
        name: web
        protocol: TCP
    

    On Istio v1.11.2, attempting to contact this service results in an "Empty reply" error from curl:

    $ kubectl exec helloworld-v1-776f57d5f6-dc4l5 -- curl http://127.0.0.1:8080
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
      0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (52) Empty reply from server
    command terminated with exit code 52
    

    However, after uninstalling Istio v1.11.2, then reinstalling with Istio v1.10.4 and replacing the pod, it works fine:

    $ kubectl exec helloworld-v1-776f57d5f6-6d62j -- curl http://127.0.0.1:8080
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100   300  100   300    0     0   4541      0 --:--:-- --:--:-- --:--:--  4615
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
    <html>
    <head>
    <title>ASNT</title>
    <meta http-equiv="REFRESH" content="0;url=http://www.asnt.org"></HEAD>
    <BODY>
    <p>ASNT Has re-arranged, we're re-directing you, hopefully, to the page you were looking for.</p>
    </BODY>
    </HTML>
    

    I double-checked that reinstalling back to v1.11.2 breaks things with no change in config. It appears this is a regression of some sort, or maybe a breaking change that wasn't documented.

    Digging a little deeper, the main difference on Istio v1.11.2 seems to be that it does not create a cluster for this service. This results in a "Cluster not found" error in the istio-proxy logs when attempting to hit 127.0.0.1:8080:

    2021-09-22T14:37:19.567965Z	debug	envoy filter	[C1201] new tcp proxy session
    2021-09-22T14:37:19.568008Z	debug	envoy filter	[C1201] Cluster not found outbound|8080||www.asnt.org
    2021-09-22T14:37:19.568021Z	debug	envoy connection	[C1201] closing data_to_write=0 type=1
    2021-09-22T14:37:19.568034Z	debug	envoy connection	[C1201] closing socket: 1
    

    Version

    $ istioctl version
    client version: 1.11.2
    control plane version: 1.11.2
    data plane version: 1.11.2 (3 proxies)
    
    $ kubectl version --short
    Client Version: v1.21.3
    Server Version: v1.21.1
    

    Additional Information

    Target cluster context: kind-kind

    Running with the following config:

    istio-namespace: istio-system
    full-secrets: false
    timeout (mins): 30
    include: { }
    exclude: { Namespaces: kube-system, kube-public, kube-node-lease, local-path-storage } AND { Namespaces: kube-system, kube-public, kube-node-lease, local-path-storage }
    end-time: 2021-09-22 09:49:47.301017 -0400 EDT

    The following Istio control plane revisions/versions were found in the cluster: Revision default: &version.MeshInfo{ { Component: "pilot", Info: version.BuildInfo{Version:"1.11.2", GitRevision:"96710172e1e47cee227e7e8dd591a318fdfe0326", GolangVersion:"", BuildStatus:"Clean", GitTag:"1.11.2"}, }, }

    The following proxy revisions/versions were found in the cluster: Revision default: Versions {1.11.2}

    Fetching proxy logs for the following containers:

    default/helloworld-v1/helloworld-v1-776f57d5f6-b4djm/helloworld default/helloworld-v1/helloworld-v1-776f57d5f6-b4djm/istio-proxy istio-system/istio-egressgateway/istio-egressgateway-6cb7bdc7fb-bzbj5/istio-proxy istio-system/istio-ingressgateway/istio-ingressgateway-694d8d7656-rdf9g/istio-proxy istio-system/istiod/istiod-6c68579c55-qwlw4/discovery

    Fetching Istio control plane information from cluster.

    The following fetches are still running: kubectl get --all-namespaces all,jobs,ingresses,endpoints,customresourcedefinitions,configmaps,events -o yaml kubectl get --all-namespaces authorizationpolicies.security.istio.io,destinationrules.networking.istio.io,envoyfilters.networking.istio.io,gateways.networking.istio.io,istiooperators.install.istio.io,peerauthentications.security.istio.io,requestauthentications.security.istio.io,serviceentries.networking.istio.io,sidecars.networking.istio.io,telemetries.telemetry.istio.io,virtualservices.networking.istio.io,workloadentries.networking.istio.io,workloadgroups.networking.istio.io -o yaml

    Running istio analyze on all namespaces; the report is below:

    Analysis Report:
      Warning [IST0134] (ServiceEntry external-svc.default) ServiceEntry addresses are required for this protocol.
      Info [IST0102] (Namespace istio-system) The namespace is not enabled for Istio injection. Run 'kubectl label namespace istio-system istio-injection=enabled' to enable it, or 'kubectl label namespace istio-system istio-injection=disabled' to explicitly mark it as not needing injection.

    Creating an archive at /Users/jbotelho2/src/istio-stacker/bug-report.tar.gz. Cleaning up temporary files in /var/folders/km/pm4z02s96qv3jjypt2gnc5v40000gr/T/bug-report. Done.

    bug-report.tar.gz

    area/networking 
    opened by jbotelho2-bb 2
  • [release-1.11] fix duplicate port in samples

    see #35312

    size/XS cla: yes release-notes-none 
    opened by stevenctl 0
  • [release-1.11] allow discoveryAddress to be set via proxyConfig for V…

    manual cherry-pick of #35290

    size/L cla: yes 
    opened by stevenctl 3