gRPC Server-side load balancing with Envoy proxy

A Kotlin-based gRPC client/server with Envoy as the load balancer. Envoy discovers the backend servers using EDS (Endpoint Discovery Service).

gRPC is an open source, high-performance RPC framework that runs anywhere.

In this blog, I am showcasing how Envoy proxy can be used to provide server-side load balancing. This is not a tutorial for gRPC or Envoy proxy; if you want to learn these technologies, Google is your friend. You can start with Learn Envoy, gRPC, and Protocol Buffers.

Setup

  • First, we need to define the protobuf messages that will serve as the contract between the client and the server. Please refer to event.proto
syntax  = "proto3";

import "google/protobuf/empty.proto";

package event;

option java_package = "com.proto.event";
option java_multiple_files = true;


message Event {
    int32 event_id = 1;
    string event_name = 2;
    repeated string event_hosts = 3;
}

enum EVENT_TYPE {
    UNDECLARED = 0;
    BIRTHDAY = 1;
    MARRIAGE = 2;
}

message CreateEventResponse{
    string success = 1;
}

message AllEventsResponse{
    Event event = 1;
}

service EventsService{
    rpc CreateEvent(Event) returns (CreateEventResponse) {};
    rpc AllEvents(google.protobuf.Empty) returns (stream AllEventsResponse) {};
}
  • This message will then be used by the Gradle gRPC plugin to generate stubs (under the com.proto.event package), which are used by both the client and the server. You can run Gradle's generateProto task to generate the stubs; a sketch of the relevant build configuration is shown below.
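The repository's actual build file is not reproduced here. The following is a minimal build.gradle.kts sketch, assuming the protobuf Gradle plugin 0.9.x and illustrative dependency versions; the project's real build script (and versions) will differ:

plugins {
    kotlin("jvm") version "1.9.24"                 // illustrative versions throughout
    id("com.google.protobuf") version "0.9.4"
}

repositories {
    mavenCentral()
}

dependencies {
    implementation("io.grpc:grpc-netty-shaded:1.62.2")
    implementation("io.grpc:grpc-protobuf:1.62.2")
    implementation("io.grpc:grpc-stub:1.62.2")
    implementation("com.google.protobuf:protobuf-java:3.25.3")
}

protobuf {
    // protoc compiles event.proto; the grpc plugin emits the EventsServiceGrpc stub classes
    protoc { artifact = "com.google.protobuf:protoc:3.25.3" }
    plugins {
        create("grpc") { artifact = "io.grpc:protoc-gen-grpc-java:1.62.2" }
    }
    generateProtoTasks {
        all().forEach { task ->
            task.plugins { create("grpc") }
        }
    }
}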
  • Now it is time to write the server.
import io.grpc.ServerBuilder

fun main() {
    val eventServer = ServerBuilder.forPort(50051)
            .addService(EventsServiceImpl()) // refer to the server implementation
            .build()

    eventServer.start()
    println("Event Server is Running now!")

    // Shut the server down cleanly when the JVM exits
    Runtime.getRuntime().addShutdownHook(Thread {
        eventServer.shutdown()
    })

    eventServer.awaitTermination()
}
  • Once the server boilerplate is in place (see the previous step), we write the server's business logic.
    override fun createEvent(request: Event?, responseObserver: StreamObserver<CreateEventResponse>?) {
        println("Event Created ")
        responseObserver?.onNext(CreateEventResponse.newBuilder().setSuccess("true").build())
        responseObserver?.onCompleted()
    }
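The proto also declares AllEvents, a server-streaming RPC. Its implementation in the repository is not shown above; a minimal sketch of what the companion override inside EventsServiceImpl could look like, using a hypothetical in-memory event list (com.google.protobuf.Empty is assumed to be imported):

    // Sketch only: stream every known event back to the caller, then complete the stream.
    override fun allEvents(request: Empty?, responseObserver: StreamObserver<AllEventsResponse>?) {
        val knownEvents = listOf(
                Event.newBuilder().setEventId(1).setEventName("Event 001").build(),
                Event.newBuilder().setEventId(2).setEventName("Event 002").build())

        knownEvents.forEach { event ->
            responseObserver?.onNext(AllEventsResponse.newBuilder().setEvent(event).build())
        }
        responseObserver?.onCompleted()
    }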
  • Let's write a client to consume our events service.
import com.proto.event.Event
import com.proto.event.EventsServiceGrpc
import io.grpc.ManagedChannelBuilder

fun main(args: Array<String>) {
    // The client talks to the Envoy listener (port 8080), not to a backend server directly
    val eventsChannel = ManagedChannelBuilder.forAddress("10.0.0.112", 8080)
            .usePlaintext()
            .build()

    val eventServiceStub = EventsServiceGrpc.newBlockingStub(eventsChannel)

    eventServiceStub.createEvent(Event.newBuilder().setEventId(1).setEventName("Event 001").build())

    eventsChannel.shutdown()
}
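The same blocking stub can also consume the server-streaming AllEvents RPC. A minimal sketch, assuming com.google.protobuf.Empty is imported and the stub from the snippet above:

eventServiceStub.allEvents(Empty.getDefaultInstance()).forEach { response ->
    // The blocking stub returns an iterator over the streamed responses
    println("Received event: ${response.event.eventName}")
}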
  • I have copied the server code into another file and changed the port number to mimic multiple instances of our events service.
  • The Envoy proxy configuration has three parts. All of these settings are in envoy.yaml.
    • A frontend service. This service receives requests from the clients.
     listeners:
        - name: envoy_listener
          address:
            socket_address: { address: 0.0.0.0, port_value: 8080 }
          filter_chains:
            - filters:
                - name: envoy.http_connection_manager
                  config:
                    stat_prefix: ingress_http
                    codec_type: AUTO
                    route_config:
                      name: local_route
                      virtual_hosts:
                        - name: local_service
                          domains: ["*"]
                          routes:
                            - match: { prefix: "/" }
                              route: { cluster: grpc_service }
                    http_filters:
                      - name: envoy.router
     
    • A backend service (named grpc_service in the envoy.yaml file). The frontend service will load-balance calls across this set of servers. Note that this cluster does not know the location of the actual backend servers; service discovery is provided via an EDS service (see the next bullet point).
        - name: grpc_service
          connect_timeout: 5s
          lb_policy: ROUND_ROBIN
          http2_protocol_options: {}
          type: EDS
          eds_cluster_config:
            eds_config:
              api_config_source:
                api_type: REST
                cluster_names: [eds_cluster]
                refresh_delay: 5s
    • An EDS endpoint (optional; you can also provide a fixed list of servers). This is another service that provides the list of backend endpoints, so Envoy can dynamically adjust to the available servers. I have written this EDS service as a simple class; a rough sketch of the idea follows the configuration snippet below.
        - name: eds_cluster
          connect_timeout: 5s
          type: STATIC
          hosts: [{ socket_address: { address: 10.0.0.112, port_value: 7070 }}]
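The EDS service is just an HTTP endpoint that returns the current set of backend addresses. As a rough, hard-coded sketch of the idea (the repository's actual EDSServer class may differ; the endpoint path follows Envoy's v2 REST-JSON EDS API, and the backend ports are illustrative):

import com.sun.net.httpserver.HttpServer
import java.net.InetSocketAddress

fun main() {
    // Hard-coded DiscoveryResponse listing two backend instances for the grpc_service cluster
    val discoveryResponse = """
        {
          "version_info": "1",
          "resources": [{
            "@type": "type.googleapis.com/envoy.api.v2.ClusterLoadAssignment",
            "cluster_name": "grpc_service",
            "endpoints": [{
              "lb_endpoints": [
                { "endpoint": { "address": { "socket_address": { "address": "10.0.0.112", "port_value": 50051 }}}},
                { "endpoint": { "address": { "socket_address": { "address": "10.0.0.112", "port_value": 50052 }}}}
              ]
            }]
          }]
        }
    """.trimIndent()

    val server = HttpServer.create(InetSocketAddress(7070), 0)
    server.createContext("/v2/discovery:endpoints") { exchange ->
        val body = discoveryResponse.toByteArray()
        exchange.responseHeaders.add("Content-Type", "application/json")
        exchange.sendResponseHeaders(200, body.size.toLong())
        exchange.responseBody.use { it.write(body) }
    }
    server.start()
    println("EDS server listening on port 7070")
}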

Execution

  • Clone the project locally.
git clone https://github.com/masoodfaisal/grpc-example.git
  • Build the project using Gradle
./gradlew generateProto
./gradlew build
  • Run the EDS Server
./gradlew -PmainClass=com.faisal.eds.EDSServerKt execute
  • Run the first and second instances of the server
./gradlew -PmainClass=com.faisal.grpc.server.EventServerKt execute
./gradlew -PmainClass=com.faisal.grpc.server.EventServer2Kt execute
  • Run the Envoy proxy
cd envoy-docker
docker build -t envoy:grpclb .
docker run -p 9090:9090 -p 8080:8080 envoy:grpclb 
  • Run the client multiple times and you can see the calls being distributed in a round-robin fashion
./gradlew -PmainClass=com.faisal.grpc.client.EventClientKt execute
