gRPC

Based on the book Practical gRPC.
Examples are based on SWAPI (SWAPI = Star Wars API); code at backstopmedia/gRPC-book-example.

C6 - Streaming gRPC calls
- 3 methods of streaming are supported:
  - Server streaming - the server sends records to a client
  - Client streaming - the server is the recipient
  - Bidirectional streaming, aka bidi
- There is an example of each streaming model.
- The key difference is the use of the keyword stream in the service definition.
- Server streaming:
  - The stream is provided by iterating over the source; for each record to be streamed, the send operation on the generated stub is invoked.
  - Note the capture of an error code on each send - error responses can be triggered by events such as the client breaking the connection.
  - Once the stream is complete, the operation can exit.
- Client streaming:
  - Here the first difference is that the keyword stream is associated with the request, not the response.
  - The server opens the connection and connects a buffered feed to the generated stub framework, then iterates, calling recv to take each payload until the stream ends with an end-of-file.
  - Once the end-of-file is identified, the response is constructed and sent.
- Bidirectional streaming:
  - The key difference here is that rather than using SendAndClose, the Send operation is used.

C7 - Advanced gRPC
- How are errors encoded from servers to clients?
  - A server reporting status uses the google.rpc.Status message definition.
  - The code reflects the standard gRPC error code.
  - The message is for system-level message information - don't use it for user-centric messaging.
  - The details part allows the incorporation of any message definition.
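For reference, the google.rpc.Status message that carries this code/message/details structure is defined (in google/rpc/status.proto) as:

```proto
// google/rpc/status.proto (abridged)
message Status {
  int32 code = 1;                           // one of the standard gRPC status codes
  string message = 2;                       // developer-facing, not user-facing
  repeated google.protobuf.Any details = 3; // arbitrary additional messages
}
```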
- Therefore more semantic meaning can be communicated by defining suitable messages and adding them to this structure.
- When a gRPC server responds, it includes the status of the response in the headers:
  - In normal conditions you would see :status with a value of 200 - relating back to standard HTTP header definitions.
  - The gRPC-specific status is included using the grpc-status header value, reflecting the 17 codes defined (0-16).
  - The additional details are then added as grpc-status-details-bin and encoded as base64.
- Depending upon the language, additional elements may need to be included; to interact with the standard message it is necessary to use the standard helpers.
- When the message is received by the client, these values are mapped back to the standard message definition.
- gRPC interceptors:
  - When sending or receiving RPC calls it may be desirable to inject additional information or apply steps before processing a call, for example applying authentication.
  - The ability to do this uses an interceptor pattern.
  - Interceptors need to be registered against a call.
  - The operation sequence is as follows:
    - Check to see if an interceptor has been registered.
    - If an interceptor exists, pass the method name and message to the interceptor.
    - The interceptor actions the necessary task.
    - The stub takes the interceptor output (which could be a modified message) and continues as usual.
  - Interceptors will work for single messages or streams.
  - Logging is a good example of using interceptors.
  - Limitations:
    - Some implementations of the framework don't support rerunning failed calls - e.g. in Ruby, calling yield after an exception will trigger a new error.
    - Can't modify responses once received - e.g.
      changing the case of a value.

Authentication using JSON web tokens
- Primitives are provided for establishing SSL/TLS; mutual TLS is also possible.
- Authentication isn't offered OOTB.
- One option is to build an interceptor that pulls metadata to validate the JWT token:
  - Uses an env var to decide if authentication should be applied.
  - Retrieves the relevant values from the metadata using the context set by the gRPC request.
  - Finally validates the token.
- The client sets up the token, then makes the request, passing the metadata with the token and the core request object.

How to implement timeouts / RPC cancellation
- Client cancellation: requests are sent to the server, so it will know.
- This is implemented via an operation on the message; execute must be invoked to initiate the cancel comms to the server.
- Comes into its own as a capability when requests are parallelised, and the result of one request may mean you want to cancel the remaining requests.
- Another cancellation use case is 'aggressive hedging', where the same request gets sent to multiple servers, the first response is taken, and the remaining requests are then cancelled.
- In Go and Java the call is associated with a context, and cancelling the context will cancel all the associated requests.

C8 - HTTP/2 overview
- Google's SPDY project was the key input into HTTP/2.
- HTTP 1.1 improved on 1.0 with features like:
  - Keep-alive connections
  - Chunked encoded transfers
  - Byte serving of range requests
  - Request pipelining
- HTTP/2 over 1.1:
  - Payloads are binary, not text
  - Frames
  - Streams, with a life cycle of: idle, open, reserved, half closed, closed
  - Multiplexing
  - Flow control

C9 - Load Balancing
- Zookeeper as a lookaside load balancer.
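A lookaside balancer in this sense keeps the live backend list in an external registry (Zookeeper, in the book's example) and has each client ask the registry which address to use rather than holding its own list. A minimal in-memory sketch - all names are hypothetical and health checking is omitted:

```go
package main

import "sync"

// registry stands in for the lookaside store: it owns the address
// list and hands a backend to each client that asks.
type registry struct {
	mu    sync.Mutex
	addrs []string
	next  int
}

// pick hands out addresses round-robin; a real lookaside balancer
// would also factor in load reports and backend health.
func (r *registry) pick() string {
	r.mu.Lock()
	defer r.mu.Unlock()
	if len(r.addrs) == 0 {
		return ""
	}
	a := r.addrs[r.next%len(r.addrs)]
	r.next++
	return a
}
```

The point of the pattern is that balancing policy lives in one place (the registry) instead of being re-implemented in every client.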
C10 - Service evolution with gRPC
- Rules / constraints to work within to ensure compatibility.
- Binary and source compatibility:
  - The code generator creates 2 interfaces:
    - ABI - application binary interface
    - API - method signatures used by the consumer of the generated code
  - The ABI is only an issue when changing major versions of protoc.
  - Changing the protobuf definitions does impact the API:
    - Renaming or removing a method defined in the protobuf will obviously break the contract.
    - Changing the message types in the signature will also be contract breaking.
  - When not supplying a lib to handle the service consumption, you should ensure the languages allowed to be used are not broken.
- Consider just adding a version number.
- Use google.protobuf.Empty - this can then be replaced with a real message in the future.
- You can, without breaking, add:
  - Services
  - Methods
  - Message definitions
  - Fields
- Actions to avoid breaks as a result of adding fields etc:
  - New fields get given a default value.
  - If you want to know whether a primitive has just been given its default value, you can wrap the value - there is a default set of wrappers available.

C11 - gRPC gateway
- Can use a gRPC gateway to provide a proxy layer that offers a traditional REST-based implementation of the gRPC interface.
- This can be realised by incorporating an option with google.api.http and the URI it should reside on.
- Elements in the message definition can be included in the URI to show how the values map, e.g.
  option (google.api.http) = {
    get: "/sfapi/v1/films/{id}"
  };
- Mapping of HTTP and gRPC responses.
- It is also possible to define multiple endpoints for the same gRPC method by using additional_bindings in the options.
- protoc can also generate a Swagger representation.
- It is also possible to create custom protoc extensions.

C12 - Debugging gRPC
- CLI tools:
  - grpcnode - for quickly making servers and clients in JavaScript
  - grpcc - REPL client
  - grpc_cli - CLI tool
  - Evans - expressive universal gRPC (CLI) client
  - grpcurl - like cURL, but for gRPC: command-line tool for interacting with gRPC servers
  - danby - a gRPC proxy for the browser
  - docker-protoc - Dockerized protoc, grpc-gateway, and grpc_cli commands bundled with Google API libraries
  - prototool - useful "Swiss Army knife" for processing proto files
- If you don't have the IDL then it is still possible to decompile the message, although the result will not be as good:
  - protoc --decode_raw
  - konsumer/rawproto to extract the same sort of data as JSON or proto
- Related resources:
  - Docker container with all the necessary tools to generate all the supported languages with protoc
  - Protocol Buffers web site

gRPC basics
- Terminology.
- Compiling the protocol:
  - Need to use protoc.
  - Need to understand specifics for the language - Go code generation needs a plugin to be installed and additional output arguments.
- gRPC has a status capability:
  - OOTB there are 17 error codes.
  - Status codes are not related to HTTP codes.
  - Codes run from 0 to 16; status code 0 = no error.
  - How the code is returned is dependent upon the library.
- Exposing services:
  - Once code is generated, different languages have different approaches to exposing the services.
  - Go generates code that supports service registration.
  - A server can only support 1 implementation of an interface - attempt more and a panic condition will be triggered.
  - Go example of registering a service.
- Shutdown and load balancing:
  - Before calling a graceful stop, you should remove the server from the pool of servers.
- The client connection process, like the server side, differs by language - example: establishing connectivity in Go.
- With a stub in existence, we can now call the stub, which will invoke the server.
- It is possible to also exchange metadata:
  - Metadata is represented as key-value pairs; both parts are string values.
  - A direct analogue to HTTP request and response headers.
  - Often used to address cross-cutting concerns rather than adding multiple extra fields to the body of each request.
  - Content-type in the header is application/grpc; it may include a sub content type, e.g. +json or +proto.
  - Transfer-encoding will be included.
  - Some headers are reserved:
    - HTTP/2 reserves headers starting with :
    - gRPC reserves the prefix grpc- for internal use, e.g. grpc-status, grpc-timeout
  - Metadata keys ending with -bin are assumed to denote binary values, which are encoded as base64 since binary values are not allowed in headers.
  - Metadata keys are all converted to lower case when they reach the network layer.
  - Example in Go interacting with metadata; Go example of handling the gRPC call; the client side would look like the book's example.
- Constructing a response, then executing an operation having built the response.

C3 - What are protocol buffers?
- They provide benefits that are often ignored or overlooked:
  - Bandwidth optimisation
  - Code generation
  - Formal contracts
- Protocol buffers are now on v3, which brings these changes:
  - Fields are optional by default, so the reserved word optional was dropped.
  - Grouped fields are no longer supported; it is better to use nested fields.
- Basic proto definitions - structure and content.
- Mapping between protobuf definitions and native language types.
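As an illustration of that mapping, here is a SWAPI-flavoured message with the common language bindings noted per field (the message and field names are assumed, not taken from the book):

```proto
syntax = "proto3";

message Film {
  string title = 1;               // Go: string,   Java: String,       Python: str
  int32 episode_id = 2;           // Go: int32,    Java: int,          Python: int
  repeated string characters = 3; // Go: []string, Java: List<String>, Python: sequence
  bool released = 4;              // Go: bool,     Java: boolean,      Python: bool
}
```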
- Messages can be nested to create structure.
- Associative structures are allowed, for example maps.
- Ability to describe parts as one of something (oneof).
- Values can be described as enumerations.
- Basic definition - service:
  - The actual operations are defined as services in the proto file, referencing the message structures.
- To separate your basic structures (messages) so they can be reused, it is possible to define import statements:
  - If you need to have cascading imports, then the import has to be declared as public.
  - The use of import public also assists when wanting to reorganise file dependencies; this can be done by:
    - Moving the proto file to its new location.
    - In the old location, creating a proto file that does an import public of the relocated proto file.
- It is considered good practice to put fields that are constantly used in a structure first; this helps with the encoding.
- Polymorphism can be done in different ways depending upon the version of proto:
  - Proto2 - use extensions.
  - Proto3 - use the Any statement.
  - How the language handles this differs.
- Packages:
  - Packages can be defined in proto files.
  - This impacts languages in different ways, e.g.:
    - C++ - packages become namespaces.
    - Java - the proto package becomes a Java package.
    - Python ignores it - it is only interested in the directory structure.
- Versioning:
  - What happens with deprecated fields? The tip is to create all fields as optional. Proto version 3 has adopted optional by default when defining fields; this avoids future deprecations raising errors.
  - What happens with new fields? The tip is to always add them at the end.
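Both tips can be shown in a before/after sketch (message and field names are assumed; the two versions would live in successive revisions of the same file, not side by side):

```proto
// v1
message Starship {
  string name = 1;
  int32 crew = 2;
}

// v2: crew was retired; its tag and name are reserved so they can
// never be reused, and the new field is appended with a fresh tag.
message Starship {
  string name = 1;
  reserved 2;
  reserved "crew";
  string model = 3;
}
```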
- Replacing old tags should be forbidden, so the association of a field with its tag number should always remain the same.
- From the proto file we can generate the necessary code, but we can define characteristics / language-specific actions using option declarations:
  - Package names:
    - option go_package = "backstopmedia/gRPC-book-example/com/starwars";
    - option java_package = "com.starwars";
  - Class names:
    - option php_class_prefix = "SWS";
    - option java_outer_classname = "StarWars";
  - Multiple files:
    - option java_multiple_files = true;
- Fields also have extra options:
  - Deprecation, e.g. optional int32 old_field = 1 [deprecated=true];
  - Packed (allows space optimisation for arrays where every element is of the same type), e.g. repeated int32 field = 1 [packed=true];
- For proto2 it is possible to create custom option definitions, but not in proto3.
- Encoding:
  - Each field is subject to binary encoding, which is done based on the data type.
  - Each encoding is then concatenated in the order of the definition.
  - The encoding applies different strategies to determine where each field starts and stops.
  - Varint works by taking the binary representation of the number, using 7 of the 8 bits in each byte for the number and the remaining bit to denote whether the byte is the last one representing the number. As a result the smallest space for a number is used.
  - Signed integers use a different approach to address the cost of the sign bit for negative values - ZigZag encoding.
  - Non-varint fixed-width types are encoded with little-endian byte ordering.
  - Strings are variable length, encoded with a length prefix (UTF-8).
  - Embedded messages are also length-delimited.
  - Protobuf should provide effective performance for:
    - Parsing
    - Adding new message fields
    - Ignoring / deprecating fields
  - The minimum payload is driven by the data, not the structural definition.
  - Support for hierarchical data structures.

C1 & C2 - Background
- The history of what the "g" stands for is documented in their main repo on GitHub.
- RPC = Remote Procedure Call.
- Not bound to HTTP etc in the way REST is.
- RPC characteristics:
  - Follows a programming paradigm of method name and parameters.
  - Typically language agnostic.
  - Interfaces defined using an IDL (IDL = Interface Definition Language).
  - Often includes code generation tools.
  - Often only uses a subset of HTTP, or may make direct use of TCP.
- gRPC is layered:
  - Above the channel is the stub layer:
    - Interface constraints defined.
    - Data types defined.
    - Whether the endpoint supports 1 call or a stream.
    - Data encoding.
    - Links the IDL to the channel.
  - Above HTTP/2 is the channel layer:
    - A thin abstraction on the transport.
    - Provides the means to associate service and method names.
    - Additional metadata.
    - A call and response is completed once 0 or more metadata values are provided, including success or failure response trailers.
    - A message is just a series of bytes.
  - HTTP/2 is the transport layer:
    - Same semantics as HTTP.
    - Benefits are the ability to multiplex parallel requests on the same network connection, and support for full-duplex communication.
- Protocol Buffers aka protobufs:
  - The IDL for defining services, methods, and messages.
- gRPC use cases:
  - Microservices.
  - Client-server applications.
  - Integrations and APIs - many Google services use gRPC as their API; tools make it easy to deliver REST+JSON using gRPC-gateway.
  - Browser-based web apps - superficially a poor fit as JavaScript can't use HTTP/2, but not an issue for browser XHRs.
- Benefits of gRPC over techniques such as REST+JSON:
  - Performance and efficiency.
  - Simple to understand, leading to greater productivity.
  - Streaming.
  - Security.
- gRPC officially supports ...
- Other RPC solutions / key evolutions:
  - Streaming is available with gRPC and others:
    - Streaming removes the issue of consuming a large payload before the next operation.
    - The client doesn't need to have the entire content in memory before passing it on - removing resource pressures.
  - Cap'n Proto:
    - Promise pipelining - subsequent requests can reference previous requests via IDs.
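The varint and ZigZag schemes described under the C3 encoding notes are easy to check by hand; a minimal sketch in Go (function names are mine, not from the book):

```go
package main

// encodeVarint produces the protobuf base-128 varint encoding of u:
// 7 payload bits per byte, least-significant group first, with the
// top bit set on every byte except the last.
func encodeVarint(u uint64) []byte {
	var out []byte
	for u >= 0x80 {
		out = append(out, byte(u)|0x80)
		u >>= 7
	}
	return append(out, byte(u))
}

// zigzag maps signed integers to unsigned ones so that values of
// small magnitude (positive or negative) stay small on the wire:
// 0→0, -1→1, 1→2, -2→3, ...
func zigzag(n int64) uint64 {
	return uint64((n << 1) ^ (n >> 63))
}
```

For example, encodeVarint(300) yields the two bytes 0xAC 0x02 (the example used in the protobuf wire-format documentation), while a plain two's-complement varint of -1 would need ten bytes - which is exactly the cost ZigZag avoids.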