Dataset columns: question (string, 11–28.2k chars), answer (string, 26–27.7k chars), tag (130 classes), question_id (int64, 935–78.4M), score (int64, 10–5.49k).
To facilitate working with Avro in Scala, I'd like to define a case class based on the schema stored with a .avro file. I could try: writing a .scala case class definition by hand; programmatically writing strings to a .scala file; spoofing the case class definition with a bytecode library like ObjectWeb's ASM; SpecificCompiler tricks; or modifying an existing case class definition at runtime. Thanks, any advice is appreciated. -Julian
I've been hacking on a little project called Scalavro to go the other way (Scala types to Avro schemas). It also gives you direct binary I/O. Simple Example: package com.gensler.scalavro.tests import com.gensler.scalavro.types.AvroType case class Person(name: String, age: Int) val personAvroType = AvroType[Person] personAvroType.schema which yields: { "name": "Person", "type": "record", "fields": [ {"name": "name", "type": "string"}, {"name": "age", "type": "int"} ], "namespace": "com.gensler.scalavro.tests" } There are many more examples on the project site (linked above) and in the scalatest specs. I have plans to host the binaries on Sonatype OSS in the near future, but for now you can pull the source from github and sbt publish-local if you want to try it out. Update: Scalavro is now available on Maven Central.
Avro
15,607,038
17
I am writing a Spark job using Python. However, I need to read in a whole bunch of Avro files. This is the closest solution that I have found in Spark's example folder. However, you need to submit this Python script using spark-submit. On the spark-submit command line you can specify the driver class, in which case all your AvroKey and AvroValue classes will be located. avro_rdd = sc.newAPIHadoopFile( path, "org.apache.avro.mapreduce.AvroKeyInputFormat", "org.apache.avro.mapred.AvroKey", "org.apache.hadoop.io.NullWritable", keyConverter="org.apache.spark.examples.pythonconverters.AvroWrapperToJavaConverter", conf=conf) In my case, I need to run everything within the Python script. I have tried to create an environment variable to include the jar file, hoping Python would add the jar to the path, but clearly it does not; it gives me an unexpected class error. os.environ['SPARK_SUBMIT_CLASSPATH'] = "/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/spark/examples/lib/spark-examples_2.10-1.0.0-cdh5.1.0.jar" Can anyone help me read an Avro file from within a single Python script?
Spark >= 2.4.0 You can use built-in Avro support. The API is backwards compatible with the spark-avro package, with a few additions (most notably the from_avro / to_avro functions). Please note that the module is not bundled with standard Spark binaries and has to be included using spark.jars.packages or an equivalent mechanism. See also Pyspark 2.4.0, read avro from kafka with read stream - Python Spark < 2.4.0 You can use the spark-avro library. First let's create an example dataset: import avro.schema from avro.datafile import DataFileReader, DataFileWriter from avro.io import DatumWriter schema_string ='''{"namespace": "example.avro", "type": "record", "name": "KeyValue", "fields": [ {"name": "key", "type": "string"}, {"name": "value", "type": ["int", "null"]} ] }''' schema = avro.schema.parse(schema_string) with open("kv.avro", "wb") as f, DataFileWriter(f, DatumWriter(), schema) as wrt: wrt.append({"key": "foo", "value": -1}) wrt.append({"key": "bar", "value": 1}) Reading it using spark-avro is as simple as this: df = sqlContext.read.format("com.databricks.spark.avro").load("kv.avro") df.show() ## +---+-----+ ## |key|value| ## +---+-----+ ## |foo| -1| ## |bar| 1| ## +---+-----+
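For the Spark >= 2.4.0 path, a minimal PySpark sketch (assuming spark is an existing SparkSession and the Avro module has been added, e.g. via --packages org.apache.spark:spark-avro_2.12:2.4.4, adjusted to your Spark/Scala version):

    # read and write Avro files with the built-in "avro" format
    df = spark.read.format("avro").load("kv.avro")
    df.show()
    df.write.format("avro").save("kv_copy")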
Avro
29,759,893
17
I am new to Hadoop and programming, and I am a little confused about Avro schema evolution. I will explain what I understand about Avro so far. Avro is a serialization tool that stores binary data with its json schema at the top. The schema looks like this. { "namespace":"com.trese.db.model", "type":"record", "doc":"This Schema describes about Product", "name":"Product", "fields":[ {"name":"product_id","type": "long"}, {"name":"product_name","type": "string","doc":"This is the name of the product"}, {"name":"cost","type": "float", "aliases":["price"]}, {"name":"discount","type": "float", "default":5} ] } Now my question is why we need evolution? I have read that we can use default in the schema for new fields; but if we add a new schema in the file, that earlier schema will be overwritten. We cannot have two schemas for a single file. Another question is, what are reader and writer schemas and how do they help?
If you have one avro file and you want to change its schema, you can rewrite that file with a new schema inside. But what if you have terabytes of avro files and you want to change their schema? Will you rewrite all of the data, every time the schema changes? Schema evolution allows you to update the schema used to write new data, while maintaining backwards compatibility with the schema(s) of your old data. Then you can read it all together, as if all of the data has one schema. Of course there are precise rules governing the changes allowed, to maintain compatibility. Those rules are listed under Schema Resolution. There are other use cases for reader and writer schemas, beyond evolution. You can use a reader as a filter. Imagine data with hundreds of fields, of which you are only interested in a handful. You can create a schema for that handful of fields, to read only the data you need. You can go the other way and create a reader schema which adds default data, or use a schema to join the schemas of two different datasets. Or you can just use one schema, which never changes, for both reading and writing. That's the simplest case.
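To make the reader-as-filter idea concrete, here is a hedged Java sketch (the file name is hypothetical; the projection schema reuses the Product record from the question but keeps only one field). When reading an Avro data file, the writer's schema comes from the file header, and each record is resolved against the reader schema you supply, so unwanted fields are simply skipped:

    import java.io.File;
    import org.apache.avro.Schema;
    import org.apache.avro.file.DataFileReader;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.DatumReader;

    Schema projection = new Schema.Parser().parse(
        "{\"namespace\":\"com.trese.db.model\",\"type\":\"record\",\"name\":\"Product\","
        + "\"fields\":[{\"name\":\"product_id\",\"type\":\"long\"}]}");

    DatumReader<GenericRecord> datumReader = new GenericDatumReader<>(projection);
    try (DataFileReader<GenericRecord> fileReader =
             new DataFileReader<>(new File("products.avro"), datumReader)) {
        for (GenericRecord rec : fileReader) {
            System.out.println(rec.get("product_id"));   // only this field is materialized
        }
    }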
Avro
39,135,471
17
According to this question on nesting Avro schemas, the right way to nest a record schema is as follows: { "name": "person", "type": "record", "fields": [ {"name": "firstname", "type": "string"}, {"name": "lastname", "type": "string"}, { "name": "address", "type": { "type" : "record", "name" : "AddressUSRecord", "fields" : [ {"name": "streetaddress", "type": "string"}, {"name": "city", "type": "string"} ] }, } ] } I don't like giving the field the name address and having to give a different name (AddressUSRecord) to the field's schema. Can I give the field and schema the same name, address? What if I want to use the AddressUSRecord schema in multiple other schemas, not just person? If I want to use AddressUSRecord in another schema, let's say business, do I have to name it something else? Ideally, I'd like to define AddressUSRecord in a separate schema, then let the type of address reference AddressUSRecord. However, it's not clear that Avro 1.8.1 supports this out-of-the-box. This 2014 article shows that sub-schemas need to be handled with custom code. What is the best way to define reusable schemas in Avro 1.8.1? Note: I'd like a solution that works with Confluent Inc.'s Schema Registry. There's a Google Groups thread that seems to suggest that Schema Registry does not play nice with schema references.
Can I give the field and schema the same name, address? Yes, you can name the record with the same name as the field name. What if I want to use the AddressUSRecord schema in multiple other schemas, not just person? You can use multiple schemas using a couple of techniques: the avro schema parser clients (JVM and others) allow you to specify multiple schemas, usually through the names parameter (the Java Schema$Parser/parse method allows multiple schema String arguments). You can then specify dependant Schemas as a named type: { "type": "record", "name": "Address", "fields": [ { "name": "streetaddress", "type": "string" }, { "name": "city", "type": "string" } ] } And run this through the parser before the parent schema: { "name": "person", "type": "record", "fields": [ { "name": "firstname", "type": "string" }, { "name": "lastname", "type": "string" }, { "name": "address", "type": "Address" } ] } Incidentally, this allows you to parse from separate files. Alternatively, you can also parse a single Union schema that references schemas in the same way: [ { "type": "record", "name": "Address", "fields": [ { "name": "streetaddress", "type": "string" }, { "name": "city", "type": "string" } ] }, { "type": "record", "name": "person", "fields": [ { "name": "firstname", "type": "string" }, { "name": "lastname", "type": "string" }, { "name": "address", "type": "Address" } ] } ] I'd like a solution that works with Confluent Inc.'s Schema Registry. The schema registry does not support parsing schemas separately, but it does support the latter example of parsing into a union type.
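A small Java sketch of the first technique (file names are illustrative): a single Schema.Parser instance remembers the names it has already seen, so parsing the Address schema first makes it available as a named type when the person schema is parsed.

    import java.io.File;
    import org.apache.avro.Schema;

    Schema.Parser parser = new Schema.Parser();
    // Parse the dependency first so "Address" becomes a known name...
    Schema address = parser.parse(new File("Address.avsc"));
    // ...then the schema that refers to it by name.
    Schema person = parser.parse(new File("person.avsc"));

(Schema.Parser.parse throws IOException when reading from a file, so wrap this in appropriate error handling.)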
Avro
40,854,529
17
I have some json data that looks like this: { "id": 1998983092, "name": "Test Name 1", "type": "search string", "creationDate": "2017-06-06T13:49:15.091+0000", "lastModificationDate": "2017-06-28T14:53:19.698+0000", "lastModifiedUsername": "[email protected]", "lockedQuery": false, "lockedByUsername": null } I am able to add the lockedQuery null value to a GenericRecord object without issue. GenericRecord record = new GenericData.Record(schema); if(json.isNull("lockedQuery")){ record.put("lockedQuery", null); } However, later when I attempt to write that GenericRecord object to an avro file I get a null pointer exception. File file = new File("~/test.arvo"); DatumWriter<GenericRecord> datumWriter = new GenericDatumWriter<>(schema); DataFileWriter<GenericRecord> dataFileWriter = new DataFileWriter<>(datumWriter); dataFileWriter.create(schema, file); for(GenericRecord record: masterList) { dataFileWriter.append(record); // NULL POINTER HERE } When I run that code I get the following exception. Any tips on how to process a null value into an Avro file much appreciated. Thanks in advance. java.lang.NullPointerException: null of boolean in field lockedQuery of com.mydomain.test1.domain.MyAvroRecord Exception in thread "main" java.lang.RuntimeException: org.apache.avro.file.DataFileWriter$AppendWriteException: java.lang.NullPointerException: null of boolean in field lockedQuery of com.mydomain.test1.domain.MyAvroRecord at com.mydomain.avro.App.main(App.java:198) Caused by: org.apache.avro.file.DataFileWriter$AppendWriteException: java.lang.NullPointerException: null of boolean in field lockedQuery of com.mydomain.test1.domain.MyAvroRecord at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:308) EDIT: here is the MyAvroRecord public class MyAvroRecord { long id; String name; String type; Date timestamp; Date lastModifcationDate; String lastModifiedUsername; Boolean lockedQuery;
To be able to set an Avro field to null you should allow this in the Avro schema, by adding null as one of the possible types of the field. Take a look at this example from the Avro documentation: { "type": "record", "name": "MyRecord", "fields" : [ {"name": "userId", "type": "long"}, // mandatory field {"name": "userName", "type": ["null", "string"]} // optional field ] } Here userName is declared as a union type which can be either null or string. This kind of definition allows the userName field to be set to null. In contrast, userId can only contain long values, hence an attempt to set userId to null will result in a NullPointerException.
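A minimal Java sketch of the fix, using the MyRecord schema above (the output file name is illustrative): once the field is a union with null, putting null into the GenericRecord and appending it to a DataFileWriter no longer throws.

    import java.io.File;
    import org.apache.avro.Schema;
    import org.apache.avro.file.DataFileWriter;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.DatumWriter;

    Schema schema = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"MyRecord\",\"fields\":["
        + "{\"name\":\"userId\",\"type\":\"long\"},"
        + "{\"name\":\"userName\",\"type\":[\"null\",\"string\"]}]}");

    GenericRecord rec = new GenericData.Record(schema);
    rec.put("userId", 42L);
    rec.put("userName", null);                    // allowed: the union includes null

    DatumWriter<GenericRecord> datumWriter = new GenericDatumWriter<>(schema);
    try (DataFileWriter<GenericRecord> fileWriter = new DataFileWriter<>(datumWriter)) {
        fileWriter.create(schema, new File("myrecord.avro"));
        fileWriter.append(rec);                   // no NullPointerException here
    }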
Avro
45,662,469
17
I have a JSON document that I would like to convert to Avro and need a schema to be specified for that purpose. Here is the JSON document for which I would like to define the avro schema: { "uid": 29153333, "somefield": "somevalue", "options": [ { "item1_lvl2": "a", "item2_lvl2": [ { "item1_lvl3": "x1", "item2_lvl3": "y1" }, { "item1_lvl3": "x2", "item2_lvl3": "y2" } ] } ] } I'm able to define the schema for the non-complex types but not for the complex "options" field: { "namespace" : "my.com.ns", "type" : "record", "fields" : [ {"name": "uid", "type": "int"}, {"name": "somefield", "type": "string"} {"name": "options", "type": .....} ] } Thanks for the help!
You need to use Avro complex types, specifically arrays and records. And then nest these together: { "namespace" : "my.com.ns", "name": "myrecord", "type" : "record", "fields" : [ {"name": "uid", "type": "int"}, {"name": "somefield", "type": "string"}, {"name": "options", "type": { "type": "array", "items": { "type": "record", "name": "lvl2_record", "fields": [ {"name": "item1_lvl2", "type": "string"}, {"name": "item2_lvl2", "type": { "type": "array", "items": { "type": "record", "name": "lvl3_record", "fields": [ {"name": "item1_lvl3", "type": "string"}, {"name": "item2_lvl3", "type": "string"} ] } }} ] } }} ] } Also, to improve readiblity, you can split the schema into multiple files.
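To sanity-check that the schema and the JSON document line up, the avro-tools jar can convert the JSON straight into an Avro file; the file names and jar version are illustrative:

    java -jar avro-tools-1.8.2.jar fromjson --schema-file options.avsc input.json > options.avro

Note this works directly here because the schema contains no unions; union values would have to use Avro's JSON encoding (wrapped in a type key).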
Avro
28,163,225
16
Is the Avro SpecificRecord (i.e. the generated java classes) compatible with schema evolution? I.e. if I have a source of Avro messages (in my case, kafka) and I want to deserialize those messages to a specificrecord, is it possible to do so safely? What I see: adding a field to the end of a schema works fine - it can deserialize OK to a specificrecord; adding a field to the middle does not - i.e. it breaks existing clients. Even if the messages are compatible, this is a problem. If I can find the new schema (using e.g. the confluent schema registry) I can deserialize to GenericRecord, but there doesn't seem to be a way to map from genericrecord to specificrecord of a different schema. MySpecificType message = (T) SpecificData.get().deepCopy(MySpecificType.SCHEMA$, genericMessage); Deepcopy is mentioned in various places but it uses the index so it doesn't work. Is there any safe way to map between two avro objects when you have both schemas and they are compatible? Even if I could map from genericrecord to genericrecord this would do, as I could then do the deepcopy trick to complete the job.
There are example tests here for specific data type conversion. It's all in the configuration 'specificDeserializerProps': https://github.com/confluentinc/schema-registry/blob/master/avro-serializer/src/test/java/io/confluent/kafka/serializers/KafkaAvroSerializerTest.java I added the following config and got the specific type out as wanted. HashMap<String, String> specificDeserializerProps = new HashMap<String, String>(); specificDeserializerProps.put(KafkaAvroDeserializerConfig.SCHEMA_REGISTRY_URL_CONFIG, "bogus"); specificDeserializerProps.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, "true"); specificAvroDeserializer = new KafkaAvroDeserializer(schemaRegistry, specificDeserializerProps); Hope that helps.
Avro
33,945,383
16
I am trying to create a Kafka Streams Application which processes Avro records, but I am getting the following error: Exception in thread "streams-application-c8031218-8de9-4d55-a5d0-81c30051a829-StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Deserialization exception handler is set to fail upon a deserialization error. If you would rather have the streaming pipeline continue after a deserialization error, please set the default.deserialization.exception.handler appropriately. at org.apache.kafka.streams.processor.internals.RecordDeserializer.deserialize(RecordDeserializer.java:74) at org.apache.kafka.streams.processor.internals.RecordQueue.addRawRecords(RecordQueue.java:91) at org.apache.kafka.streams.processor.internals.PartitionGroup.addRawRecords(PartitionGroup.java:117) at org.apache.kafka.streams.processor.internals.StreamTask.addRecords(StreamTask.java:567) at org.apache.kafka.streams.processor.internals.StreamThread.addRecordsToTasks(StreamThread.java:900) at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:801) at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:749) at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:719) Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1 Caused by: org.apache.kafka.common.errors.SerializationException: Unknown magic byte! I am not sure what is causing this error. I am just trying to get Avro records into the application first where they then will be processed and then output to another topic but it doesn't not seem to be working. I have included the code from the application below. Can anyone see why it is not working? Properties props = new Properties(); props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-application"); props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName()); props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName()); props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://localhost:8081"); props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); Serde<String> stringSerde = Serdes.String(); Serde<trackingReport> specificAvroTrackingReportSerde = new SpecificAvroSerde<trackingReport>(); specificAvroTrackingReportSerde.configure(Collections.singletonMap(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://localhost:8081"), false); StreamsBuilder builder = new StreamsBuilder(); KStream<String, trackingReport> inputreports = builder.stream("intesttopic", Consumed.with(stringSerde, specificAvroTrackingReportSerde)); KStream<String, trackingReport> outputreports = inputreports; String outputTopic = "outtesttopic"; outputreports.to(outputTopic, Produced.with(stringSerde, specificAvroTrackingReportSerde)); Topology topology = builder.build(); KafkaStreams streams = new KafkaStreams(topology, props); streams.start();
Unknown magic byte! means that your data does not adhere to the wire format that's expected for the Schema Registry. Or, in other words, the data you're trying to read is not Avro as expected by the Confluent Avro deserializer. You can expect the same error by running kafka-avro-console-consumer, by the way, so you may want to debug using that too. If you are sure your data is indeed Avro, and the schema is actually sent as part of the message (would need to see your producer code), then you shouldn't use the Confluent Avro deserializers, which expect a specific byte format in the message. Instead, you could use ByteArrayDeserializer and read the Avro record yourself, then pass it to the Apache Avro BinaryDecoder class. As a bonus, you can extract that logic into your own Deserializer class. Also, if the input topic is Avro, I don't think you should be using this property for reading strings: DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
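A hedged sketch of that manual route inside the Kafka Streams code from the question (the schema file name is hypothetical, builder and the Kafka Streams imports come from the question's code, and checked-exception handling around schema parsing is elided). It assumes the producer wrote plain Avro binary without the Confluent magic-byte/schema-id prefix:

    import java.io.File;
    import java.io.IOException;
    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.BinaryDecoder;
    import org.apache.avro.io.DecoderFactory;

    Schema schema = new Schema.Parser().parse(new File("trackingReport.avsc"));
    GenericDatumReader<GenericRecord> datumReader = new GenericDatumReader<>(schema);

    // Consume the raw bytes instead of letting the Confluent deserializer run.
    KStream<String, byte[]> raw =
        builder.stream("intesttopic", Consumed.with(Serdes.String(), Serdes.ByteArray()));

    KStream<String, GenericRecord> reports = raw.mapValues(bytes -> {
        try {
            BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(bytes, null);
            return datumReader.read(null, decoder);
        } catch (IOException e) {
            throw new RuntimeException("Failed to decode Avro value", e);
        }
    });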
Avro
53,835,446
16
If I serialize an object using a schema version 1, and later update the schema to version 2 (say by adding a field) - am I required to use schema version 1 when later deserializing the object? Ideally I would like to just use schema version 2 and have the deserialized object have the default value for the field that was added to the schema after the object was originally serialized. Maybe some code will explain better... schema1: {"type": "record", "name": "User", "fields": [ {"name": "firstName", "type": "string"} ]} schema2: {"type": "record", "name": "User", "fields": [ {"name": "firstName", "type": "string"}, {"name": "lastName", "type": "string", "default": ""} ]} using the generic non-code-generation approach: // serialize ByteArrayOutputStream out = new ByteArrayOutputStream(); Encoder encoder = EncoderFactory.get().binaryEncoder(out, null); GenericDatumWriter writer = new GenericDatumWriter(schema1); GenericRecord datum = new GenericData.Record(schema1); datum.put("firstName", "Jack"); writer.write(datum, encoder); encoder.flush(); out.close(); byte[] bytes = out.toByteArray(); // deserialize // I would like to not have any reference to schema1 below here DatumReader<GenericRecord> reader = new GenericDatumReader<GenericRecord>(schema2); Decoder decoder = DecoderFactory.get().binaryDecoder(bytes, null); GenericRecord result = reader.read(null, decoder); results in an EOFException. Using the jsonEncoder results in an AvroTypeException. I know it will work if I pass both schema1 and schema2 to the GenericDatumReader constructor, but I'd like to not have to keep a repository of all previous schemas and also somehow keep track of which schema was used to serialize each particular object. I also tried the code-gen approach, first serializing to a file using the User class generated from schema1: User user = new User(); user.setFirstName("Jack"); DatumWriter<User> writer = new SpecificDatumWriter<User>(User.class); FileOutputStream out = new FileOutputStream("user.avro"); Encoder encoder = EncoderFactory.get().binaryEncoder(out, null); writer.write(user, encoder); encoder.flush(); out.close(); Then updating the schema to version 2, regenerating the User class, and attempting to read the file: DatumReader<User> reader = new SpecificDatumReader<User>(User.class); FileInputStream in = new FileInputStream("user.avro"); Decoder decoder = DecoderFactory.get().binaryDecoder(in, null); User user = reader.read(null, decoder); but it also results in an EOFException. Just for comparison's sake, what I'm trying to do seems to work with protobufs... format: option java_outer_classname = "UserProto"; message User { optional string first_name = 1; } serialize: UserProto.User.Builder user = UserProto.User.newBuilder(); user.setFirstName("Jack"); FileOutputStream out = new FileOutputStream("user.data"); user.build().writeTo(out); add optional last_name to format, regen UserProto, and deserialize: FileInputStream in = new FileInputStream("user.data"); UserProto.User user = UserProto.User.parseFrom(in); as expected, user.getLastName() is the empty string. Can something like this be done with Avro?
Avro and Protocol Buffers have different approaches to handling versioning, and which approach is better depends on your use case. In Protocol Buffers you have to explicitly tag every field with a number, and those numbers are stored along with the fields' values in the binary representation. Thus, as long as you never change the meaning of a number in a subsequent schema version, you can still decode a record encoded in a different schema version. If the decoder sees a tag number that it doesn't recognise, it can simply skip it. Avro takes a different approach: there are no tag numbers, instead the binary layout is completely determined by the program doing the encoding — this is the writer's schema. (A record's fields are simply stored one after another in the binary encoding, without any tagging or separator, and the order is determined by the writer's schema.) This makes the encoding more compact, and saves you from having to manually maintain tags in the schema. But it does mean that for reading, you have to know the exact schema with which the data was written, or you won't be able to make sense of it. If knowing the writer's schema is essential for decoding Avro, the reader's schema is a layer of niceness on top of it. If you're doing code generation in a program that needs to read Avro data, you can do the codegen off the reader's schema, which saves you from having to regenerate it every time the writer's schema changes (assuming it changes in a way that can be resolved). But it doesn't save you from having to know the writer's schema. Pros & Cons Avro's approach is good in an environment where you have lots of records that are known to have the exact same schema version, because you can just include the schema in the metadata at the beginning of the file, and know that the next million records can all be decoded using that schema. This happens a lot in a MapReduce context, which explains why Avro came out of the Hadoop project. Protocol Buffers' approach is probably better for RPC, where individual objects are being sent over the network (as request parameters or return value). If you use Avro here, you may have different clients and different servers all with different schema versions, so you'd have to tag every binary-encoded blob with the Avro schema version it's using, and maintain a registry of schemas. At that point you might as well have used Protocol Buffers' built-in tagging.
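Coming back to the question's generic (non-code-generation) example: with raw binary data you do need the writer's schema (schema1), but you can resolve it against the reader's schema (schema2) so the missing lastName comes back as its declared default. A minimal sketch, reusing the bytes, schema1 and schema2 variables from the question:

    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.DatumReader;
    import org.apache.avro.io.Decoder;
    import org.apache.avro.io.DecoderFactory;

    // writer schema first, reader schema second -> Avro performs schema resolution
    DatumReader<GenericRecord> reader = new GenericDatumReader<>(schema1, schema2);
    Decoder decoder = DecoderFactory.get().binaryDecoder(bytes, null);
    GenericRecord result = reader.read(null, decoder);

    // result.get("firstName").toString() -> "Jack"
    // result.get("lastName").toString()  -> ""   (the default from schema2)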
Avro
12,165,589
15
i.e. is it possible to make field required similar to ProtoBuf: message SearchRequest { required string query = 1; }
All fields are required in Avro by default. As is mentioned in the official documentation, if you want to make something optional, you have to make it nullable by unioning its type with null, like this { "namespace": "example.avro", "type": "record", "name": "User", "fields": [ {"name": "name", "type": "string"}, {"name": "favorite_number", "type": ["int", "null"]}, {"name": "favorite_color", "type": ["string", "null"]} ] } In this example, name is required, favorite_number and favorite_color are optional. I recommend spending some more time with the documentation.
Avro
31,995,145
15
I need to mix the "record" type with the null type in a schema. "name":"specShape", "type":{ "type":"record", "name":"noSpecShape", "fields":[ { "name":"bpSsc", "type":"null", "default":null, "doc":"SampleValue: null" },... For example, for some data specShape may be null. So if I set the type to "name":"specShape", "type":{ "type":["record", "null"], "name":"noSpecShape", "fields":[ { "name":"bpSsc", "type":"null", "default":null, "doc":"SampleValue: null" },... it says No type: {"type":["record","null"]... But if I set the whole type to "name":"specShape", "type":[{ "type":"record", "name":"noSpecShape", "fields":[ { "name":"bpSsc", "type":"null", "default":null, "doc":"SampleValue: null" }, "null"],... it says Not in union [{"type":"record" How do I union these two types?
You had the right idea, you just need to include "null" in the higher level "type" array instead of inside the "fields" array (as in your third example). This is the schema for a nullable record: [ "null", { "type": "record", "name": "NoSpecShape", "fields": [ { "type": "null", "name": "bpSsc", "default": null } ] } ] You can also nest it anywhere a type declaration is expected, for example inside another record: { "type": "record", "name": "SpecShape", "fields": [ { "type": [ "null", { "type": "record", "name": "NoSpecShape", "fields": [ { "type": "null", "name": "bpSsc", "default": null } ] } ], "name": "shape" } ] } JSON-encoded instances of this last schema would look like: {"shape": null} {"shape": {"NoSpecShape": {"bpSsc": null}}}
Avro
36,321,616
15
My KafkaProducer is able to use KafkaAvroSerializer to serialize objects to my topic. However, KafkaConsumer.poll() returns deserialized GenericRecord instead of my serialized class. MyKafkaProducer KafkaProducer<CharSequence, MyBean> producer; try (InputStream props = Resources.getResource("producer.props").openStream()) { Properties properties = new Properties(); properties.load(props); properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, io.confluent.kafka.serializers.KafkaAvroSerializer.class); properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, io.confluent.kafka.serializers.KafkaAvroSerializer.class); properties.put("schema.registry.url", "http://localhost:8081"); MyBean bean = new MyBean(); producer = new KafkaProducer<>(properties); producer.send(new ProducerRecord<>(topic, bean.getId(), bean)); My KafkaConsumer try (InputStream props = Resources.getResource("consumer.props").openStream()) { properties.load(props); properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, io.confluent.kafka.serializers.KafkaAvroDeserializer.class); properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, io.confluent.kafka.serializers.KafkaAvroDeserializer.class); properties.put("schema.registry.url", "http://localhost:8081"); consumer = new KafkaConsumer<>(properties); } consumer.subscribe(Arrays.asList(topic)); try { while (true) { ConsumerRecords<CharSequence, MyBean> records = consumer.poll(100); if (records.isEmpty()) { continue; } for (ConsumerRecord<CharSequence, MyBean> record : records) { MyBean bean = record.value(); // <-------- This is throwing a cast Exception because it cannot cast GenericRecord to MyBean System.out.println("consumer received: " + bean); } } MyBean bean = record.value(); That line throws a cast Exception because it cannot cast GenericRecord to MyBean. I'm using kafka-client-0.9.0.1, kafka-avro-serializer-3.0.0.
KafkaAvroDeserializer supports SpecificData It's not enabled by default. To enable it: properties.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, true); KafkaAvroDeserializer does not support ReflectData Confluent's KafkaAvroDeserializer does not know how to deserialize using Avro ReflectData. I had to extend it to support Avro ReflectData: /** * Extends deserializer to support ReflectData. * * @param <V> * value type */ public abstract class ReflectKafkaAvroDeserializer<V> extends KafkaAvroDeserializer { private Schema readerSchema; private DecoderFactory decoderFactory = DecoderFactory.get(); protected ReflectKafkaAvroDeserializer(Class<V> type) { readerSchema = ReflectData.get().getSchema(type); } @Override protected Object deserialize( boolean includeSchemaAndVersion, String topic, Boolean isKey, byte[] payload, Schema readerSchemaIgnored) throws SerializationException { if (payload == null) { return null; } int schemaId = -1; try { ByteBuffer buffer = ByteBuffer.wrap(payload); if (buffer.get() != MAGIC_BYTE) { throw new SerializationException("Unknown magic byte!"); } schemaId = buffer.getInt(); Schema writerSchema = schemaRegistry.getByID(schemaId); int start = buffer.position() + buffer.arrayOffset(); int length = buffer.limit() - 1 - idSize; DatumReader<Object> reader = new ReflectDatumReader(writerSchema, readerSchema); BinaryDecoder decoder = decoderFactory.binaryDecoder(buffer.array(), start, length, null); return reader.read(null, decoder); } catch (IOException e) { throw new SerializationException("Error deserializing Avro message for id " + schemaId, e); } catch (RestClientException e) { throw new SerializationException("Error retrieving Avro schema for id " + schemaId, e); } } } Define a custom deserializer class which deserializes to MyBean: public class MyBeanDeserializer extends ReflectKafkaAvroDeserializer<MyBean> { public MyBeanDeserializer() { super(MyBean.class); } } Configure KafkaConsumer to use the custom deserializer class: properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, MyBeanDeserializer.class);
Avro
39,606,026
15
I am trying to create a union field in an Avro schema and send a corresponding JSON message with it, but with one of the fields set to null. https://avro.apache.org/docs/1.8.2/spec.html#Unions What is an example of the simplest union type (Avro schema) with corresponding JSON data? (I'm trying to make one example without null/empty data and one with null/empty data.)
Here you have an example. Null enum {"name": "Stephanie", "age": 30, "sex": "female", "myenum": null} Not null enum {"name": "Stephanie", "age": 30, "sex": "female", "myenum": "HEARTS"} Schema { "type": "record", "name": "Test", "namespace": "com.acme", "fields": [{ "name": "name", "type": "string" }, { "name": "age", "type": "int" }, { "name": "sex", "type": "string" }, { "name": "myenum", "type": ["null", { "type": "enum", "name": "Suit", "symbols": ["SPADES", "HEARTS", "DIAMONDS", "CLUBS"] } ] } ] }
Avro
50,283,736
15
I would like to use the kafka-avro-console-producer with the schema registry. I have big schemas (over 10k chars) and I can't really pass them as a command line argument. Besides that, I'd like to use the schema registry directly so I can use a specific schema id. I'm thinking about something like this, but it doesn't work: kafka-avro-console-producer \ --broker-list <broker-list> \ --topic <topic> \ --property schema.registry.url=http://localhost:8081 \ --property value.schema=`curl http://localhost:8081/schemas/ids/419`
For the current version of the CLI tool kafka-avro-console-producer \ --broker-list <broker-list> \ --topic <topic> \ --property schema.registry.url=http://localhost:8081 \ --property value.schema.id=419 For older version You'll need to extract the schema from the API request using jq, for example value.schema="$(curl http://localhost:8081/schemas/ids/419 | jq -r .schema)"
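In both cases the producer then reads one JSON-encoded record per line from stdin, so you can pipe data straight in (the record below is only a placeholder and has to match the registered schema):

    echo '{"myField": "some value"}' | kafka-avro-console-producer \
      --broker-list <broker-list> \
      --topic <topic> \
      --property schema.registry.url=http://localhost:8081 \
      --property value.schema.id=419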
Avro
59,582,230
15
There are a lot of questions and answers on stackoverflow on the subject, but no one that helps. I have a schema with optional value: { "type" : "record", "name" : "UserSessionEvent", "namespace" : "events", "fields" : [ { "name" : "username", "type" : "string" }, { "name" : "errorData", "type" : [ "null", "string" ], "default" : null }] } And I'm trying deserialize json w/o this field: { "username" : "2271AE67-34DE-4B43-8839-07216C5D10E1", "errorData" : { "string":"070226AC-9B91-47CE-85FE-15AA17972298"} } using code: val reader = new GenericDatumReader[GenericRecord](schema) val decoder = DecoderFactory.get().jsonDecoder(schema, json) reader.read(null, decoder) and I got: org.apache.avro.AvroTypeException: Expected field name not found: errorData The only way that works is json { "username" : "2271AE67-34DE-4B43-8839-07216C5D10E1", "errorData" : null } Is there a way to deserialize json w/o this field? Another question: when this field is here, I should write { "username" : "2271AE67-34DE-4B43-8839-07216C5D10E1", "errorData" : { "string":"070226AC-9B91-47CE-85FE-15AA17972298"} } Is there a way to deserialize a "normal" json: { "username" : "2271AE67-34DE-4B43-8839-07216C5D10E1", "errorData" : "070226AC-9B91-47CE-85FE-15AA17972298" } ?
Case 1 is working fine in Java: { "username" : "2271AE67-34DE-4B43-8839-07216C5D10E1", "errorData" : { "string":"070226AC-9B91-47CE-85FE-15AA17972298"} } For case 2: your schema is defined with a union. You can update your schema as below to deserialize this JSON: { "username" : "2271AE67-34DE-4B43-8839-07216C5D10E1", "errorData" : "070226AC-9B91-47CE-85FE-15AA17972298" } { "type" : "record", "name" : "UserSessionEvent", "namespace" : "events", "fields" : [ { "name" : "username", "type" : "string" }, { "name" : "errorData", "type" : "string" , "default" : null }] }
Avro
38,824,456
14
Is there a way to use a schema to convert avro messages from kafka with spark to dataframe? The schema file for user records: { "fields": [ { "name": "firstName", "type": "string" }, { "name": "lastName", "type": "string" } ], "name": "user", "type": "record" } And code snippets from SqlNetworkWordCount example and Kafka, Spark and Avro - Part 3, Producing and consuming Avro messages to read in messages. object Injection { val parser = new Schema.Parser() val schema = parser.parse(getClass.getResourceAsStream("/user_schema.json")) val injection: Injection[GenericRecord, Array[Byte]] = GenericAvroCodecs.toBinary(schema) } ... messages.foreachRDD((rdd: RDD[(String, Array[Byte])]) => { val sqlContext = SQLContextSingleton.getInstance(rdd.sparkContext) import sqlContext.implicits._ val df = rdd.map(message => Injection.injection.invert(message._2).get) .map(record => User(record.get("firstName").toString, records.get("lastName").toString)).toDF() df.show() }) case class User(firstName: String, lastName: String) Somehow I can't find another way than using a case class to convert AVRO messages to DataFrame. Is there a possibility to use the schema instead? I'm using Spark 1.6.2 and Kafka 0.10. The complete code, in case you're interested. import com.twitter.bijection.Injection import com.twitter.bijection.avro.GenericAvroCodecs import kafka.serializer.{DefaultDecoder, StringDecoder} import org.apache.avro.Schema import org.apache.avro.generic.GenericRecord import org.apache.spark.rdd.RDD import org.apache.spark.sql.SQLContext import org.apache.spark.streaming.kafka._ import org.apache.spark.streaming.{Seconds, StreamingContext, Time} import org.apache.spark.{SparkConf, SparkContext} object ReadMessagesFromKafka { object Injection { val parser = new Schema.Parser() val schema = parser.parse(getClass.getResourceAsStream("/user_schema.json")) val injection: Injection[GenericRecord, Array[Byte]] = GenericAvroCodecs.toBinary(schema) } def main(args: Array[String]) { val brokers = "127.0.0.1:9092" val topics = "test" // Create context with 2 second batch interval val sparkConf = new SparkConf().setAppName("ReadMessagesFromKafka").setMaster("local[*]") val ssc = new StreamingContext(sparkConf, Seconds(2)) // Create direct kafka stream with brokers and topics val topicsSet = topics.split(",").toSet val kafkaParams = Map[String, String]("metadata.broker.list" -> brokers) val messages = KafkaUtils.createDirectStream[String, Array[Byte], StringDecoder, DefaultDecoder]( ssc, kafkaParams, topicsSet) messages.foreachRDD((rdd: RDD[(String, Array[Byte])]) => { val sqlContext = SQLContextSingleton.getInstance(rdd.sparkContext) import sqlContext.implicits._ val df = rdd.map(message => Injection.injection.invert(message._2).get) .map(record => User(record.get("firstName").toString, records.get("lastName").toString)).toDF() df.show() }) // Start the computation ssc.start() ssc.awaitTermination() } } /** Case class for converting RDD to DataFrame */ case class User(firstName: String, lastName: String) /** Lazily instantiated singleton instance of SQLContext */ object SQLContextSingleton { @transient private var instance: SQLContext = _ def getInstance(sparkContext: SparkContext): SQLContext = { if (instance == null) { instance = new SQLContext(sparkContext) } instance } }
Please take a look at this https://github.com/databricks/spark-avro/blob/master/src/test/scala/com/databricks/spark/avro/AvroSuite.scala So instead of val df = rdd.map(message => Injection.injection.invert(message._2).get) .map(record => User(record.get("firstName").toString,records.get("lastName").toString)).toDF() you can try this val df = spark.read.avro(message._2.get)
Avro
39,049,648
14
There are at least two different ways of creating a hive table backed by Avro data: Creating a table based on an Avro schema (in this example, stored in hdfs): CREATE TABLE users_from_avro_schema ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe' STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat' TBLPROPERTIES ('avro.schema.url'='hdfs:///user/root/avro/schema/user.avsc'); Creating a table by specifying hive columns explicitly with STORED AS AVRO clause: CREATE TABLE users_stored_as_avro( id INT, name STRING ) STORED AS AVRO; Am I correct that in the first case the metadata of users_from_avro_schema table are not stored in Hive Metastore, but inferred from the SERDE class reading the avro schema file? Or maybe the table metadata are stored in the Metastore, added on table's creation, but then what is the policy for synchronising hive metadata with the Avro schema? I mean both cases: updating table metadata (adding/removing columns) and updating Avro schema by changing avro.schema.url property. In the second case when I call DESCRIBE FORMATTED users_stored_as_avro there is no avro.schema.* property defined, so I don't know which Avro schema is used to read/write data. Is it generated dynamically based on the table's metadata stored in the Metastore? This fragment of the Programming Hive book discusses inferring info about columns from the SerDe class, but on the other hand HIVE-4703 removes this 'from deserializer' info from column comments. How can I then check what the source of column types is for a given table (Metastore or Avro schema)?
I decided to publish a complementary answer to those given by @DuduMarkovitz. To make code examples more concise let's clarify that STORED AS AVRO clause is an equivalent of these three lines: ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe' STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat' Let's take a look then at what happens when we create a table giving a reference to avro schema stored in hdfs. Here is the schema: { "namespace": "io.sqooba", "name": "user", "type": "record", "fields": [ {"name": "id", "type": "int"}, {"name": "name", "type": "string"} ] } We create our table with the following command: CREATE TABLE users_from_avro_schema STORED AS AVRO TBLPROPERTIES ('avro.schema.url'='hdfs:///user/tulinski/user.avsc'); Hive has inferred the schema properly, which we can see by calling: hive> DESCRIBE users_from_avro_schema; OK id int name string Hive Metastore shows us the same (I use @DuduMarkovitz's query): +------------------------+-------------+-------------+-----------+ | tbl_name | column_name | integer_idx | type_name | +------------------------+-------------+-------------+-----------+ | users_from_avro_schema | id | 0 | int | | users_from_avro_schema | name | 1 | string | +------------------------+-------------+-------------+-----------+ So far, so good, everything works as we expect. But let's see what happens when we update avro.schema.url property to point to the next version of our schema (users_v2.avsc), which is as follows: { "namespace": "io.sqooba", "name": "user", "type": "record", "fields": [ {"name": "id", "type": "int"}, {"name": "name", "type": "string"}, {"name": "email", "type": ["null", "string"], "default":null} ] } We simply added another field called email. Now we update a table property pointing to the avro schema in hdfs: ALTER TABLE users_from_avro_schema SET TBLPROPERTIES('avro.schema.url'='hdfs:///user/tulinski/user_v2.avsc'); Has table metadata been changed? hive> DESCRIBE users_from_avro_schema; OK id int name string email string Yeah, cool! But do you expect that Hive Metastore contains this additional column? Unfortunately in Metastore nothing changed: +------------------------+-------------+-------------+-----------+ | tbl_name | column_name | integer_idx | type_name | +------------------------+-------------+-------------+-----------+ | users_from_avro_schema | id | 0 | int | | users_from_avro_schema | name | 1 | string | +------------------------+-------------+-------------+-----------+ I suspect that Hive has the following strategy of inferring schema: It tries to get it from a SerDe class specified for a given table. When SerDe cannot provide the schema Hive looks into the metastore. Let's check that by removing avro.schema.url property: hive> ALTER TABLE users_from_avro_schema UNSET TBLPROPERTIES ('avro.schema.url'); OK Time taken: 0.33 seconds hive> DESCRIBE users_from_avro_schema; OK id int name string Time taken: 0.363 seconds, Fetched: 2 row(s) Describe shows us data stored in the Metastore. 
Let's modify them by adding a column: ALTER TABLE users_from_avro_schema ADD COLUMNS (phone string); It of course changes Hive Metastore: +------------------------+-------------+-------------+-----------+ | tbl_name | column_name | integer_idx | type_name | +------------------------+-------------+-------------+-----------+ | users_from_avro_schema | id | 0 | int | | users_from_avro_schema | name | 1 | string | | users_from_avro_schema | phone | 2 | string | +------------------------+-------------+-------------+-----------+ But when we set avro.schema.url again back to user_v2.avsc what is in Hive Metastore doesn't matter any more: hive> ALTER TABLE users_from_avro_schema SET TBLPROPERTIES('avro.schema.url'='hdfs:///user/tulinski/user_v2.avsc'); OK Time taken: 0.268 seconds hive> DESCRIBE users_from_avro_schema; OK id int name string email string Avro schema takes precedence over the Metastore. The above example shows that we should rather avoid mixing hive schema changes with avro schema evolution, because otherwise we can easily get into big mess and inconsistency between Hive Metastore and actual schema which is used while reading and writing data. The first inconsistency occurs when we change our avro schema definition by updating avro.schema.url property, but we can live with that if we are aware of Hive strategy of inferring schema. I haven't checked in Hive's source code whether my suspicions about schema logic are correct, but the above example convince me what happens underneath. I extended my answer to show that even when there is a conflict between Avro schema and Hive Metastore data which comply Avro schema can be read. Please have a look again at my example above. Our table definition points to avro schema having three fields: id int name string email string whereas in Hive Metastore there are the following columns: id int name string phone string email vs phone Let's create an avro file containing a single user record complying user_v2.avsc schema. This is its json representation: { "id": 123, "name": "Tomek", "email": {"string": "tomek@tomek"} } To create the avro file we call: java -jar avro-tools-1.8.2.jar fromjson --schema-file user_v2.avsc user_tomek_v2.json > user_tomek_v2.avro We are able to query our table despite the fact that Hive Metastore doesn't contain email column and it contains phone column instead: hive> set hive.cli.print.header=true; hive> select * from users_from_avro_schema; OK users_from_avro_schema.id users_from_avro_schema.name users_from_avro_schema.email 123 Tomek tomek@tomek
Avro
44,256,427
14
I am trying to use the avro-gradle-plugin on github, but have not gotten any luck getting it to work. Does anyone have any sample code on how they get it to work?
I figured out how to do it myself. The following is a snippet that I would like to share for people who might run into the same issues as I did: apply plugin: 'java' apply plugin: 'avro-gradle-plugin' sourceCompatibility = "1.6" targetCompatibility = "1.6" buildscript { repositories { maven { // your maven repo information here } } dependencies { classpath 'org.apache.maven:maven-artifact:2.2.1' classpath 'org.apache.avro:avro-compiler:1.7.1' classpath 'org.apache.avro.gradle:avro-gradle-plugin:1.7.1' } } compileAvro.source = 'src/main/avro' compileAvro.destinationDir = file("$buildDir/generated-sources/avro") sourceSets { main { java { srcDir compileAvro.destinationDir } } } dependencies { compileAvro }
Avro
13,351,334
13
I am using Apache Avro for data serialization. Since the data has a fixed schema, I do not want the schema to be a part of the serialized data. In the following example, the schema is a part of the avro file "users.avro". User user1 = new User(); user1.setName("Alyssa"); user1.setFavoriteNumber(256); User user2 = new User("Ben", 7, "red"); User user3 = User.newBuilder() .setName("Charlie") .setFavoriteColor("blue") .setFavoriteNumber(null) .build(); // Serialize user1 and user2 to disk File file = new File("users.avro"); DatumWriter<User> userDatumWriter = new SpecificDatumWriter<User>(User.class); DataFileWriter<User> dataFileWriter = new DataFileWriter<User>(userDatumWriter); dataFileWriter.create(user1.getSchema(), new File("users.avro")); dataFileWriter.append(user1); dataFileWriter.append(user2); dataFileWriter.append(user3); dataFileWriter.close(); Can anyone please tell me how to store Avro files without the schema embedded in them?
Here you can find a comprehensive how-to in which I explain how to achieve schema-less serialization using Apache Avro. A companion test campaign shows some figures on the performance that you might expect. The code is on GitHub: the example and test classes show how to use the Data Reader and Writer with a Stub class generated by Avro itself.
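As a rough illustration of the idea (not the linked project's exact code), a hedged Java sketch using the generated User class from the question: writing with a raw BinaryEncoder instead of a DataFileWriter emits only the field values, so no schema (and no container header) ends up in the output, and the reader must be given the same or a compatible schema out of band.

    import java.io.ByteArrayOutputStream;
    import org.apache.avro.io.BinaryDecoder;
    import org.apache.avro.io.BinaryEncoder;
    import org.apache.avro.io.DecoderFactory;
    import org.apache.avro.io.EncoderFactory;
    import org.apache.avro.specific.SpecificDatumReader;
    import org.apache.avro.specific.SpecificDatumWriter;

    // Serialize: no container file, hence no embedded schema.
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
    SpecificDatumWriter<User> datumWriter = new SpecificDatumWriter<>(User.class);
    datumWriter.write(user1, encoder);
    encoder.flush();
    byte[] bytes = out.toByteArray();

    // Deserialize: the schema is not in the bytes, so both sides must agree on it.
    BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(bytes, null);
    SpecificDatumReader<User> datumReader = new SpecificDatumReader<>(User.class);
    User restored = datumReader.read(null, decoder);

(IOException handling is omitted for brevity.)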
Avro
28,808,479
13
I have this avro schema { "namespace": "xx.xxxx.xxxxx.xxxxx", "type": "record", "name": "MyPayLoad", "fields": [ {"name": "filed1", "type": "string"}, {"name": "filed2", "type": "long"}, {"name": "filed3", "type": "boolean"}, { "name" : "metrics", "type": { "type" : "array", "items": { "name": "MyRecord", "type": "record", "fields" : [ {"name": "min", "type": "long"}, {"name": "max", "type": "long"}, {"name": "sum", "type": "long"}, {"name": "count", "type": "long"} ] } } } ] } Here is the code which we use to parse the data public static final MyPayLoad parseBinaryPayload(byte[] payload) { DatumReader<MyPayLoad> payloadReader = new SpecificDatumReader<>(MyPayLoad.class); Decoder decoder = DecoderFactory.get().binaryDecoder(payload, null); MyPayLoad myPayLoad = null; try { myPayLoad = payloadReader.read(null, decoder); } catch (IOException e) { logger.log(Level.SEVERE, e.getMessage(), e); } return myPayLoad; } Now i want to add one more field int the schema so the schema looks like below { "namespace": "xx.xxxx.xxxxx.xxxxx", "type": "record", "name": "MyPayLoad", "fields": [ {"name": "filed1", "type": "string"}, {"name": "filed2", "type": "long"}, {"name": "filed3", "type": "boolean"}, { "name" : "metrics", "type": { "type" : "array", "items": { "name": "MyRecord", "type": "record", "fields" : [ {"name": "min", "type": "long"}, {"name": "max", "type": "long"}, {"name": "sum", "type": "long"}, {"name": "count", "type": "long"} ] } } } {"name": "agentType", "type": ["null", "string"], "default": "APP_AGENT"} ] } Note the filed added and also the default is defined. The problem is that if we receive the data which was written using the older schema i get this error java.io.EOFException: null at org.apache.avro.io.BinaryDecoder.ensureBounds(BinaryDecoder.java:473) ~[avro-1.7.4.jar:1.7.4] at org.apache.avro.io.BinaryDecoder.readInt(BinaryDecoder.java:128) ~[avro-1.7.4.jar:1.7.4] at org.apache.avro.io.BinaryDecoder.readIndex(BinaryDecoder.java:423) ~[avro-1.7.4.jar:1.7.4] at org.apache.avro.io.ResolvingDecoder.doAction(ResolvingDecoder.java:229) ~[avro-1.7.4.jar:1.7.4] at org.apache.avro.io.parsing.Parser.advance(Parser.java:88) ~[avro-1.7.4.jar:1.7.4] at org.apache.avro.io.ResolvingDecoder.readIndex(ResolvingDecoder.java:206) ~[avro-1.7.4.jar:1.7.4] at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:152) ~[avro-1.7.4.jar:1.7.4] at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:177) ~[avro-1.7.4.jar:1.7.4] at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:148) ~[avro-1.7.4.jar:1.7.4] at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:139) ~[avro-1.7.4.jar:1.7.4] at com.appdynamics.blitz.shared.util.XXXXXXXXXXXXX.parseBinaryPayload(BlitzAvroSharedUtil.java:38) ~[blitz-shared.jar:na] What i understood from this document that this should have been backward compatible but somehow that doesn't seem to be the case. Any idea what i am doing wrong?
Finally I got this working. I needed to give both schemas to the SpecificDatumReader. So I modified the parsing like this, passing both the old and the new schema to the reader, and it worked like a charm: public static final MyPayLoad parseBinaryPayload(byte[] payload) { DatumReader<MyPayLoad> payloadReader = new SpecificDatumReader<>(SCHEMA_V1, SCHEMA_V2); Decoder decoder = DecoderFactory.get().binaryDecoder(payload, null); MyPayLoad myPayLoad = null; try { myPayLoad = payloadReader.read(null, decoder); } catch (IOException e) { logger.log(Level.SEVERE, e.getMessage(), e); } return myPayLoad; }
Avro
34,733,604
13
Avro schemas are defined using JSON. Schemas are composed of primitive types (null, boolean, int, long, float, double, bytes, and string) and complex types (record, enum, array, map, union, and fixed). I want to ask which one is proper for BigDecimal.
Avro introduced logical types in 1.7.7 (I believe) that should help you serialize decimal. https://avro.apache.org/docs/1.8.1/spec.html#Decimal
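Concretely, a decimal is a bytes (or fixed) type annotated with the decimal logical type; the precision and scale below are just example values:

    {"name": "amount", "type": {"type": "bytes", "logicalType": "decimal", "precision": 10, "scale": 2}}

On the Java side this is read and written as java.math.BigDecimal when the decimal conversion (org.apache.avro.Conversions.DecimalConversion) is registered with the data model.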
Avro
38,213,063
13
With the Avro Java API, I can make a simple record schema like: Schema schemaWithTimestamp = SchemaBuilder .record("MyRecord").namespace("org.demo") .fields() .name("timestamp").type().longType().noDefault() .endRecord(); How do I tag a schema field with a logical type, specifically: https://avro.apache.org/docs/1.8.1/api/java/org/apache/avro/LogicalTypes.TimestampMillis.html
Thanks to DontPanic: Schema timestampMilliType = LogicalTypes.timestampMillis().addToSchema(Schema.create(Schema.Type.LONG)); Schema schemaWithTimestamp = SchemaBuilder .record("MyRecord").namespace("org.demo") .fields() .name("timestamp_with_logical_type").type(timestampMilliType).noDefault() .name("timestamp_no_logical_type").type().longType().noDefault() .endRecord(); System.out.println(schemaWithTimestamp.toString(true)); This results in: { "type" : "record", "name" : "MyRecord", "namespace" : "org.demo", "fields" : [ { "name" : "timestamp_with_logical_type", "type" : { "type" : "long", "logicalType" : "timestamp-millis" } }, { "name" : "timestamp_no_logical_type", "type" : "long" } ] }
Avro
43,080,894
13
I have been trying to connect with kafka-avro-console-consumer from Confluent to our legacy Kafka cluster, which was deployed without Confluent Schema Registry. I provided schema explicitly using properties like: kafka-console-consumer --bootstrap-server kafka02.internal:9092 \ --topic test \ --from-beginning \ --property key.schema='{"type":"long"}' \ --property value.schema='{"type":"long"}' but I am getting 'Unknown magic byte!' error with org.apache.kafka.common.errors.SerializationException Is it possible to consume Avro messages from Kafka using Confluent kafka-avro-console-consumer that were not serialized with AvroSerializer from Confluent and with Schema Registry?
The Confluent Schema Registry serialiser/deserializer uses a wire format which includes information about the schema ID etc in the initial bytes of the message. If your message has not been serialized using the Schema Registry serializer, then you won't be able to deserialize it with it, and will get the Unknown magic byte! error. So you'll need to write a consumer that pulls the messages, does the deserialization using your Avro avsc schemas, and then assuming you want to preserve the data, re-serialize it using the Schema Registry serializer Edit: I wrote an article recently that explains this whole thing in more depth: https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained
Avro
52,399,417
13
I have this exception in the consumer when trying to cast the record.value() into java object : ClassCastException: class org.apache.avro.generic.GenericData$Record cannot be cast to class [...].PublicActivityRecord (org.apache.avro.generic.GenericData$Record and [...].PublicActivityRecord are in unnamed module of loader 'app') The producer sends the java object, which is a user defined type named PublicActivityRecord, like this : KafkaProducer<String, PublicActivityRecord> producer = new KafkaProducer<>(createKafkaProperties()); [...] this.producer.send(new ProducerRecord<String, PublicActivityRecord>(myTopic, activityRecord)); this.producer.flush(); At this point I can see in debug mode that the value of the ProducerRecord is indeed of type PublicActivityRecord. On the registry server I can see in the log the POST request of the producer sending the schema : Registering new schema: subject DEV-INF_9325_activityRecord_01-value, version null, id null, type null, schema size 7294 (io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource:262) [2022-01-28 07:01:35,575] INFO 192.168.36.30 - - [28/janv./2022:06:01:34 +0000] "POST /subjects/DEV-INF_9325_activityRecord_01-value/versions HTTP/1.1" 200 8 "-" "Java/11.0.2" POSTsT (io.confluent.rest-utils.requests:62) On the consumer side : protected KafkaConsumer<String, PublicActivityRecord> consumer; [...] consumer = new KafkaConsumer<>(consumerProperties); consumer.subscribe(Stream.of(kafkaConfig.getTopicActivityRecord()).collect(Collectors.toList())); final ConsumerRecords<String, PublicActivityRecord> records = consumer.poll(duration); records.forEach(record -> { [...] PublicActivityRecord activityRecord = record.value(); Here the ClassCastException occurs. In debug mode, I can see that the record.value is indeed of type GenericData$Record. And it can not be cast to PublicActivityRecord. The serializer/deserilizer keys and values are the same : key.deserializer=org.apache.kafka.common.serialization.StringDeserializer value.deserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer And in the schema-registry log, I can see the GET request of the consumer : "GET /schemas/ids/3?fetchMaxId=false HTTP/1.1" 200 8447 "-" "Java/11.0.7" GETsT (io.confluent.rest-utils.requests:62) So I have checked that : the producer sends a message with my own type PublicActivityRecord the message is received in the kafka broker the producer posts the schema to the schema registry the message is taken by the consumer the schema is GET by the consumer from the schema registry the value of the message is of the unexpected GenericData$Record This leads me to the result that what is wrong is in my consumer. So the question is : Why do the consumer get a GenericData record instead of the expected PublicActivityRecord ? Any clue would be much appreciated !
By default, only generic records are returned. You'll need to set value.deserializer.specific.avro.reader=true Or, use the constant in your consumer configs KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG = true
Avro
70,919,159
13
We are using Kafka for storing messages and pushing an extremely large number of messages (> 30k in a minute). I am not sure if it's relevant, but the code that produces the Kafka messages is in JRuby. Serialising and deserialising the messages also has a performance impact on the system. Can someone help with comparing Avro vs Protocol Buffers in terms of serialisation and deserialisation speed?
I hate to tell you this, but there is no simple answer to your question. The performance of a serialization format depends on many factors. First of all, performance is a property of implementation more than of the format itself. What you really want to know is how well do the specific JRuby implementations of each format perform (or maybe the Java implementations, if you're just wrapping them). The answer may be wildly different from the answer in other languages, like C++. Additionally, performance will vary depending on how you use the library. Many libraries' APIs offer a trade-off between the "easy, slow" way and the "fast, hard" way. When optimizing, you'll want to carefully study the documentation and look for example code from the libraries' authors to learn about how to squeeze out maximum performance. Finally -- and most importantly -- performance is wildly different depending on the data you are working with. Different formats and implementations optimize for different kinds of data. For instance, string-heavy data is going to exercise very different code paths from number-heavy data. For every format -- even JSON and XML* -- it's always possible to find one use case where they perform better than all the others. Be wary of benchmarks coming from the libraries' authors as these will tend to emphasize use cases favorable to them. Unfortunately, if you really want to know which format will perform better for you, the only way you're going to find out is by writing two versions of your code, one using each library, and comparing them. No external benchmark will be able to give you the real answer. (I'm the author of Protobuf v2 and Cap'n Proto, so I've spent a lot of time looking at serialization benchmarks and thinking about performance.) * Just kidding about XML.
Avro
38,174,180
12
I'm trying to use this avro shcema { "namespace": "nothing", "name": "myAvroSchema", "type": "record", "fields": [ { "name": "checkInCustomerReference", "type": "string" }, { "name": "customerContacts", "type": "record", "fields": [ { "name": "customerEmail", "type": "array", "items": { "type": "record", "name": "customerEmail_element", "fields": [ { "name": "emailAddress", "type": "string" }, { "name": "typeOfEmail", "type": "string" } ] } }, { "name": "customerPhone", "type": "array", "items": { "type": "record", "name": "customerPhone_element", "fields": [ { "name": "fullContactNumber", "type": "string" }, { "name": "ISDCode", "type": "string" } ] } }, { "name": "DonotAskIndicator", "type": "record", "fields": [ { "name": "donotAskDetails", "type": "string" } ] } ] }, { "name": "somethingElseToCheck", "type": "string" } ] } To generate and avro file using the avro-tools: avro-tools fromjson --schema-file myAvroSchema.avsc myJson.json > myAvroData.avro But I am getting the following error message: Exception in thread "main" org.apache.avro.SchemaParseException: "record" is not a defined name. The type of the "customerContacts" field must be a defined name or a {"type": ...} expression. Can anyone tell me why record is not identified as a defined name?
The type of the "customerContacts" field must be a defined name or a {"type": ...} expression Doesn't look like your defining your nested records properly. I reproduced your schema and came out with this, give it a try: { "type":"record", "name":"myAvroSchema", "namespace":"nothing", "fields":[ { "name":"checkInCustomerReference", "type":"string" }, { "name":"customerContacts", "type":{ "type":"record", "name":"customerContacts", "namespace":"nothing", "fields":[ { "name":"customerEmail", "type":{ "type":"array", "items":{ "type":"record", "name":"customerEmail", "namespace":"nothing", "fields":[ { "name":"emailAddress", "type":"string" }, { "name":"typeOfEmail", "type":"string" } ] } } }, { "name":"customerPhone", "type":{ "type":"array", "items":{ "type":"record", "name":"customerPhone", "namespace":"nothing", "fields":[ { "name":"fullContactNumber", "type":"string" }, { "name":"ISDCode", "type":"string" } ] } } }, { "name":"DonotAskIndicator", "type":{ "type":"record", "name":"donotAskIndicator", "namespace":"nothing", "fields":[ { "name":"donotAskDetails", "type":"string" } ] } } ] } }, { "name":"somethingElseToCheck", "type":"string" } ] }
Avro
43,513,140
12
What is the correct way to create an Avro schema for an object with an array of strings? I am trying to create an Avro schema for an object that has an array of strings, following the official documentation (https://avro.apache.org/docs/1.8.1/spec.html), but I get an error:
[ERROR] Failed to execute goal org.apache.avro:avro-maven-plugin:1.8.2:schema (default) on project email: Execution default of goal org.apache.avro:avro-maven-plugin:1.8.2:schema failed: "array" is not a defined name. The type of the "parameters" field must be a defined name or a {"type": ...} expression. -> [Help 1]
Why is my schema incorrect?
[
  {
    "type": "record",
    "namespace": "com.example",
    "name": "Topic",
    "fields": [
      { "name": "subject", "type": "string" },
      { "name": "parameters", "type": "array", "items": "string" }
    ]
  }
]
Think this should work: { "name":"parameters", "type": { "type": "array", "items": "string" } }
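For context, the full corrected record schema from the question might then look something like this (shown as a single record, without the outer array from the question, and keeping the original names):
{
  "type": "record",
  "namespace": "com.example",
  "name": "Topic",
  "fields": [
    { "name": "subject", "type": "string" },
    {
      "name": "parameters",
      "type": { "type": "array", "items": "string" }
    }
  ]
}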
Avro
54,093,898
12
I have a spring application that is my kafka producer and I was wondering why avro is the best way to go. I read about it and all it has to offer, but why can't I just serialize my POJO that I created myself with jackson for example and send it to kafka? I'm saying this because the POJO generation from avro is not so straight forward. On top of it, it requires the maven plugin and an .avsc file. So for example I have a POJO on my kafka producer created myself called User: public class User { private long userId; private String name; public String getName() { return name; } public void setName(String name) { this.name = name; } public long getUserId() { return userId; } public void setUserId(long userId) { this.userId = userId; } } I serialize it and send it to my user topic in kafka. Then I have a consumer that itself has a POJO User and deserialize the message. Is it a matter of space? Is it also not faster to serialize and deserialize this way? Not to mention that there is an overhead of maintaining a schema-registry.
You don't need AVSC, you can use an AVDL file, which basically looks the same as a POJO with only the fields @namespace("com.example.mycode.avro") protocol ExampleProtocol { record User { long id; string name; } } Which, when using the idl-protocol goal of the Maven plugin, will create this AVSC for you, rather than you writing it yourself. { "type" : "record", "name" : "User", "namespace" : "com.example.mycode.avro", "fields" : [ { "name" : "id", "type" : "long" }, { "name" : "name", "type" : "string" } ] } And it'll also place a SpecificData POJO User.java on your classpath for using in your code. If you already had a POJO, you don't need to use AVSC or AVDL files. There are libraries to convert POJOs. For example, you can use Jackson, which is not only for JSON, you would just need to likely create a JacksonAvroSerializer for Kafka, for example, or find if one exists. Avro also has built-in library based on reflection. Confluent Schema Registry serializers have a setting for using reflect based models. So to the question - why Avro (for Kafka)? Well, having a schema is a good thing. Think about RDBMS tables, you can explain the table, and you see all the columns. Move to NoSQL document databases, and they can contain literally anything, and this is the JSON world of Kafka. Let's assume you have consumers in your Kafka cluster that have no idea what is in the topic, they have to know exactly who/what has been produced into a topic. They can try the console consumer, and if it were a plaintext like JSON, then they have to figure out some fields they are interested in, then perform flaky HashMap-like .get("name") operations again and again, only to run into an NPE when a field doesn't exist. With Avro, you clearly define defaults and nullable fields. You aren't required to use a Schema Registry, but it provides that type of explain topic semantics for the RDBMS analogy. It also saves you from needing to send the schema along with every message, and the expense of extra bandwidth on the Kafka topic. The registry is not only useful for Kafka, though, as it could be used for Spark, Flink, Hive, etc for all Data Science analysis surrounding streaming data ingest. Assuming you did want to use JSON, then try using MsgPack instead and you'll likely see an increase in your Kafka throughput and save disk space on the brokers You can also use other formats like Protobuf or Thrift, as Uber has compared
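As an illustration of the reflection option mentioned above, here is a minimal Java sketch using the question's existing User POJO (nullability, defaults, and logical types may need extra annotations in practice):
import org.apache.avro.Schema;
import org.apache.avro.reflect.ReflectData;
import org.apache.avro.reflect.ReflectDatumWriter;

public class ReflectSchemaDemo {
    public static void main(String[] args) {
        // Derive an Avro schema straight from the existing POJO, no .avsc or .avdl needed
        Schema schema = ReflectData.get().getSchema(User.class);
        System.out.println(schema.toString(true));

        // A reflect-based writer can then serialize User instances directly,
        // e.g. together with an Encoder or a DataFileWriter
        ReflectDatumWriter<User> writer = new ReflectDatumWriter<>(schema);
    }
}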
Avro
54,195,813
12
We have a glue crawler that read avro files in S3 and create a table in glue catalog accordingly. The thing is that we have a column named 'foo' that came from the avro schema and we also have something like 'foo=XXXX' in the s3 bucket path, to have Hive partitions. What we did not know is that the crawler will then create a table which now has two columns with the same name, thus our issue while querying the table: HIVE_INVALID_METADATA: Hive metadata for table mytable is invalid: Table descriptor contains duplicate columns Is there a way to tell glue to map the partition 'foo' to another column name like 'bar' ? That way we would avoid having to reprocess our data by specifying a new partition name in the s3 bucket path.. Or any other suggestions ?
Glue Crawlers are pretty terrible; this is just one of the many ways they create unusable tables. I think you're better off just creating the tables and partitions with a simple script. Create the table without the foo column, and then write a script that lists your files on S3 and does the Glue API calls (BatchCreatePartition), or execute ALTER TABLE … ADD PARTITION … calls in Athena. Whenever new data is added on S3, just add the new partitions with the API call or Athena query. There is no need to do all the work that Glue Crawlers do if you know when and how data is added. If you don't, you can use S3 notifications to run Lambda functions that do the Glue API calls instead. Almost all solutions are better than Glue Crawlers. The beauty of Athena and Glue Catalog is that it's all just metadata, so it's very cheap to throw it all away and recreate it. You can also create as many tables as you want that use the same location, to try out different schemas. In your case there is no need to move any objects on S3; you just need a different table and a different mechanism to add partitions to it.
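For example, if the new table declares its partition column as bar, one Athena statement per partition can map it back onto the existing foo= prefixes on S3 (the table, bucket, and values here are made up for illustration):
ALTER TABLE mytable ADD IF NOT EXISTS
  PARTITION (bar = 'XXXX')
  LOCATION 's3://my-bucket/my-prefix/foo=XXXX/';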
Avro
59,268,673
12
Is there a way to convert a JSON string to an Avro without a schema definition in Python? Or is this something only Java can handle?
I recently had the same problem, and I ended up developing a python package that can take any python data structure, including parsed JSON and store it in Avro without a need for a dedicated schema. I tested it for python 3. You can install it as pip3 install rec-avro or see the code and docs at https://github.com/bmizhen/rec-avro Usage Example: from fastavro import writer, reader, schema from rec_avro import to_rec_avro_destructive, from_rec_avro_destructive, rec_avro_schema def json_objects(): return [{'a': 'a'}, {'b':'b'}] # For efficiency, to_rec_avro_destructive() destroys rec, and reuses it's # data structures to construct avro_objects avro_objects = (to_rec_avro_destructive(rec) for rec in json_objects()) # store records in avro with open('json_in_avro.avro', 'wb') as f_out: writer(f_out, schema.parse_schema(rec_avro_schema()), avro_objects) #load records from avro with open('json_in_avro.avro', 'rb') as f_in: # For efficiency, from_rec_avro_destructive(rec) destroys rec, and # reuses it's data structures to construct it's output loaded_json = [from_rec_avro_destructive(rec) for rec in reader(f_in)] assert loaded_json == json_objects() To convert a JSON string to json objects use json.loads('{"a":"b"}')
Avro
22,382,636
11
I am running CDH 4.4 with Spark 0.9.0 from a Cloudera parcel. I have a bunch of Avro files that were created via Pig's AvroStorage UDF. I want to load these files in Spark, using a generic record or the schema onboard the Avro files. So far I've tried this: import org.apache.avro.mapred.AvroKey import org.apache.avro.mapreduce.AvroKeyInputFormat import org.apache.hadoop.io.NullWritable import org.apache.commons.lang.StringEscapeUtils.escapeCsv import org.apache.hadoop.fs.Path import org.apache.hadoop.fs.FileSystem import org.apache.hadoop.conf.Configuration import java.net.URI import java.io.BufferedInputStream import java.io.File import org.apache.avro.generic.{GenericDatumReader, GenericRecord} import org.apache.avro.specific.SpecificDatumReader import org.apache.avro.file.DataFileStream import org.apache.avro.io.DatumReader import org.apache.avro.file.DataFileReader import org.apache.avro.mapred.FsInput val input = "hdfs://hivecluster2/securityx/web_proxy_mef/2014/05/29/22/part-m-00016.avro" val inURI = new URI(input) val inPath = new Path(inURI) val fsInput = new FsInput(inPath, sc.hadoopConfiguration) val reader = new GenericDatumReader[GenericRecord] val dataFileReader = DataFileReader.openReader(fsInput, reader) val schemaString = dataFileReader.getSchema val buf = scala.collection.mutable.ListBuffer.empty[GenericRecord] while(dataFileReader.hasNext) { buf += dataFileReader.next } sc.parallelize(buf) This works for one file, but it can't scale - I am loading all the data into local RAM and then distributing it across the spark nodes from there.
To answer my own question: import org.apache.spark.SparkContext import org.apache.spark.SparkContext._ import org.apache.avro.generic.GenericRecord import org.apache.avro.mapred.AvroKey import org.apache.avro.mapred.AvroInputFormat import org.apache.avro.mapreduce.AvroKeyInputFormat import org.apache.hadoop.io.NullWritable import org.apache.commons.lang.StringEscapeUtils.escapeCsv import org.apache.hadoop.fs.FileSystem import org.apache.hadoop.fs.Path import org.apache.hadoop.conf.Configuration import java.io.BufferedInputStream import org.apache.avro.file.DataFileStream import org.apache.avro.io.DatumReader import org.apache.avro.file.DataFileReader import org.apache.avro.file.DataFileReader import org.apache.avro.generic.{GenericDatumReader, GenericRecord} import org.apache.avro.mapred.FsInput import org.apache.avro.Schema import org.apache.avro.Schema.Parser import org.apache.hadoop.mapred.JobConf import java.io.File import java.net.URI // spark-shell -usejavacp -classpath "*.jar" val input = "hdfs://hivecluster2/securityx/web_proxy_mef/2014/05/29/22/part-m-00016.avro" val jobConf= new JobConf(sc.hadoopConfiguration) val rdd = sc.hadoopFile( input, classOf[org.apache.avro.mapred.AvroInputFormat[GenericRecord]], classOf[org.apache.avro.mapred.AvroWrapper[GenericRecord]], classOf[org.apache.hadoop.io.NullWritable], 10) val f1 = rdd.first val a = f1._1.datum a.get("rawLog") // Access avro fields
Avro
23,944,615
11
I'm trying to merge Avro files into one big file; the problem is that the concat command does not accept a wildcard:
hadoop jar avro-tools.jar concat /input/part* /output/bigfile.avro
I get:
Exception in thread "main" java.io.FileNotFoundException: File does not exist: /input/part*
I tried to use "" and '' but no luck.
I quickly checked Avro's source code (1.7.7) and it seems that concat does not support glob patterns (basically, they call FileSystem.open() on each argument except the last one). It means that you have to explicitly provide all the filenames as argument. It is cumbersome, but following command should do what you want: IN=$(hadoop fs -ls /input/part* | awk '{printf "%s ", $NF}') hadoop jar avro-tools.jar concat ${IN} /output/bigfile.avro It would be a nice addition to add support of glob pattern to this command.
Avro
34,856,838
11
When I attempted to run a Kafka Consumer with Avro over the data with my respective schema, it returns an error of "AvroRuntimeException: Malformed data. Length is negative: -40". I see others have had similar issues converting byte array to json, Avro write and read, and Kafka Avro Binary *coder. I have also referenced this Consumer Group Example, which have all been helpful, however none have helped with this error thus far. It works up until this part of the code (line 73): Decoder decoder = DecoderFactory.get().binaryDecoder(byteArrayInputStream, null); I have tried other decoders and printed out the contents of the byteArrayInputStream variable, which looks like what I believe you would expect serialized Avro data to look like (in the message I can see the schema and some data and some malformed data). I have also printed out the bytes available using the .available() method, which returns 594. I am having trouble understanding why this error is happening. Apache Nifi is used to produce the Kafka stream with the same schema from HDFS. I would appreciate any help.
Perhaps the problem is a mismatch between how the Avro data is written (encoded) by Nifi vs. how your consumer app is reading (decoding) the data. In a nutshell, Avro's API provides two different approaches to serialization: For creating proper Avro files: To encode the data records but also to embed the Avro schema in a kind of preamble (via org.apache.avro.file.{DataFileWriter/DataFileReader}). Embedding the schema into Avro files makes a lot of sense because (a) typically the "payload" of Avro files is orders of magnitudes larger than the embedded Avro schema and (b) you can then copy or move those files around at your heart's content and still be sure you can read them again without having to consult someone or something. To encode only the data records, i.e. to not embed the schema (via org.apache.avro.io.{BinaryEncoder/BinaryDecoder}; note the difference in the package name: io here vs. file above). This approach is often favored when Avro-encoding messages that are being written to a Kafka topic, for example, because in comparison to variant 1 above you do not incur the overhead of re-embedding the Avro schema into every single message, assuming that your (very reasonable) policy is that, for the same Kafka topic, messages are formatted/encoded with the same Avro schema. This is a significant advantage because, in a stream data context, a data-in-motion data record is typically much smaller (commonly between 100 bytes and few hundred KB) than data-at-rest Avro files as described above (often hundreds or thousands of MB); so the size of the Avro schema is relatively large, and thus you don't want to embed it 2000x when writing 2000 data records to Kafka. The drawback is that you must "somehow" track how Avro schemas map to Kafka topics -- or more precisely, you must somehow track with which Avro schema a message was encoded without going down the path of embedding the schema directly. The good news is that there is tooling available in the Kafka ecosystem (Avro schema registry) for doing this transparently. So in comparison to variant 1, variant 2 gains on efficiency at the expense of convenience. The effect is that the "wire format" for encoded Avro data will look different depending on whether you use (1) or (2) above. I am not very familiar with Apache Nifi, but a quick look at the source code (e.g. ConvertAvroToJSON.java) suggests to me that it is using variant 1, i.e. it embeds the Avro schema alongside the Avro records. Your consumer code, however, uses DecoderFactory.get().binaryDecoder() and thus variant 2 (no schema embedded). Perhaps this explains the error you have been running into?
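If it turns out the Nifi side is indeed embedding the schema (variant 1), the consumer-side fix is to decode each message payload with the file/stream API rather than a bare binaryDecoder. A minimal Java sketch, assuming messageBytes holds the raw byte[] value of one Kafka message:
import java.io.ByteArrayInputStream;
import org.apache.avro.file.DataFileStream;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;

public class EmbeddedSchemaReader {
    static void readEmbeddedSchemaMessage(byte[] messageBytes) throws Exception {
        try (DataFileStream<GenericRecord> stream = new DataFileStream<>(
                new ByteArrayInputStream(messageBytes), new GenericDatumReader<GenericRecord>())) {
            while (stream.hasNext()) {
                GenericRecord record = stream.next();
                // The writer's schema embedded in the preamble is available via stream.getSchema()
                System.out.println(record);
            }
        }
    }
}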
Avro
36,022,358
11
I am curious to understand the best practice for encoding one very specific type of data within Avro: UUIDs.
Here's how I've been doing it: { "name": "user_id", "type": "string", "logicalType": "UUID" } At the time of writing, the logicalType for UUIDs is not documented, but it is nonetheless supported; you can check the code and verify this yourself: https://github.com/apache/avro/blob/branch-1.8/lang/java/avro/src/main/java/org/apache/avro/LogicalTypes.java#L71 And here are the docs: https://avro.apache.org/docs/1.10.0/spec.html#UUID
Avro
16,339,441
10
We’re trying to decide between providing generic vs. specific record formats for consumption by our clients, with an eye to providing an online schema registry clients can access when the schemas are updated. We expect to send out serialized blobs prefixed with a few bytes denoting the version number so schema retrieval from our registry can be automated. Now, we’ve come across code examples illustrating the relative adaptability of the generic format for schema changes, but we’re reluctant to give up the type safety and ease of use provided by the specific format. Is there a way to obtain the best of both worlds? I.e., could we work with and manipulate the specific generated classes internally and then have them converted to generic records automatically just before serialization? Clients would then deserialize the generic records (after looking up the schema). Also, could clients convert these generic records they received to specific ones at a later time? Some small code examples would be helpful! Or are we looking at this all the wrong way?
What you are looking for is Confluent Schema registry service and libs which helps to integrate with this. Providing a sample to write Serialize De-serialize avro data with a evolving schema. Please note providing sample from Kafka. import io.confluent.kafka.serializers.KafkaAvroDeserializer; import io.confluent.kafka.serializers.KafkaAvroSerializer; import org.apache.avro.generic.GenericRecord; import org.apache.commons.codec.DecoderException; import org.apache.commons.codec.binary.Hex; import java.util.HashMap; import java.util.Map; public class ConfluentSchemaService { public static final String TOPIC = "DUMMYTOPIC"; private KafkaAvroSerializer avroSerializer; private KafkaAvroDeserializer avroDeserializer; public ConfluentSchemaService(String conFluentSchemaRigistryURL) { //PropertiesMap Map<String, String> propMap = new HashMap<>(); propMap.put("schema.registry.url", conFluentSchemaRigistryURL); // Output afterDeserialize should be a specific Record and not Generic Record propMap.put("specific.avro.reader", "true"); avroSerializer = new KafkaAvroSerializer(); avroSerializer.configure(propMap, true); avroDeserializer = new KafkaAvroDeserializer(); avroDeserializer.configure(propMap, true); } public String hexBytesToString(byte[] inputBytes) { return Hex.encodeHexString(inputBytes); } public byte[] hexStringToBytes(String hexEncodedString) throws DecoderException { return Hex.decodeHex(hexEncodedString.toCharArray()); } public byte[] serializeAvroPOJOToBytes(GenericRecord avroRecord) { return avroSerializer.serialize(TOPIC, avroRecord); } public Object deserializeBytesToAvroPOJO(byte[] avroBytearray) { return avroDeserializer.deserialize(TOPIC, avroBytearray); } } Following classes have all the code you are looking for. io.confluent.kafka.serializers.KafkaAvroDeserializer; io.confluent.kafka.serializers.KafkaAvroSerializer; Please follow the link for more details : http://bytepadding.com/big-data/spark/avro/avro-serialization-de-serialization-using-confluent-schema-registry/
Avro
33,882,095
10
I am new to AVRO and please excuse me if it is a simple question. I have a use case where I am using AVRO schema for record calls. Let's say I have avro schema { "name": "abc", "namepsace": "xyz", "type": "record", "fields": [ {"name": "CustId", "type":"string"}, {"name": "SessionId", "type":"string"}, ] } Now if the input is like { "CustId" : "abc1234" "sessionID" : "000-0000-00000" } I want to use some regex validations for these fields and I want take this input only if it comes in particular format shown as above. Is there any way to specify in avro schema to include regex expression? Any other data serialization formats which supports something like this?
You should be able to use a custom logical type for this. You would then include the regular expressions directly in the schema. For example, here's how you would implement one in JavaScript: var avro = require('avsc'), util = require('util'); /** * Sample logical type that validates strings using a regular expression. * */ function ValidatedString(attrs, opts) { avro.types.LogicalType.call(this, attrs, opts); this._pattern = new RegExp(attrs.pattern); } util.inherits(ValidatedString, avro.types.LogicalType); ValidatedString.prototype._fromValue = function (val) { if (!this._pattern.test(val)) { throw new Error('invalid string: ' + val); } return val; }; ValidatedString.prototype._toValue = ValidatedString.prototype._fromValue; And how you would use it: var type = avro.parse({ name: 'Example', type: 'record', fields: [ { name: 'custId', type: 'string' // Normal (free-form) string. }, { name: 'sessionId', type: { type: 'string', logicalType: 'validated-string', pattern: '^\\d{3}-\\d{4}-\\d{5}$' // Validation pattern. } }, ] }, {logicalTypes: {'validated-string': ValidatedString}}); type.isValid({custId: 'abc', sessionId: '123-1234-12345'}); // true type.isValid({custId: 'abc', sessionId: 'foobar'}); // false You can read more about implementing and using logical types here. Edit: For the Java implementation, I believe you will want to look at the following classes: LogicalType, the base you'll need to extend. Conversion, to perform the conversion (or validation in your case) of the data. LogicalTypes and Conversions, a few examples of existing implementations. TestGenericLogicalTypes, relevant tests which could provide a helpful starting point.
Avro
37,279,096
10
I am evaluating using Apache AVRO for my Jersey REST services. I am using Springboot with Jersey REST. Currently I am accepting JSON as input which are converted to Java Pojos using the Jackson object mapper. I have looked in different places but I cannot find any example that is using Apache AVRO with a Jersey end point. I have found this Github repository (https://github.com/FasterXML/jackson-dataformats-binary/) which has Apache AVRO plugin. I still cannot find any good example as how to integrate this. Has anyone used Apache AVRO with Jersey? If yes, is there any example I can use?
To start , two things need to happen: You need to develop a custom ObjectMapper after the fashion of the Avro schema format You need to supply that custom ObjectMapper to Jersey. That should look something like this: @Provider public class AvroMapperProvider implements ContextResolver<ObjectMapper> { final AvroMapper avroMapper = new AvroMapper(); @Override public ObjectMapper getContext(Class<?> type) { return avroMapper; } } Configure your application to use Jackson as the message handler: public class MyApplication extends ResourceConfig { public MyApplication() { super(JacksonFeature.class,AvroMapperProvider.class); } } Alternatively, you can implement a custom MessageBodyReader and MessageBodyWriter that allows you to directly process the payloads on the way in and out: public class AvroMessageReader implements MessageBodyReader<Person> { AvroSchema schema; final AvroMapper avroMapper = new AvroMapper(); public AvroMessageReader(){ schema = avroMapper.schemaFor(Person.class); //generates an Avro schema from the POJO class. } @Override public boolean isReadable(Class<?> type, Type type1, Annotation[] antns, MediaType mt) { return type == Person.class; //determines that this reader can handle the Person class. } @Override public Person readFrom(Class<Person> type, Type type1, Annotation[] antns, MediaType mt, MultivaluedMap<String, String> mm, InputStream in) throws IOException, WebApplicationException { return avroMapper.reader(schema).readValue(in); } } Here, we generate an avro schema from a hypothetical Person class. The JAX-RS runtime will select this reader based on the response from isReadable. You can then inject the MessageBodyWorkers component into your service implementation class: @Path("app") public static class BodyReaderTest{ @Context private MessageBodyWorkers workers; @POST @Produces("avro/binary") @Consumes("avro/binary") public String processMessage() { workers.getMessageBodyReader(Person.class, Person.class, new Annotation[]{}, MediaType.APPLICATION_JSON_TYPE); } } To answer your last comment: Setting the mime type on your handler to the recommended avro/binary ought to do it.
Avro
45,898,453
10
I recently had a requirement where I needed to generate Parquet files that could be read by Apache Spark using only Java (Using no additional software installations such as: Apache Drill, Hive, Spark, etc.). The files needed to be saved to S3 so I will be sharing details on how to do both. There were no simple to follow guides on how to do this. I'm also not a Java programmer so the concepts of using Maven, Hadoop, etc. were all foreign to me. So it took me nearly two weeks to get this working. I'd like to share my personal guide below on how I achieved this
Disclaimer: The code samples below in no way represent best practices and are only presented as a rough how-to. Dependencies: parquet-avro (1.9.0) : https://mvnrepository.com/artifact/org.apache.parquet/parquet-avro/1.9.0 (We use 1.9.0 because this version uses Avro 1.8+ which supports Decimals and Dates) hadoop-aws (2.8.2) [If you don't plan on writing to S3 you won't need this but you will need to add several other dependencies that normally get added thanks to this. I will not cover that scenario. So even if you're going to generate Parquet files only on your local disk, you can still add this to your project as a dependency]: https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-aws/2.8.2 (We use this because it was the latest version at the time) Hadoop 2.8.1: https://github.com/steveloughran/winutils/tree/master/hadoop-2.8.1 (We use 2.8.X because it needs to match the hadoop libraries used in the parquet-avro and hadoop-aws dependencies) I'll be using NetBeans as my IDE. Some info regarding parquet in Java (For noobs such as me): In order to serialize your data into parquet, you must choose one of the popular Java data serialization frameworks: Avro, Protocol Buffers or Thrift (I'll be using Avro (1.8.0), as can be seen from our parquet-avro dependency) You will need to use an IDE that supports Maven. This is because the dependencies above have a lot of dependencies of their own. Maven will automatically download those for you (like NuGet for VisualStudio) Pre-requisite: You must have hadoop on the windows machine that will be running the Java code. The good news is you don't need to install the entire hadoop software, rather you need only two files: hadoop.dll winutils.exe These can be downloaded here. You will need version 2.8.1 for this example (due to parquet-avro 1.9.0). Copy these files to C:\hadoop-2.8.1\bin on the target machine. Add a new System Variable (not user variable) called: HADOOP_HOME with the value C:\hadoop-2.8.1 Modify the System Path variable (not user variable) and add the following to the end: %HADOOP_HOME%\bin Restart the machine for changes to take affect. If this config was not done properly you will get the following error at run-time: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z Getting Started with Coding: First create a new empty Maven Project and add parquet-avro 1.9.0 and hadoop-aws 2.8.2 as dependencies: Create your main class where you can write some code First thing is you need to generate a Schema. Now as far as I can tell there is no way you can generate a schema programmatically at run-time. the Schema.Parser class' parse() method only takes a file or a string literal as a parameter and doesn't let you modify the schema once it is created. To circumvent this I am generating my Schema JSON at run time and parsing that. 
Below is an example Schema: String schema = "{\"namespace\": \"org.myorganization.mynamespace\"," //Not used in Parquet, can put anything + "\"type\": \"record\"," //Must be set as record + "\"name\": \"myrecordname\"," //Not used in Parquet, can put anything + "\"fields\": [" + " {\"name\": \"myInteger\", \"type\": \"int\"}," //Required field + " {\"name\": \"myString\", \"type\": [\"string\", \"null\"]}," + " {\"name\": \"myDecimal\", \"type\": [{\"type\": \"fixed\", \"size\":16, \"logicalType\": \"decimal\", \"name\": \"mydecimaltype1\", \"precision\": 32, \"scale\": 4}, \"null\"]}," + " {\"name\": \"myDate\", \"type\": [{\"type\": \"int\", \"logicalType\" : \"date\"}, \"null\"]}" + " ]}"; Parser parser = new Schema.Parser().setValidate(true); Schema avroSchema = parser.parse(schema); Details on Avro schema can be found here: https://avro.apache.org/docs/1.8.0/spec.html Next we can start generating records (Avro primitive types are simple): GenericData.Record record = new GenericData.Record(avroSchema); record.put("myInteger", 1); record.put("myString", "string value 1"); In order to generate a decimal logical type a fixed or bytes primitive type must be used as the actual data type for storage. The current Parquet format only supports Fixed length byte arrays (aka: fixed_len_byte_array). So we have to use fixed in our case as well (as can be seen in the schema). In Java we must use BigDecimal in order to truly handle decimals. And I've identified that a Decimal(32,4) will not take more than 16 bytes no matter the value. So we will use a standard byte array size of 16 in our serialization below (and in the schema above): BigDecimal myDecimalValue = new BigDecimal("99.9999"); //First we need to make sure the BigDecimal matches our schema scale: myDecimalValue = myDecimalValue.setScale(4, RoundingMode.HALF_UP); //Next we get the decimal value as one BigInteger (like there was no decimal point) BigInteger myUnscaledDecimalValue = myDecimalValue.unscaledValue(); //Finally we serialize the integer byte[] decimalBytes = myUnscaledDecimalValue.toByteArray(); //We need to create an Avro 'Fixed' type and pass the decimal schema once more here: GenericData.Fixed fixed = new GenericData.Fixed(new Schema.Parser().parse("{\"type\": \"fixed\", \"size\":16, \"precision\": 32, \"scale\": 4, \"name\":\"mydecimaltype1\"}")); byte[] myDecimalBuffer = new byte[16]; if (myDecimalBuffer.length >= decimalBytes.length) { //Because we set our fixed byte array size as 16 bytes, we need to //pad-left our original value's bytes with zeros int myDecimalBufferIndex = myDecimalBuffer.length - 1; for(int i = decimalBytes.length - 1; i >= 0; i--){ myDecimalBuffer[myDecimalBufferIndex] = decimalBytes[i]; myDecimalBufferIndex--; } //Save result fixed.bytes(myDecimalBuffer); } else { throw new IllegalArgumentException(String.format("Decimal size: %d was greater than the allowed max: %d", decimalBytes.length, myDecimalBuffer.length)); } //We can finally write our decimal to our record record.put("myDecimal", fixed); For Date values, Avro specifies that we need to save the number of days since EPOCH as an integer. (If you need the time component as well, such as an actual DateTime type, you need to use the Timestamp Avro type, which I will not cover). The easiest way I found to get the number of days since epoch is using the joda-time library. If you added the hadoop-aws dependency to your project you should already have this library. 
If not you will need to add it yourself: //Get epoch value MutableDateTime epoch = new MutableDateTime(0l, DateTimeZone.UTC); DateTime currentDate = new DateTime(); //Can take Java Date in constructor Days days = Days.daysBetween(epoch, currentDate); //We can write number of days since epoch into the record record.put("myDate", days.getDays()); We finally can start writing our parquet file as such try { Configuration conf = new Configuration(); conf.set("fs.s3a.access.key", "ACCESSKEY"); conf.set("fs.s3a.secret.key", "SECRETKEY"); //Below are some other helpful settings //conf.set("fs.s3a.endpoint", "s3.amazonaws.com"); //conf.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider"); //conf.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName()); // Not needed unless you reference the hadoop-hdfs library. //conf.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName()); // Uncomment if you get "No FileSystem for scheme: file" errors Path path = new Path("s3a://your-bucket-name/examplefolder/data.parquet"); //Use path below to save to local file system instead //Path path = new Path("data.parquet"); try (ParquetWriter writer = AvroParquetWriter.builder(path) .withSchema(avroSchema) .withCompressionCodec(CompressionCodecName.GZIP) .withConf(conf) .withPageSize(4 * 1024 * 1024) //For compression .withRowGroupSize(16 * 1024 * 1024) //For write buffering (Page size) .build()) { //We only have one record to write in our example writer.write(record); } } catch (Exception ex) { ex.printStackTrace(System.out); } Here is the data loaded into Apache Spark (2.2.0): And for your convenience, the entire source code: package com.mycompany.stackoverflow; import java.math.BigDecimal; import java.math.BigInteger; import java.math.RoundingMode; import org.apache.avro.Schema; import org.apache.avro.generic.GenericData; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; import org.apache.parquet.avro.AvroParquetWriter; import org.apache.parquet.hadoop.ParquetWriter; import org.apache.parquet.hadoop.metadata.CompressionCodecName; import org.joda.time.DateTime; import org.joda.time.DateTimeZone; import org.joda.time.Days; import org.joda.time.MutableDateTime; public class Main { public static void main(String[] args) { System.out.println("Start"); String schema = "{\"namespace\": \"org.myorganization.mynamespace\"," //Not used in Parquet, can put anything + "\"type\": \"record\"," //Must be set as record + "\"name\": \"myrecordname\"," //Not used in Parquet, can put anything + "\"fields\": [" + " {\"name\": \"myInteger\", \"type\": \"int\"}," //Required field + " {\"name\": \"myString\", \"type\": [\"string\", \"null\"]}," + " {\"name\": \"myDecimal\", \"type\": [{\"type\": \"fixed\", \"size\":16, \"logicalType\": \"decimal\", \"name\": \"mydecimaltype1\", \"precision\": 32, \"scale\": 4}, \"null\"]}," + " {\"name\": \"myDate\", \"type\": [{\"type\": \"int\", \"logicalType\" : \"date\"}, \"null\"]}" + " ]}"; Schema.Parser parser = new Schema.Parser().setValidate(true); Schema avroSchema = parser.parse(schema); GenericData.Record record = new GenericData.Record(avroSchema); record.put("myInteger", 1); record.put("myString", "string value 1"); BigDecimal myDecimalValue = new BigDecimal("99.9999"); //First we need to make sure the huge decimal matches our schema scale: myDecimalValue = myDecimalValue.setScale(4, RoundingMode.HALF_UP); //Next we get the decimal value as one BigInteger (like there was no 
decimal point) BigInteger myUnscaledDecimalValue = myDecimalValue.unscaledValue(); //Finally we serialize the integer byte[] decimalBytes = myUnscaledDecimalValue.toByteArray(); //We need to create an Avro 'Fixed' type and pass the decimal schema once more here: GenericData.Fixed fixed = new GenericData.Fixed(new Schema.Parser().parse("{\"type\": \"fixed\", \"size\":16, \"precision\": 32, \"scale\": 4, \"name\":\"mydecimaltype1\"}")); byte[] myDecimalBuffer = new byte[16]; if (myDecimalBuffer.length >= decimalBytes.length) { //Because we set our fixed byte array size as 16 bytes, we need to //pad-left our original value's bytes with zeros int myDecimalBufferIndex = myDecimalBuffer.length - 1; for(int i = decimalBytes.length - 1; i >= 0; i--){ myDecimalBuffer[myDecimalBufferIndex] = decimalBytes[i]; myDecimalBufferIndex--; } //Save result fixed.bytes(myDecimalBuffer); } else { throw new IllegalArgumentException(String.format("Decimal size: %d was greater than the allowed max: %d", decimalBytes.length, myDecimalBuffer.length)); } //We can finally write our decimal to our record record.put("myDecimal", fixed); //Get epoch value MutableDateTime epoch = new MutableDateTime(0l, DateTimeZone.UTC); DateTime currentDate = new DateTime(); //Can take Java Date in constructor Days days = Days.daysBetween(epoch, currentDate); //We can write number of days since epoch into the record record.put("myDate", days.getDays()); try { Configuration conf = new Configuration(); conf.set("fs.s3a.access.key", "ACCESSKEY"); conf.set("fs.s3a.secret.key", "SECRETKEY"); //Below are some other helpful settings //conf.set("fs.s3a.endpoint", "s3.amazonaws.com"); //conf.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider"); //conf.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName()); // Not needed unless you reference the hadoop-hdfs library. //conf.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName()); // Uncomment if you get "No FileSystem for scheme: file" errors. Path path = new Path("s3a://your-bucket-name/examplefolder/data.parquet"); //Use path below to save to local file system instead //Path path = new Path("data.parquet"); try (ParquetWriter<GenericData.Record> writer = AvroParquetWriter.<GenericData.Record>builder(path) .withSchema(avroSchema) .withCompressionCodec(CompressionCodecName.GZIP) .withConf(conf) .withPageSize(4 * 1024 * 1024) //For compression .withRowGroupSize(16 * 1024 * 1024) //For write buffering (Page size) .build()) { //We only have one record to write in our example writer.write(record); } } catch (Exception ex) { ex.printStackTrace(System.out); } } }
Avro
47,355,038
10
I need to be able to mark some fields in the Avro schema so that they will be encrypted at serialization time. A logicalType allows the fields to be marked, and together with a custom conversion it should let them be encrypted transparently by Avro. I had some trouble finding documentation on how to define and use a new logicalType in Avro (avro_1.8.2#Logical+Types). I then decided to share in the answer what I found, to ease the life of anyone else working on this and to get some feedback in case I'm doing something wrong.
First of all I defined a logicalType as: public class EncryptedLogicalType extends LogicalType { //The key to use as a reference to the type public static final String ENCRYPTED_LOGICAL_TYPE_NAME = "encrypted"; EncryptedLogicalType() { super(ENCRYPTED_LOGICAL_TYPE_NAME); } @Override public void validate(Schema schema) { super.validate(schema); if (schema.getType() != Schema.Type.BYTES) { throw new IllegalArgumentException( "Logical type 'encrypted' must be backed by bytes"); } } } Then a new conversion: public class EncryptedConversion extends Conversion<ByteBuffer> { // Construct a unique instance for all the conversion. This have to be changed in case the conversion // needs some runtime information (e.g.: an encryption key / a tenant_ID). If so, the get() method should // return the appropriate conversion per key. private static final EncryptedConversion INSTANCE = new EncryptedConversion(); public static final EncryptedConversion get(){ return INSTANCE; } private EncryptedConversion(){ super(); } //This conversion operates on ByteBuffer and returns ByteBuffer @Override public Class<ByteBuffer> getConvertedType() { return ByteBuffer.class; } @Override public String getLogicalTypeName() { return EncryptedLogicalType.ENCRYPTED_LOGICAL_TYPE_NAME; } // fromBytes and toBytes have to be overridden as this conversion works on bytes. Other may need to be // overridden. The types supported need to be updated also in EncryptedLogicalType#validate(Schema schema) @Override public ByteBuffer fromBytes(ByteBuffer value, Schema schema, LogicalType type) { encryptedValue = __encryptionLogic__(value); return encryptedValue; } @Override public ByteBuffer toBytes(ByteBuffer value, Schema schema, LogicalType type) { decryptedValue = __decryptionLogic__(value); return decryptedValue; } } The .avsc schema file will be similar to: { "name": “MyMessageWithEncryptedField”, "type": "record", "fields": [ {"name": "payload","type" : {"type" : "bytes","logicalType" : "encrypted"}}, ... Finally in the MyMessageWithEncryptedField.java class generated out of the schema file I added the method to return the conversion: @Override public Conversion<?> getConversion(int fieldIndex) { // This allow us to have a more flexible conversion retrieval, so we don't have to code it per field. Schema fieldSchema = SCHEMA$.getFields().get(fieldIndex).schema(); if ((fieldSchema.getLogicalType() != null) && (fieldSchema.getLogicalType().getName() == EncryptedLogicalType.ENCRYPTED_LOGICAL_TYPE_NAME)){ // here we could pass to the get() method a runtime information, e.g.: a tenantId that can be found in the data structure. return EncryptedConversion.get(); } return null; } To make it run I still have to register the type at runtime: LogicalTypes.register(EncryptedLogicalType.ENCRYPTED_LOGICAL_TYPE_NAME, new LogicalTypes.LogicalTypeFactory() { private final LogicalType encryptedLogicalType = new EncryptedLogicalType(); @Override public LogicalType fromSchema(Schema schema) { return encryptedLogicalType; } }); Few notes: if your logicalType needs some other properties passed in from the schema definition, you can modify the LogicalType class taking example from avro.lang.java.avro.src.main.java.org.apache.avro.LogicalTypes.Decimal the last piece of code (the register) is currently run before my logic starts, but I plan to move it in a static block inside the schema generated class (MyMessageWithEncryptedField.java)
Avro
49,034,266
10
How do I use Spring Kafka to read an Avro message with the Confluent Schema Registry? Is there any sample? I can't find one in the official reference documentation.
Below code can read the message from customer-avro topic. Here's the AVRO schema on value i have defined as. { "type": "record", "namespace": "com.example", "name": "Customer", "version": "1", "fields": [ { "name": "first_name", "type": "string", "doc": "First Name of Customer" }, { "name": "last_name", "type": "string", "doc": "Last Name of Customer" }, { "name": "age", "type": "int", "doc": "Age at the time of registration" }, { "name": "height", "type": "float", "doc": "Height at the time of registration in cm" }, { "name": "weight", "type": "float", "doc": "Weight at the time of registration in kg" }, { "name": "automated_email", "type": "boolean", "default": true, "doc": "Field indicating if the user is enrolled in marketing emails" } ] } Below is a complete code snippet to read this example with manual commit. import com.example.Customer; import io.confluent.kafka.serializers.KafkaAvroDeserializer; import org.apache.kafka.clients.consumer.ConsumerRecord; import org.apache.kafka.clients.consumer.ConsumerRecords; import org.apache.kafka.clients.consumer.KafkaConsumer; import org.apache.kafka.common.serialization.StringDeserializer; import java.util.Calendar; import java.util.Collections; import java.util.Properties; public class KafkaAvroJavaConsumerV1Demo { public static void main(String[] args) { Properties properties = new Properties(); // normal consumer properties.setProperty("bootstrap.servers","127.0.0.1:9092"); properties.put("group.id", "customer-consumer-group-v1"); properties.put("auto.commit.enable", "false"); properties.put("auto.offset.reset", "earliest"); // avro part (deserializer) properties.setProperty("key.deserializer", StringDeserializer.class.getName()); properties.setProperty("value.deserializer", KafkaAvroDeserializer.class.getName()); properties.setProperty("schema.registry.url", "http://127.0.0.1:8081"); properties.setProperty("specific.avro.reader", "true"); KafkaConsumer<String, Customer> kafkaConsumer = new KafkaConsumer<>(properties); String topic = "customer-avro"; kafkaConsumer.subscribe(Collections.singleton(topic)); System.out.println("Waiting for data..."); while (true){ System.out.println("Polling at " + Calendar.getInstance().getTime().toString()); ConsumerRecords<String, Customer> records = kafkaConsumer.poll(1000); for (ConsumerRecord<String, Customer> record : records){ Customer customer = record.value(); System.out.println(customer); } kafkaConsumer.commitSync(); } } }
Avro
51,979,389
10
I would like to add HTTPS to my local domain; however, we can't do this on localhost. My website works fine when I run with this Caddyfile:
localhost:2020 {
  bind {$ADDRESS}
  proxy / http://192.168.100.82:9000 {
    transparent
  }
}
But I would like to name this website or at least enable HTTPS on it. According to Caddy, you can't do this on localhost, but what if I have a domain name? I have tried using my own local address with this Caddyfile:
192.168.100.26 {
  bind {$ADDRESS}
  proxy / http://192.168.100.82:9000 {
    transparent
  }
}
All works fine but I still don't have HTTPS... And when I try to add a random domain name, for example:
www.mycaddytest.com {
  bind {$ADDRESS}
  proxy / http://192.168.100.82:9000 {
    transparent
  }
}
I get the following error:
Activating privacy features...2016/08/18 11:53:26 [www.mycaddytest.com] failed to get certificate: acme: Error 400 - urn:acme:error:connection - DNS problem: NXDOMAIN looking up A for www.mycaddytest.com Error Detail: Validation for www.mycaddytest.com:80 Resolved to: Used:
I know this error is due to a nonexistent domain name, but is there a way to deal with it? Just getting HTTPS on localhost or an IP address would be enough.
For caddy version 2.4.5, the accepted answer did not work me. What worked is shown below: localhost:443 { reverse_proxy 127.0.0.1:8080 tls internal }
Caddy
39,015,159
17
I have a config file for Caddy v2 like the one below:
sentry.mydomain.ru {
  reverse_proxy sentry:9000
}
tasks.mydomain.ru {
  reverse_proxy taiga-proxy:80
}
ain.mydomain.ru {
  reverse_proxy ain-frontend:80
}
Caddy enables HTTPS for every domain, but I need to disable HTTPS only for ain.mydomain.ru. How do I do that?
Caddy serves http traffic only if you prefix the domain with http scheme. Here is the modified Caddyfile: sentry.mydomain.ru { reverse_proxy sentry:9000 } tasks.mydomain.ru { reverse_proxy taiga-proxy:80 } http://ain.mydomain.ru { reverse_proxy ain-frontend:80 } Reference: https://caddy.community/t/is-there-any-way-to-disable-tls-from-the-caddyfile/8372/2
Caddy
62,896,495
16
I'm using systemd to start a caddy webserver on an ubuntu 16.04 machine. Whenever I run sudo service caddy start and service caddy status, I get this error: ● caddy.service - Caddy webserver Loaded: loaded (/etc/systemd/system/caddy.service; enabled; vendor preset: enabled) Active: failed (Result: exit-code) since Mon 2016-08-29 05:03:02 EDT; 4s ago Docs: https://caddyserver.com/ Process: 1135 ExecStart=/usr/local/bin/caddy -agree -email me@example -pidfile=/var/run/caddy/caddy.pid (code=exited, status Main PID: 1135 (code=exited, status=1/FAILURE) systemd[1]: Started Caddy webserver. caddy[1135]: Activating privacy features... done. caddy[1135]: 2016/08/29 05:03:02 Caddyfile:12 - Parse error: unknown property 'errors' systemd[1]: caddy.service: Main process exited, code=exited, status=1/FAILURE systemd[1]: caddy.service: Unit entered failed state. systemd[1]: caddy.service: Failed with result 'exit-code'.
In my /etc/systemd/system/caddy.service file, I had the following line: Restart=on-failure Commenting that out (with # or ;) and restarting the service showed the underlying problem, which was in my Caddyfile. EDIT: service caddy status only prints a few lines from the log, so sometimes you can find the underlying problem by simply looking at the full log. If using syslog, this is done with: journalctl -u caddy
Caddy
39,202,644
11
"By default, Caddy will bind to ports 80 and 443 to serve HTTPS and redirect HTTP to HTTPS." (https://caddyserver.com/docs/automatic-https) How can we change this port? Background: In our setup, Caddy runs behind an AWS load balancer which forwards requests from port 443 to port 4443. Therefore, we would like to have Caddy listen on 4443. (We use the DNS challenge.)
According to the documentation: The first line of the Caddyfile is always the address of the site to serve. In your Caddyfile: <domain>:<port> Example: localhost:8080
Caddy
51,209,710
11
I wanted to try out Caddy in a docker environment but it does not seem to be able to connect to other containers. I created a network "caddy" and want to run a portainer alongside it. If I go into the volume of caddy, I can see, that there are certs generated, so that seems to work. Also portainer is running and accessible via the Server IP (http://65.21.139.246:1000/). But when I access via the url: https://smallhetzi.fading-flame.com/ I get a 502 and in the log of caddy I can see this message: { "level": "error", "ts": 1629873106.715402, "logger": "http.log.error", "msg": "dial tcp 172.20.0.2:1000: connect: connection refused", "request": { "remote_addr": "89.247.255.231:15146", "proto": "HTTP/2.0", "method": "GET", "host": "smallhetzi.fading-flame.com", "uri": "/", "headers": { "Accept-Encoding": [ "gzip, deflate, br" ], "Accept-Language": [ "de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7" ], "Cache-Control": [ "max-age=0" ], "User-Agent": [ "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" ], "Sec-Fetch-Site": [ "none" ], "Accept": [ "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9" ], "Sec-Fetch-Mode": [ "navigate" ], "Sec-Fetch-User": [ "?1" ], "Sec-Fetch-Dest": [ "document" ], "Sec-Ch-Ua": [ "\"Chromium\";v=\"92\", \" Not A;Brand\";v=\"99\", \"Google Chrome\";v=\"92\"" ], "Sec-Ch-Ua-Mobile": [ "?0" ], "Upgrade-Insecure-Requests": [ "1" ] }, "tls": { "resumed": false, "version": 772, "cipher_suite": 4865, "proto": "h2", "proto_mutual": true, "server_name": "smallhetzi.fading-flame.com" } }, "duration": 0.000580828, "status": 502, "err_id": "pq78d9hen", "err_trace": "reverseproxy.statusError (reverseproxy.go:857)" } But two compose files: Caddy: version: '3.9' services: caddy: image: caddy:2-alpine container_name: caddy restart: unless-stopped ports: - "80:80" - "443:443" volumes: - ./Caddyfile:/etc/caddy/Caddyfile - certs-volume:/data - caddy_config:/config volumes: certs-volume: caddy_config: networks: default: external: name: caddy Caddyfile: { email [email protected] # acme_ca https://acme-staging-v02.api.letsencrypt.org/directory } smallhetzi.fading-flame.com { reverse_proxy portainer:1000 } and my portainer file: version: '3.9' services: portainer: image: portainer/portainer-ce container_name: portainer restart: always volumes: - /var/run/docker.sock:/var/run/docker.sock - portainer_data:/data portainer/portainer entrypoint: /portainer -p :80 ports: - "1000:80" volumes: portainer_data: networks: default: external: name: caddy What I think happens is, that those two containers are somehow not in the same network, but I dont get why. What works as a workaround right now is, when i make this change to my Caddyfile: smallhetzi.fading-flame.com { reverse_proxy 65.21.139.246:1000 } Then I get a valid certificate and the portainer ui. But i would rather not spread the IPs over my Caddyfile. Do I have to configure something else for caddy to run in docker?
I just got help from the forum and it turns out, that caddy redirects to the port INSIDE the container, not the public one. In my case, portainer runs on 80 internally, so changing the Caddyfile to this: smallhetzi.fading-flame.com { reverse_proxy portainer:80 } or this smallhetzi.fading-flame.com { reverse_proxy http://portainer } does the job. This also means, that I could get rid of exposing portainer directly over the port 1000. Now I can only access it via the proxy. Hope someone gets some help from that :)
Caddy
68,918,079
11
I'm recently using gRPC with proto3, and I've noticed that required and optional has been removed in new syntax. Would anyone kindly explain why required/optional are removed in proto3? Such kind of constraints just seem necessary to make definition robust. syntax proto2: message SearchRequest { required string query = 1; optional int32 page_number = 2; optional int32 result_per_page = 3; } syntax proto3: syntax = "proto3"; message SearchRequest { string query = 1; int32 page_number = 2; int32 result_per_page = 3; }
The usefulness of required has been at the heart of many a debate and flame war. Large camps have existed on both sides. One camp liked guaranteeing a value was present and was willing to live with its limitations but the other camp felt required dangerous or unhelpful as it can't be safely added nor removed. Let me explain more of the reasoning why required fields should be used sparingly. If you are already using a proto, you can't add a required field because old application's won't be providing that field and applications in general don't handle the failure well. You can make sure that all old applications are upgraded first, but it can be easy to make a mistake and it doesn't help if you are storing the protos in any datastore (even short-lived, like memcached). The same sort of situation applies when removing a required field. Many required fields were "obviously" required until... they weren't. Let's say you have an id field for a Get method. That is obviously required. Except, later you might need to change the id from int to string, or int32 to int64. That requires adding a new muchBetterId field, and now you are left with the old id field that must be specified, but eventually is completely ignored. When those two problems are combined, the number of beneficial required fields becomes limited and the camps argue over whether it still has value. The opponents of required weren't necessarily against the idea, but its current form. Some suggested developing a more expressive validation library that could check required along with something more advanced like name.length > 10, while also making sure to have a better failure model. Proto3 overall seems to favor simplicity, and required removal is simpler. But maybe more convincing, removing required made sense for proto3 when combined with other features, like removal of field presence for primitives and removal of overriding default values. I'm not a protobuf developer and am in no way authoritative on the subject, but I still hope that the explanation is useful.
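To make the add/remove hazard concrete, here is a hedged proto2 sketch of the id example described above (the field names are invented for illustration):
// proto2: once this ships, `id` can never safely be removed or have its type changed,
// because every old writer must keep sending it and every old reader insists on it.
syntax = "proto2";

message GetRequest {
  required int32 id = 1;               // "obviously" required ... until it needs to become a string
  optional string much_better_id = 2;  // the eventual replacement; `id` lingers as dead weight
}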
gRPC
31,801,257
387
Does the rpc syntax in proto3 allow null requests or responses? e.g. I want the equivalent of the following: rpc Logout; rpc Status returns (Status); rpc Log (LogData); Or should I just create a null type? message Null {}; rpc Logout (Null) returns (Null); rpc Status (Null) returns (Status); rpc Log (LogData) returns (Null);
Kenton's comment below is sound advice: ... we as developers are really bad at guessing what we might want in the future. So I recommend being safe by always defining custom params and results types for every method, even if they are empty. Answering my own question: Looking through the default proto files, I came across Empty that is exactly like the Null type I suggested above :) excerpt from that file: // A generic empty message that you can re-use to avoid defining duplicated // empty messages in your APIs. A typical example is to use it as the request // or the response type of an API method. For instance: // // service Foo { // rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); // } // message Empty { }
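As a hedged illustration on the Go side: the well-known Empty type lives in google.golang.org/protobuf/types/known/emptypb, so calling a method like the Bar example above looks roughly like this (pb.FooClient is an assumed generated client, not code from this answer):

import (
	"context"

	"google.golang.org/protobuf/types/known/emptypb"
)

func callBar(ctx context.Context, client pb.FooClient) error {
	// Both the request and the response are google.protobuf.Empty.
	_, err := client.Bar(ctx, &emptypb.Empty{})
	return err
}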
gRPC
31,768,665
236
I try to understand protobuf and gRPC and how I can use both. Could you help me understand the following: Considering the OSI model what is where, for example is Protobuf at layer 4? Thinking through a message transfer how is the "flow", what is gRPC doing what protobuf misses? If the sender uses protobuf can the server use gRPC or does gRPC add something which only a gRPC client can deliver? If gRPC can make synchronous and asynchronous communication possible, Protobuf is just for the marshalling and therefore does not have anything to do with state - true or false? Can I use gRPC in a frontend application communicating instead of REST or GraphQL? I already know - or assume I do - that: Protobuf Binary protocol for data interchange Designed by Google Uses generated "Struct" like description at client and server to un-/-marshall message gRPC Uses protobuf (v3) Again from Google Framework for RPC calls Makes use of HTTP/2 as well Synchronous and asynchronous communication possible I again assume its an easy question for someone already using the technology. I still would thank you to be patient with me and help me out. I would also be really thankful for any network deep dive of the technologies.
Protocol buffers is (are?) an Interface Definition Language and serialization library: You define your data structures in its IDL i.e. describe the data objects you want to use It provides routines to translate your data objects to and from binary, e.g. for writing/reading data from disk gRPC uses the same IDL but adds syntax "rpc" which lets you define Remote Procedure Call method signatures using the Protobuf data structures as data types: You define your data structures You add your rpc method definitions It provides code to serve up and call the method signatures over a network You can still serialize the data objects manually with Protobuf if you need to In answer to the questions: gRPC works at layers 5, 6 and 7. Protobuf works at layer 6. When you say "message transfer", Protobuf is not concerned with the transfer itself. It only works at either end of any data transfer, turning bytes into objects Using gRPC by default means you are using Protobuf. You could write your own client that uses Protobuf but not gRPC to interoperate with gRPC, or plugin other serializers to gRPC - but using gRPC would be easier True Yes you can
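A rough Go sketch of that separation, assuming pb.HelloRequest is any message type generated by protoc (the name is just a stand-in): Protocol Buffers alone covers serialization, and gRPC is only needed once you want to call rpc methods over the network.

import "google.golang.org/protobuf/proto"

// Pure serialization: no gRPC, no network involved.
func roundTrip(msg *pb.HelloRequest) (*pb.HelloRequest, error) {
	raw, err := proto.Marshal(msg) // object -> bytes (layer 6 work)
	if err != nil {
		return nil, err
	}
	out := &pb.HelloRequest{}
	if err := proto.Unmarshal(raw, out); err != nil { // bytes -> object
		return nil, err
	}
	return out, nil
}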
gRPC
48,330,261
218
The goal is to introduce a transport and application layer protocol that is better in its latency and network throughput. Currently, the application uses REST with HTTP/1.1 and we experience a high latency. I need to resolve this latency problem and I am open to use either gRPC(HTTP/2) or REST/HTTP2. HTTP/2: Multiplexed Single TCP Connection Binary instead of textual Header compression Server Push I am aware of all the above advantages. Question No. 1: If I use REST with HTTP/2, I am sure, I will get a significant performance improvement when compared to REST with HTTP/1.1, but how does this compare with gRPC(HTTP/2)? I am also aware that gRPC uses proto buffer, which is the best binary serialization technique for transmission of structured data on the wire. Proto buffer also helps in developing an language agnostic approach. I agree with that and I can implement the same feature in REST using graphQL. But my concern is over serialization: Question No. 2: When HTTP/2 implements this binary feature, does using proto buffer give an added advantage on top of HTTP/2? Question No. 3: In terms of streaming, bi-directional use-cases, how does gRPC(HTTP/2) compare with (REST and HTTP/2)? There are so many blogs/videos out in the internet that compares gRPC(HTTP/2) with (REST and HTTP/1.1) like this. As stated earlier, I would like to know the differences, benefits on comparing GRPC(HTTP/2) and (REST with HTTP/2).
gRPC is not faster than REST over HTTP/2 by default, but it gives you the tools to make it faster. There are some things that would be difficult or impossible to do with REST. Selective message compression. In gRPC a streaming RPC can decide to compress or not compress messages. For example, if you are streaming mixed text and images over a single stream (or really any mixed compressible content), you can turn off compression for the images. This saves you from compressing already compressed data which won't get any smaller, but will burn up your CPU. First class load balancing. While not an improvement in point to point connections, gRPC can intelligently pick which backend to send traffic to. (this is a library feature, not a wire protocol feature). This means you can send your requests to the least loaded backend server without resorting to using a proxy. This is a latency win. Heavily optimized. gRPC (the library) is under continuous benchmarks to ensure that there are no speed regressions. Those benchmarks are improving constantly. Again, this doesn't have anything to do with gRPC the protocol, but your program will be faster for having used gRPC. As nfirvine said, you will see most of your performance improvement just from using Protobuf. While you could use proto with REST, it is very nicely integrated with gRPC. Technically, you could use JSON with gRPC, but most people don't want to pay the performance cost after getting used to protos.
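To make the selective-compression point a bit more concrete, here is a hedged Go sketch: gRPC-Go lets the caller pick a compressor per call via a CallOption (finer per-message control inside one stream depends on the implementation; grpc-java exposes it separately). The client and message types below are placeholders, not a real API:

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/encoding/gzip" // importing this registers the "gzip" compressor
)

// Compress this particular call; other calls on the same channel can skip compression,
// e.g. when they carry already-compressed image bytes.
func sendText(ctx context.Context, client pb.UploaderClient, req *pb.TextBlob) error {
	_, err := client.SendText(ctx, req, grpc.UseCompressor(gzip.Name))
	return err
}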
gRPC
44,877,606
155
I'm reading this explanation of GRPC and this diagram is of interest: How does the transport layer work? If it's over the network... why is it called an RPC? More importantly, how is this different from REST that implements an API for the service-layer (the class in the client that has methods that make a http request)?
The transport layer works using HTTP/2 on top of TCP/IP. It allows for lower-latency (faster) connections that can take advantage of a single connection from client to server (which makes more efficient use of the connection and can result in more efficient use of server resources). HTTP/2 also supports bidirectional and asynchronous connectivity, so it is possible for the server to efficiently contact the client to send messages (async responses/notifications, etc.). While both REST and gRPC can generate client/server stubs (using something like Swagger for REST), REST has a limited set of primary 'function' calls (or verbs):

+-----------+----------------+
| HTTP Verb | CRUD           |
+-----------+----------------+
| POST      | Create         |
| GET       | Read           |
| PUT       | Update/Replace |
| PATCH     | Update/Modify  |
| DELETE    | Delete         |
+-----------+----------------+

whereas with gRPC you can define any kind of function call, including synchronous/asynchronous and uni-directional/bidirectional (streams), etc. Using gRPC, the client makes a call to a local method. To the programmer, it looks like you're making a local call, but the underlying layer (the auto-generated client stub) sends the call to the server. To the server it looks like its method was called locally. gRPC takes care of all the underlying plumbing and simplifies the programming paradigm. However, to some dedicated REST purists, this may seem like an over-complication. YMMV
gRPC
43,682,366
114
I'd like to test a gRPC service written in Go. The example I'm using is the Hello World server example from the grpc-go repo. The protobuf definition is as follows: syntax = "proto3"; package helloworld; // The greeting service definition. service Greeter { // Sends a greeting rpc SayHello (HelloRequest) returns (HelloReply) {} } // The request message containing the user's name. message HelloRequest { string name = 1; } // The response message containing the greetings message HelloReply { string message = 1; } And the type in the greeter_server main is: // server is used to implement helloworld.GreeterServer. type server struct{} // SayHello implements helloworld.GreeterServer func (s *server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) { return &pb.HelloReply{Message: "Hello " + in.Name}, nil } I've looked for examples but I couldn't find any on how to implement tests for a gRPC service in Go.
I think you're looking for the google.golang.org/grpc/test/bufconn package to help you avoid starting up a service with a real port number, but still allowing testing of streaming RPCs.

import "google.golang.org/grpc/test/bufconn"

const bufSize = 1024 * 1024

var lis *bufconn.Listener

func init() {
	lis = bufconn.Listen(bufSize)
	s := grpc.NewServer()
	pb.RegisterGreeterServer(s, &server{})
	go func() {
		if err := s.Serve(lis); err != nil {
			log.Fatalf("Server exited with error: %v", err)
		}
	}()
}

func bufDialer(context.Context, string) (net.Conn, error) {
	return lis.Dial()
}

func TestSayHello(t *testing.T) {
	ctx := context.Background()
	conn, err := grpc.DialContext(ctx, "bufnet", grpc.WithContextDialer(bufDialer), grpc.WithInsecure())
	if err != nil {
		t.Fatalf("Failed to dial bufnet: %v", err)
	}
	defer conn.Close()
	client := pb.NewGreeterClient(conn)
	resp, err := client.SayHello(ctx, &pb.HelloRequest{Name: "Dr. Seuss"})
	if err != nil {
		t.Fatalf("SayHello failed: %v", err)
	}
	log.Printf("Response: %+v", resp)
	// Test for output here.
}

The benefit of this approach is that you're still getting network behavior, but over an in-memory connection without using OS-level resources like ports that may or may not clean up quickly. And it allows you to test it the way it's actually used, and it gives you proper streaming behavior. I don't have a streaming example off the top of my head, but the magic sauce is all above. It gives you all of the expected behaviors of a normal network connection. The trick is setting the WithContextDialer option as shown, using the bufconn package to create a listener that exposes its own dialer. I use this technique all the time for testing gRPC services and it works great.
gRPC
42,102,496
99
I am trying to build a sample application with Go gRPC, but I am unable to generate the code using "protoc" I have installed the required libraries and Go packages using: go get -u google.golang.org/grpc go get -u github.com/golang/protobuf/protoc-gen-go I have tried setting the path as well, but no luck. Sample "proto" file: syntax = "proto3"; package greet; option go_package="greetpb"; service GreetService{} Error message: "protoc-gen-go: program not found or is not executable --go_out: protoc-gen-go: Plugin failed with status code 1."
Go 1.17+ From https://go.dev/doc/go-get-install-deprecation Starting in Go 1.17, installing executables with go get is deprecated. go install may be used instead. ~/.bashrc export GOPATH=$HOME/go export PATH=$PATH:$GOPATH/bin Install go install google.golang.org/protobuf/cmd/protoc-gen-go@latest go: downloading google.golang.org/protobuf v1.27.1 go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest go: downloading google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.2.0 go: downloading google.golang.org/grpc v1.44.0 file.go protoc --go-grpc_out=. *.proto Environment Apple M1 Pro Go version go1.17.8 darwin/arm64
gRPC
57,700,860
96
We want to build a Javascript/HTML gui for our gRPC-microservices. Since gRPC is not supported on the browser side, we thought of using web-sockets to connect to a node.js server, which calls the target service via grpc. We struggle to find an elegant solution to do this. Especially, since we use gRPC streams to push events between our micro-services. It seems that we need a second RPC system, just to communicate between the front end and the node.js server. This seems to be a lot of overhead and additional code that must be maintained. Does anyone have experience doing something like this or has an idea how this could be solved?
Edit: Since Oct 23,2018 the gRPC-Web project is Generally Available, which might be the most official/standardized way to solve your problem. (Even if it's already 2018 now... ;) ) From the GA-Blog: "gRPC-Web, just like gRPC, lets you define the service “contract” between client (web) and backend gRPC services using Protocol Buffers. The client can then be auto generated. [...]" We recently built gRPC-Web (https://github.com/improbable-eng/grpc-web) - a browser client and server wrapper that follows the proposed gRPC-Web protocol. The example in that repo should provide a good starting point. It requires either a standalone proxy or a wrapper for your gRPC server if you're using Golang. The proxy/wrapper modifies the response to package the trailers in the response body so that they can be read by the browser. Disclosure: I'm a maintainer of the project.
gRPC
35,065,875
85
I am getting error while installing grpcio using pip install grpcio on my windows machine.I read here - https://github.com/grpc/grpc/issues/17829 that it may be due to error in a version of setuptools. I upgraded my setuptools to the latest version i.e. 41.0.1 . Still getting the same build error. Its not happening for any other package. I have tried reinstalling pip and python both on my laptop. I'm attaching my error Building wheels for collected packages: grpcio Building wheel for grpcio (setup.py) ... error ERROR: Complete output from command 'c:\python27\python.exe' -u -c 'import setuptools, tokenize;__file__='"'"'c:\\users\\s\\appdata\\local\\temp\\pip-install-ge5zhq\\grpcio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'c:\users\s\appdata\local\temp\pip-wheel-txjhlh' --python-tag cp27: ERROR: Found cython-generated files... running bdist_wheel running build running build_py running build_project_metadata creating python_build creating python_build\lib.win-amd64-2.7 creating python_build\lib.win-amd64-2.7\grpc copying src\python\grpcio\grpc\_auth.py -> python_build\lib.win-amd64-2.7\grpc copying src\python\grpcio\grpc\_channel.py -> python_build\lib.win-amd64-2.7\grpc copying src\python\grpcio\grpc\_common.py -> python_build\lib.win-amd64-2.7\grpc copying src\python\grpcio\grpc\_compression.py -> python_build\lib.win-amd64-2.7\grpc copying src\python\grpcio\grpc\_grpcio_metadata.py -> python_build\lib.win-amd64-2.7\grpc copying src\python\grpcio\grpc\_interceptor.py -> python_build\lib.win-amd64-2.7\grpc copying src\python\grpcio\grpc\_plugin_wrapping.py -> python_build\lib.win-amd64-2.7\grpc copying src\python\grpcio\grpc\_server.py -> python_build\lib.win-amd64-2.7\grpc copying src\python\grpcio\grpc\_utilities.py -> python_build\lib.win-amd64-2.7\grpc copying src\python\grpcio\grpc\__init__.py -> python_build\lib.win-amd64-2.7\grpc creating python_build\lib.win-amd64-2.7\grpc\beta copying src\python\grpcio\grpc\beta\implementations.py -> python_build\lib.win-amd64-2.7\grpc\beta copying src\python\grpcio\grpc\beta\interfaces.py -> python_build\lib.win-amd64-2.7\grpc\beta copying src\python\grpcio\grpc\beta\utilities.py -> python_build\lib.win-amd64-2.7\grpc\beta copying src\python\grpcio\grpc\beta\_client_adaptations.py -> python_build\lib.win-amd64-2.7\grpc\beta copying src\python\grpcio\grpc\beta\_metadata.py -> python_build\lib.win-amd64-2.7\grpc\beta copying src\python\grpcio\grpc\beta\_server_adaptations.py -> python_build\lib.win-amd64-2.7\grpc\beta copying src\python\grpcio\grpc\beta\__init__.py -> python_build\lib.win-amd64-2.7\grpc\beta creating python_build\lib.win-amd64-2.7\grpc\experimental copying src\python\grpcio\grpc\experimental\gevent.py -> python_build\lib.win-amd64-2.7\grpc\experimental copying src\python\grpcio\grpc\experimental\session_cache.py -> python_build\lib.win-amd64-2.7\grpc\experimental copying src\python\grpcio\grpc\experimental\__init__.py -> python_build\lib.win-amd64-2.7\grpc\experimental creating python_build\lib.win-amd64-2.7\grpc\framework copying src\python\grpcio\grpc\frame`enter code here`work\__init__.py -> python_build\lib.win-amd64-2.7\grpc\framework creating python_build\lib.win-amd64-2.7\grpc\_cython copying src\python\grpcio\grpc\_cython\__init__.py -> python_build\lib.win-amd64-2.7\grpc\_cython creating python_build\lib.win-amd64-2.7\grpc\framework\common copying 
src\python\grpcio\grpc\framework\common\cardinality.py -> python_build\lib.win-amd64-2.7\grpc\framework\common copying src\python\grpcio\grpc\framework\common\style.py -> python_build\lib.win-amd64-2.7\grpc\framework\common copying src\python\grpcio\grpc\framework\common\__init__.py -> python_build\lib.win-amd64-2.7\grpc\framework\common creating python_build\lib.win-amd64-2.7\grpc\framework\foundation copying src\python\grpcio\grpc\framework\foundation\abandonment.py -> python_build\lib.win-amd64-2.7\grpc\framework\foundation copying src\python\grpcio\grpc\framework\foundation\callable_util.py -> python_build\lib.win-amd64-2.7\grpc\framework\foundation copying src\python\grpcio\grpc\framework\foundation\future.py -> python_build\lib.win-amd64-2.7\grpc\framework\foundation copying src\python\grpcio\grpc\framework\foundation\logging_pool.py -> python_build\lib.win-amd64-2.7\grpc\framework\foundation copying src\python\grpcio\grpc\framework\foundation\stream.py -> python_build\lib.win-amd64-2.7\grpc\framework\foundation copying src\python\grpcio\grpc\framework\foundation\stream_util.py -> python_build\lib.win-amd64-2.7\grpc\framework\foundation copying src\python\grpcio\grpc\framework\foundation\__init__.py -> python_build\lib.win-amd64-2.7\grpc\framework\foundation creating python_build\lib.win-amd64-2.7\grpc\framework\interfaces copying src\python\grpcio\grpc\framework\interfaces\__init__.py -> python_build\lib.win-amd64-2.7\grpc\framework\interfaces creating python_build\lib.win-amd64-2.7\grpc\framework\interfaces\base copying src\python\grpcio\grpc\framework\interfaces\base\base.py -> python_build\lib.win-amd64-2.7\grpc\framework\interfaces\base copying src\python\grpcio\grpc\framework\interfaces\base\utilities.py -> python_build\lib.win-amd64-2.7\grpc\framework\interfaces\base copying src\python\grpcio\grpc\framework\interfaces\base\__init__.py -> python_build\lib.win-amd64-2.7\grpc\framework\interfaces\base creating python_build\lib.win-amd64-2.7\grpc\framework\interfaces\face copying src\python\grpcio\grpc\framework\interfaces\face\face.py -> python_build\lib.win-amd64-2.7\grpc\framework\interfaces\face copying src\python\grpcio\grpc\framework\interfaces\face\utilities.py -> python_build\lib.win-amd64-2.7\grpc\framework\interfaces\face copying src\python\grpcio\grpc\framework\interfaces\face\__init__.py -> python_build\lib.win-amd64-2.7\grpc\framework\interfaces\face creating python_build\lib.win-amd64-2.7\grpc\_cython\_cygrpc copying src\python\grpcio\grpc\_cython\_cygrpc\__init__.py -> python_build\lib.win-amd64-2.7\grpc\_cython\_cygrpc creating python_build\lib.win-amd64-2.7\grpc\_cython\_credentials copying src\python\grpcio\grpc\_cython\_credentials\roots.pem -> python_build\lib.win-amd64-2.7\grpc\_cython\_credentials running build_ext error: [Error 2] The system cannot find the file specified ---------------------------------------- ERROR: Failed building wheel for grpcio Running setup.py clean for grpcio Failed to build grpcio
First, upgrade pip:

pip3 install --upgrade pip

Then, update setuptools:

python3 -m pip install --upgrade setuptools

Finally, install grpcio using:

pip3 install --no-cache-dir --force-reinstall -Iv grpcio==<version_number>
gRPC
56,357,794
79
What is the pattern for sending more details about errors to the client using gRPC? For example, suppose I have a form for registering a user, that sends a message message RegisterUser { string email = 1; string password = 2; } where the email has to be properly formatted and unique, and the password must be at least 8 characters long. If I was writing a JSON API, I'd return a 400 error with the following body: { "errors": [{ "field": "email", "message": "Email does not have proper format." }, { "field": "password", "message": "Password must be at least 8 characters." }], } and the client could provide nice error messages to the user (i.e. by highlighting the password field and specifically telling the user that there's something wrong with their input to it). With gRPC is there a way to do something similar? It seems that in most client languages, an error results in an exception being thrown, with no way to grab the response. For example, I'd like something like message ValidationError { string field = 1; string message = 2; } message RegisterUserResponse { repeated ValidationError validation_errors = 1; ... } or similar.
Include additional error details in the response Metadata. However, still make sure to provide a useful status code and message. In this case, you can add RegisterUserResponse to the Metadata. In gRPC Java, that would look like: Metadata.Key<RegisterUserResponse> REGISTER_USER_RESPONSE_KEY = ProtoUtils.keyForProto(RegisterUserResponse.getDefaultInstance()); ... Metadata metadata = new Metadata(); metadata.put(REGISTER_USER_RESPONSE_KEY, registerUserResponse); responseObserver.onError( Status.INVALID_ARGUMENT.withDescription("Email or password malformed") .asRuntimeException(metadata)); Another option is to use the google.rpc.Status proto which includes an additional Any for details. Support is coming to each language to handle the type. In Java, it'd look like: // This is com.google.rpc.Status, not io.grpc.Status Status status = Status.newBuilder() .setCode(Code.INVALID_ARGUMENT.getNumber()) .setMessage("Email or password malformed") .addDetails(Any.pack(registerUserResponse)) .build(); responseObserver.onError(StatusProto.toStatusRuntimeException(status)); google.rpc.Status is cleaner in some languages as the error details can be passed around as one unit. It also makes it clear what parts of the response are error-related. On-the-wire, it still uses Metadata to pass the additional information. You may also be interested in error_details.proto which contains some common types of errors. I discussed this topic during CloudNativeCon. You can check out the slides and linked recording on YouTube.
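For the Go side of the same google.rpc.Status pattern, here is a hedged sketch; the BadRequest detail type comes from the error_details.proto mentioned above, and everything else is illustrative:

import (
	"google.golang.org/genproto/googleapis/rpc/errdetails"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Server side: attach structured details to the returned status.
func malformedEmailErr() error {
	st := status.New(codes.InvalidArgument, "email or password malformed")
	st, err := st.WithDetails(&errdetails.BadRequest{
		FieldViolations: []*errdetails.BadRequest_FieldViolation{
			{Field: "email", Description: "Email does not have proper format."},
		},
	})
	if err != nil {
		// WithDetails only fails if a detail message cannot be marshalled.
		return status.Error(codes.InvalidArgument, "email or password malformed")
	}
	return st.Err()
}

// Client side: pull the details back out of the error.
func fieldViolations(err error) []*errdetails.BadRequest_FieldViolation {
	st, ok := status.FromError(err)
	if !ok {
		return nil
	}
	var out []*errdetails.BadRequest_FieldViolation
	for _, d := range st.Details() {
		if br, ok := d.(*errdetails.BadRequest); ok {
			out = append(out, br.FieldViolations...)
		}
	}
	return out
}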
gRPC
48,748,745
58
go version: go version go1.14 linux/amd64 go.mod module [redacted] go 1.14 require ( github.com/golang/protobuf v1.4.0-rc.2 google.golang.org/grpc v1.27.1 google.golang.org/protobuf v1.20.0 // indirect ) I am running the following command: protoc -I ./src/pbdefs/protos/ --go-grpc_out=. src/pbdefs/protos/*.proto to generate my GRPC output files from .proto files, with I am getting an error protoc-gen-go-grpc: program not found or is not executable Please specify a program using absolute path or make sure the program is available in your PATH system variable --go-grpc_out: protoc-gen-go-grpc: Plugin failed with status code 1.
The missing plugin has been implemented at https://github.com/grpc/grpc-go. The command below should fix it:

go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest
gRPC
60,578,892
49
I am attempting to import one protocol buffer message into another, but the imports are not recognized. As long as I don't try to import one protobuf into another, the protobuf code is generated (in java), the code compiles and runs as expected. I'm using: Intellij Idea 2020 v1.3 Unlimited Edition Protobuf Editor plugin: jvolkman/intellij-protobuf-editor (April 2020) Gradle My gradle build file looks like this: plugins { id 'java' id 'com.google.protobuf' version "0.8.8" } group 'tech.tablesaw' version '1.0-SNAPSHOT' sourceCompatibility = 9.0 def grpcVersion = '1.30.1' // CURRENT_GRPC_VERSION def protobufVersion = '3.12.0' def protocVersion = protobufVersion repositories { mavenCentral() } test { useJUnitPlatform() } dependencies { implementation "io.grpc:grpc-protobuf:${grpcVersion}" implementation "io.grpc:grpc-stub:${grpcVersion}" compileOnly "org.apache.tomcat:annotations-api:6.0.53" // advanced - need this for JsonFormat implementation "com.google.protobuf:protobuf-java-util:${protobufVersion}" runtimeOnly "io.grpc:grpc-netty-shaded:${grpcVersion}" testImplementation "io.grpc:grpc-testing:${grpcVersion}" compile group: 'tech.tablesaw', name: 'tablesaw-core', version: '0.38.1' testCompile group: 'org.junit.jupiter', name: 'junit-jupiter-engine', version: '5.6.2' testImplementation "org.mockito:mockito-core:2.28.2" } protobuf { protoc { artifact = "com.google.protobuf:protoc:${protocVersion}" } plugins { grpc { artifact = "io.grpc:protoc-gen-grpc-java:${grpcVersion}" } } generateProtoTasks { all()*.plugins { grpc {} } } } // Inform IDEs like IntelliJ IDEA, Eclipse or NetBeans about the generated code. sourceSets { main { java { srcDirs 'build/generated/source/proto/main/grpc' srcDirs 'build/generated/source/proto/main/java' } } } task TablesawServer(type: CreateStartScripts) { mainClassName = 'tech.tablesaw.service.TableServiceServer' applicationName = 'tablesaw-table-server' outputDir = new File(project.buildDir, 'tmp') } task TablesawClient(type: CreateStartScripts) { mainClassName = 'tech.tablesaw.service.TableServiceClient' applicationName = 'tablesaw-table-client' outputDir = new File(project.buildDir, 'tmp') } and my gradle info looks like this: ------------------------------------------------------------ Gradle 5.1.1 ------------------------------------------------------------ Build time: 2019-01-10 23:05:02 UTC Revision: 3c9abb645fb83932c44e8610642393ad62116807 Kotlin DSL: 1.1.1 Kotlin: 1.3.11 Groovy: 2.5.4 Ant: Apache Ant(TM) version 1.9.13 compiled on July 10 2018 JVM: 9.0.4 (Oracle Corporation 9.0.4+11) OS: Mac OS X 10.13.5 x86_64 Here is an example protobuf. the import of the column_type.proto fails. syntax = "proto3"; package tech.tablesaw.service.common; import "tech/tablesaw/service/common/column_type.proto"; option java_multiple_files = true; option java_package = "tech.tablesaw.service.common"; option java_outer_classname = "ColumnMetaProto"; option objc_class_prefix = "TSW"; // Proto file describing column metadata message. 
// A column metadata object message ColumnMetadata { string name = 1; int32 size = 2; ColumnTypeEnum.ColumnType column_type = 3; } And here is the file i'm trying to import: syntax = "proto3"; package tech.tablesaw.service.common; option java_multiple_files = true; option java_package = "tech.tablesaw.service.common"; option java_outer_classname = "ColumnTypeEnum"; option objc_class_prefix = "TSW"; enum ColumnType { SHORT = 0; INTEGER = 1; LONG = 2; FLOAT = 3; BOOLEAN = 4; STRING = 5; DOUBLE = 6; LOCAL_DATE = 7; LOCAL_TIME = 8; LOCAL_DATE_TIME = 9; INSTANT = 10; TEXT = 11; SKIP = 12; } Finally, here's where the protobufs sit in the file system. src > main > java > proto > tech > tablesaw > service > common > column_metadata.proto > column_type.proto
Take a look at the readme which describes how to add additional paths. By default, intellij-protobuf-editor uses the project's configured source roots as protobuf import paths. If this isn't correct, you can override these paths in Settings > Languages & Frameworks > Protocol Buffers. Uncheck "Configure automatically" and add whichever paths you need. In your case, you'd add .../src/main/java/proto (where ... means whatever your project's base path is).
gRPC
62,837,953
44
I recently started reading and employing gRPC in my work. gRPC uses protocol-buffers internally as its IDL and I keep reading everywhere that protocol-buffers perform much better, faster as compared to JSON and XML. What I fail to understand is - how do they do that? What design in protocol-buffers actually makes them perform faster compared to XML and JSON?
String representations of data: require text encode/decode (which can be cheap, but is still an extra step) requires complex parse code, especially if there are human-friendly rules like "must allow whitespace" usually involves more bandwidth - so more actual payload to churn - due to embedding of things like names, and (again) having to deal with human-friendly representations (how to tokenize the syntax, for example) often requires lots of intermediate string instances that are used for member-lookups etc Both text-based and binary-based serializers can be fast and efficient (or slow and horrible)... just: binary serializers have the scales tipped in their advantage. This means that a "good" binary serializer will usually be faster than a "good" text-based serializer. Let's compare a basic example of an integer: json: {"id":42} 9 bytes if we assume ASCII or UTF-8 encoding and no whitespace. xml: <id>42</id> 11 bytes if we assume ASCII or UTF-8 encoding and no whitespace - and no namespace noise like namespaces. protobuf: 0x08 0x2a 2 bytes Now imagine writing a general purpose xml or json parser, and all the ambiguities and scenarios you need to handle just at the text layer, then you need to map the text token "id" to a member, then you need to do an integer parse on "42". In protobuf, the payload is smaller, plus the math is simple, and the member-lookup is an integer (so: suitable for a very fast switch/jump).
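If you want to see those two protobuf bytes for yourself, here is a small hedged Go sketch using the low-level protowire package; it hand-encodes "field 1 = 42" exactly as a generated message with the id field above would:

package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protowire"
)

func main() {
	// Field number 1 with wire type varint -> tag byte 0x08; value 42 -> 0x2a.
	b := protowire.AppendTag(nil, 1, protowire.VarintType)
	b = protowire.AppendVarint(b, 42)
	fmt.Printf("% x\n", b) // prints: 08 2a
}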
gRPC
52,146,721
40
According to documentation: deprecated (field option): If set to true, indicates that the field is deprecated and should not be used by new code. Example of use: message Foo { string old_field = 1 [deprecated=true]; } How we can deprecate the whole message?
You can set deprecated as a top level option on the message: message Foo { option deprecated = true; string old_field = 1; }
gRPC
52,781,727
36
I want to create a simple gRPC endpoint to which the user can upload his/her picture. The protocol buffer declaration is the following:

message UploadImageRequest {
    AuthToken auth = 1;

    // An enum with either JPG or PNG
    FileType image_format = 2;

    // Image file as bytes
    bytes image = 3;
}

Is this approach of uploading pictures (and receiving pictures) still OK regardless of the warning in the gRPC documentation? And if not, is the better (standard) approach to upload pictures using a standard form and store the image file location instead?
For large binary transfers, the standard approach is chunking. Chunking can serve two purposes: (1) reduce the maximum amount of memory required to process each message, and (2) provide a boundary for recovering partial uploads. For your use-case, #2 probably isn't very necessary. In gRPC, a client-streaming call allows for fairly natural chunking since it has flow control, pipelining, and is easy to maintain context in the client and server code. If you care about recovery of partial uploads, then bidirectional-streaming works well since the server can be responding with acknowledgements of progress that the client can use to resume. Chunking using individual RPCs is also possible, but has more complications. When load balancing, the backend may be required to coordinate with other backends for each chunk. If you upload the chunks serially, then the latency of the network can slow upload speed as you spend most of the time waiting to receive responses from the server. You then either have to upload in parallel (but how many in parallel?) or increase the chunk size. But increasing the chunk size increases the memory required to process each chunk and increases the granularity for recovering failed uploads. Parallel upload also requires the server to handle out-of-order uploads.
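A hedged Go sketch of the client-streaming option, assuming a service such as rpc UploadImage(stream UploadImageRequest) returns (UploadImageResponse) has been added to the proto from the question (the service name, response type, and chunk size are assumptions for illustration):

import "context"

const chunkSize = 64 * 1024 // 64 KiB per message keeps per-message memory bounded

func upload(ctx context.Context, client pb.ImageServiceClient, data []byte) error {
	stream, err := client.UploadImage(ctx)
	if err != nil {
		return err
	}
	for start := 0; start < len(data); start += chunkSize {
		end := start + chunkSize
		if end > len(data) {
			end = len(data)
		}
		// HTTP/2 flow control paces these sends for us.
		if err := stream.Send(&pb.UploadImageRequest{Image: data[start:end]}); err != nil {
			return err
		}
	}
	_, err = stream.CloseAndRecv() // one response once the server has all chunks
	return err
}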
gRPC
34,969,446
34
Good evening everyone, I have only been dealing with Java and Android Studio for a few months, can someone help me to solve this error? It occurs every time the emulator starts. Thank you Emulator: Started GRPC server at 127.0.0.1:8554 Emulator: emulator: WARNING: EmulatorService.cpp:448: Cannot find certfile: C:\Users\Sawye.android\emulator-grpc.cer security will be disabled.
A quick fix: From the main navbar menu Tools > Android > SDK Manager > Android SDK > SDK Tools You'll then see the screen below where you can select '- Android Emulator Hypervisor Driver for AMD Processors (installer) version 1.3.0' I am not sure what the actual root cause of the issue is, but this patched the issue for me and may help other people.
gRPC
60,306,645
33
I have a go grpc service. I'm developing on a mac, sierra. When running a grpc client against the service locally, all is well, but when running same client against same service in the docker container I get this error: transport: http2Client.notifyError got notified that the client transport was broken EOF. FATA[0000] rpc error: code = Internal desc = transport is closing this is my docker file: FROM golang:1.7.5 RUN mkdir -p /go/src/github.com/foo/bar WORKDIR /go/src/github.com/foo/bar COPY . /go/src/github.com/foo/bar # ONBUILD RUN go-wrapper download RUN go install ENTRYPOINT /go/bin/bar EXPOSE 51672 my command to build the image: docker build -t bar . my command to launch the docker container: docker run -p 51672:51672 --name bar-container bar Other info: client program runs fine from within the docker container connecting to a regular rest endpoint works fine (http2, grpc related?) running the lsof command in OS X yields these results $lsof -i | grep 51672 com.docke 984 oldDave 21u IPv4 0x72779547e3a32c89 0t0 TCP *:51672 (LISTEN) com.docke 984 oldDave 22u IPv6 0x72779547cc0fd161 0t0 TCP localhost:51672 (LISTEN) here's a snippet of my server code: server := &Server{} endpoint := "localhost:51672" lis, err := net.Listen("tcp", endpoint) if err != nil { log.Fatalf("failed to listen: %v", err) } s := grpc.NewServer(grpc.Creds(creds)) pb.RegisterExpServiceServer(s, server) // Register reflection service on gRPC server. reflection.Register(s) log.Info("Starting Exp server: ", endpoint) if err := s.Serve(lis); err != nil { log.Fatalf("failed to serve: %v", err) }
When you specify a hostname or IP address​ to listen on (in this case localhost which resolves to 127.0.0.1), then your server will only listen on that IP address. Listening on localhost isn't a problem when you are outside of a Docker container. If your server only listens on 127.0.0.1:51672, then your client can easily connect to it since the connection is also made from 127.0.0.1. When you run your server inside a Docker container, it'll only listen on 127.0.0.1:51672 as before. The 127.0.0.1 is a local loopback address and it not accessible outside the container. When you fire up the docker container with "-p 51672:51672", it'll forward traffic heading to 127.0.0.1:51672 to the container's IP address, which in my case is 172.17.0.2. The container gets an IP addresses within the docker0 network interface (which you can see with the "ip addr ls" command) So, when your traffic gets forwarded to the container on 172.17.0.2:51672, there's nothing listening there and the connection attempt fails. The fix: The problem is with the listen endpoint: endpoint := "localhost:51672" To fix your problem, change it to endpoint := ":51672" That'll make your server listen on all it container's IP addresses. Additional info: When you expose ports in a Docker container, Docker will create iptables rules to do the actual forwarding. See this. You can view these rules with: iptables -n -L iptables -t nat -n -L
gRPC
43,911,793
32
Let's consider a simple service: service Something { rpc Do(Request) returns Response; } message Request { string field = 1; } message Response { string response = 1; } Assume I have to do some checking on the Request.field, I want to raise a client error if the field is invalid: class MyService(proto_pb2.SomethingServicer): def Do(self, request, context): if not is_valid_field(request.field): raise ValueError("Damn!") # Or something like that return proto_pb2.Response(response="Yeah!") With the following client: channel = grpc.insecure_channel(...) stub = proto_pb2.SomethingStub(channel) try: response = stub.Do(proto_pb2.Request(field="invalid")) except grpc.RpcError as e: print(e) <_Rendezvous of RPC that terminated with (StatusCode.UNKNOWN, Exception calling application: Damn!)> So I can technically handle errors. My issue is... is there a better way? Is there a good way to change the message description? Can we change the status code?
Yes, there is a better way. You may change the status details using the ServicerContext.set_details method and you may change the status code using the ServicerContext.set_code method. I suspect that your servicer will look something like class MyService(proto_pb2.SomethingServicer): def Do(self, request, context): if not is_valid_field(request.field): context.set_code(grpc.StatusCode.INVALID_ARGUMENT) context.set_details('Consarnit!') return proto_pb2.Response() return proto_pb2.Response(response='Yeah!') .
gRPC
40,998,199
31
I have a reasonable experience in developing both SOAP and REST web services (in java platform). I am trying to understand the difference between the gRPC and CORBA in every aspect apart from the fact that both enables platform-neutral way of communication in distributed environment. where and how is the Goal/Purpose of these two concepts differ anyway?
gRPC and CORBA share very similar concepts and building blocks: a client/server architecture with an Interface Definition Language (IDL) to generate client Stubs and server Skeletons, a standard data interchange format, and bindings for multiple programming languages. CORBA uses the OMG's IDL for defining object interfaces and GIOP to standardize the message interchange format. gRPC uses the Protocol Buffers IDL for defining the message formats and rpc service interfaces. IIOP (a TCP/IP protocol) is the most common GIOP implementation used for CORBA, while gRPC has implemented its transport protocol on top of HTTP/2. One significant difference is the support for remote object references (or remote services for gRPC). While CORBA supports the notion of remote object references (e.g. you can pass a remote object reference in your service call), gRPC allows only data message structures as service call arguments. The transport protocol is often seen as an important distinction too! CORBA uses GIOP/IIOP - a TCP/IP based protocol - while gRPC uses HTTP/2 transport. The latter is considered friendlier to Internet infrastructure (e.g. firewalls, proxies ...).
gRPC
44,452,399
30
Hey, I'm trying to make a small test client with Go and gRPC:

opts := grpc.WithInsecure()
cc, err := grpc.Dial("localhost:9950", opts)
if err != nil {
	log.Fatal(err)
}

The WithInsecure() function call gives a warning:

grpc.WithInsecure is deprecated: use insecure.NewCredentials() instead.

I'm not sure how to use this new function; is there an example somewhere? Thanks
The function insecure.NewCredentials returns an implementation of credentials.TransportCredentials. You can use it as a DialOption with grpc.WithTransportCredentials: grpc.Dial(":9950", grpc.WithTransportCredentials(insecure.NewCredentials()))
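Putting it together with the imports, a minimal sketch of the client from the question looks roughly like this (error handling kept as in the original snippet):

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func dial() *grpc.ClientConn {
	cc, err := grpc.Dial("localhost:9950", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	return cc
}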
gRPC
70,482,508
29
I have been trying for 3 days by now to find how to install and use gRPC on windows with no luck. I am using Visual Studio 2015, Win7 64-bit. To be safe, I'll write step by step of what I am doing. It might not be necessary but I am a beginner with C++ and with VS so I am not at all sure I am doing it correctly: (following guide http://www.infopulse.com/blog/grpc-framework-by-google-tutorial/): Get gRPC from git, init submodules Get gmock and gtest for protobuf (not in the guide, but else it doesn't work) Run cmake on protobuf Build protobuf.sln in Visual Studio in Release mode (set for each part of the solution Property Manager > C/C++ > Code Generation > Runtime Library > /MDd) Copy Release/ folder to protobuf/cmake/ (instead of Debug/ as in intructions - that would give me libprotobufd.lib instead of libprotobuf.lib which is required) Build grpc/vsprojects/grpc_protoc_plugins.sln in VS also in Release mode and again set for each part of the solution Property Manager > C/C++ > Code Generation > Runtime Library > /MDd Copy protoc.exe from protobuf/cmake/Release to grpc/vsprojects/Release, which was created in previous step Build grpc/vsprojects/grpc.sln in VS in Debug mode (only the grpc++ part as I read somewhere and again set for each part of the solution Property Manager > C/C++ > Code Generation > Runtime Library > /MDd) So far things are going well. Generate c files from proto in example folder. I get helloworld.grpc.pb.cc, helloworld.grpc.pb.h, helloworld.pb.cc and helloworld.pb.h and move them all to grpc/examples/cpp/helloworld: protoc --grpc_out=./hello_proto --plugin=protoc-gen-grpc=grpc_cpp_plugin.exe ../../examples/protos/helloworld.proto --proto_path=../../examples/protos protoc --cpp_out=./hello_proto ../../examples/protos/helloworld.proto --proto_path=../../examples/protos I keep the grpc.sln open in VS and 'Add' > 'New Project' To the new project 'Add' > 'Existing Item' and add greeter_client.cc from grpc/examples/cpp Add dependencies as: https://github.com/grpc/grpc/issues/4707 , with Includes going in C/C++ > Additional Include Directories When I try to build my project errors are reported with not finding gflags, gtest, and libprotobuf. 
If I find it all and move them to an included folder, then I get these errors: 1>------ Build started: Project: greeter_client, Configuration: Debug Win32 ------ 1> greeter_client.cc 1>libprotobuf.lib(generated_message_util.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in greeter_client.obj 1>libprotobuf.lib(generated_message_util.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>libprotobuf.lib(common.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in greeter_client.obj 1>libprotobuf.lib(common.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>libprotobuf.lib(once.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in greeter_client.obj 1>libprotobuf.lib(once.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>libprotobuf.lib(status.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in greeter_client.obj 1>libprotobuf.lib(status.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>libprotobuf.lib(int128.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in greeter_client.obj 1>libprotobuf.lib(int128.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>libprotobuf.lib(atomicops_internals_x86_msvc.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in greeter_client.obj 1>libprotobuf.lib(atomicops_internals_x86_msvc.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>grpc++.lib(client_context.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MTd_StaticDebug' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>grpc++.lib(insecure_credentials.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MTd_StaticDebug' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>grpc++.lib(create_channel.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MTd_StaticDebug' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>grpc++.lib(credentials.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MTd_StaticDebug' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>grpc++.lib(create_channel_internal.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MTd_StaticDebug' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>grpc++.lib(channel_arguments.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MTd_StaticDebug' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>grpc++.lib(core_codegen.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MTd_StaticDebug' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>grpc++.lib(codegen_init.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MTd_StaticDebug' doesn't 
match value 'MDd_DynamicDebug' in greeter_client.obj 1>grpc++.lib(status.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MTd_StaticDebug' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>msvcprtd.lib(MSVCP140D.dll) : error LNK2005: "public: bool __thiscall std::ios_base::good(void)const " (?good@ios_base@std@@QBE_NXZ) already defined in grpc++.lib(channel_arguments.obj) 1>msvcprtd.lib(MSVCP140D.dll) : error LNK2005: "public: int __thiscall std::ios_base::flags(void)const " (?flags@ios_base@std@@QBEHXZ) already defined in grpc++.lib(channel_arguments.obj) 1>msvcprtd.lib(MSVCP140D.dll) : error LNK2005: "public: __int64 __thiscall std::ios_base::width(void)const " (?width@ios_base@std@@QBE_JXZ) already defined in grpc++.lib(channel_arguments.obj) 1>msvcprtd.lib(MSVCP140D.dll) : error LNK2005: "public: __int64 __thiscall std::ios_base::width(__int64)" (?width@ios_base@std@@QAE_J_J@Z) already defined in grpc++.lib(channel_arguments.obj) 1>msvcprtd.lib(MSVCP140D.dll) : error LNK2005: "public: int __thiscall std::basic_streambuf<char,struct std::char_traits<char> >::sputc(char)" (?sputc@?$basic_streambuf@DU?$char_traits@D@std@@@std@@QAEHD@Z) already defined in grpc++.lib(channel_arguments.obj) 1>msvcprtd.lib(MSVCP140D.dll) : error LNK2005: "public: __int64 __thiscall std::basic_streambuf<char,struct std::char_traits<char> >::sputn(char const *,__int64)" (?sputn@?$basic_streambuf@DU?$char_traits@D@std@@@std@@QAE_JPBD_J@Z) already defined in grpc++.lib(channel_arguments.obj) 1>msvcprtd.lib(MSVCP140D.dll) : error LNK2005: "public: void __thiscall std::basic_ios<char,struct std::char_traits<char> >::setstate(int,bool)" (?setstate@?$basic_ios@DU?$char_traits@D@std@@@std@@QAEXH_N@Z) already defined in grpc++.lib(channel_arguments.obj) 1>msvcprtd.lib(MSVCP140D.dll) : error LNK2005: "public: class std::basic_ostream<char,struct std::char_traits<char> > * __thiscall std::basic_ios<char,struct std::char_traits<char> >::tie(void)const " (?tie@?$basic_ios@DU?$char_traits@D@std@@@std@@QBEPAV?$basic_ostream@DU?$char_traits@D@std@@@2@XZ) already defined in grpc++.lib(channel_arguments.obj) 1>msvcprtd.lib(MSVCP140D.dll) : error LNK2005: "public: class std::basic_streambuf<char,struct std::char_traits<char> > * __thiscall std::basic_ios<char,struct std::char_traits<char> >::rdbuf(void)const " (?rdbuf@?$basic_ios@DU?$char_traits@D@std@@@std@@QBEPAV?$basic_streambuf@DU?$char_traits@D@std@@@2@XZ) already defined in grpc++.lib(channel_arguments.obj) 1>msvcprtd.lib(MSVCP140D.dll) : error LNK2005: "public: char __thiscall std::basic_ios<char,struct std::char_traits<char> >::fill(void)const " (?fill@?$basic_ios@DU?$char_traits@D@std@@@std@@QBEDXZ) already defined in grpc++.lib(channel_arguments.obj) 1>msvcprtd.lib(MSVCP140D.dll) : error LNK2005: "public: char __thiscall std::basic_ios<char,struct std::char_traits<char> >::widen(char)const " (?widen@?$basic_ios@DU?$char_traits@D@std@@@std@@QBEDD@Z) already defined in grpc++.lib(channel_arguments.obj) 1>msvcprtd.lib(MSVCP140D.dll) : error LNK2005: "public: void __thiscall std::basic_ostream<char,struct std::char_traits<char> >::_Osfx(void)" (?_Osfx@?$basic_ostream@DU?$char_traits@D@std@@@std@@QAEXXZ) already defined in libprotobuf.lib(status.obj) 1>msvcprtd.lib(MSVCP140D.dll) : error LNK2005: "public: class std::basic_ostream<char,struct std::char_traits<char> > & __thiscall std::basic_ostream<char,struct std::char_traits<char> >::flush(void)" (?flush@?$basic_ostream@DU?$char_traits@D@std@@@std@@QAEAAV12@XZ) already defined in 
libprotobuf.lib(status.obj) 1>libcpmt.lib(ios.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in greeter_client.obj 1>libcpmt.lib(ios.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>libcpmt.lib(ios.obj) : error LNK2005: "public: static void __cdecl std::ios_base::_Addstd(class std::ios_base *)" (?_Addstd@ios_base@std@@SAXPAV12@@Z) already defined in msvcprtd.lib(MSVCP140D.dll) 1>libcpmt.lib(ios.obj) : error LNK2005: "private: static void __cdecl std::ios_base::_Ios_base_dtor(class std::ios_base *)" (?_Ios_base_dtor@ios_base@std@@CAXPAV12@@Z) already defined in msvcprtd.lib(MSVCP140D.dll) 1>libcpmt.lib(locale0.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in greeter_client.obj 1>libcpmt.lib(locale0.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>libcpmt.lib(locale0.obj) : error LNK2005: "void __cdecl std::_Facet_Register(class std::_Facet_base *)" (?_Facet_Register@std@@YAXPAV_Facet_base@1@@Z) already defined in msvcprtd.lib(locale0_implib.obj) 1>libcpmt.lib(locale0.obj) : error LNK2005: "private: static class std::locale::_Locimp * __cdecl std::locale::_Getgloballocale(void)" (?_Getgloballocale@locale@std@@CAPAV_Locimp@12@XZ) already defined in msvcprtd.lib(MSVCP140D.dll) 1>libcpmt.lib(locale0.obj) : error LNK2005: "private: static class std::locale::_Locimp * __cdecl std::locale::_Init(bool)" (?_Init@locale@std@@CAPAV_Locimp@12@_N@Z) already defined in msvcprtd.lib(MSVCP140D.dll) 1>libcpmt.lib(locale0.obj) : error LNK2005: "public: static void __cdecl std::_Locinfo::_Locinfo_ctor(class std::_Locinfo *,char const *)" (?_Locinfo_ctor@_Locinfo@std@@SAXPAV12@PBD@Z) already defined in msvcprtd.lib(MSVCP140D.dll) 1>libcpmt.lib(locale0.obj) : error LNK2005: "public: static void __cdecl std::_Locinfo::_Locinfo_dtor(class std::_Locinfo *)" (?_Locinfo_dtor@_Locinfo@std@@SAXPAV12@@Z) already defined in msvcprtd.lib(MSVCP140D.dll) 1>libcpmt.lib(iosptrs.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in greeter_client.obj 1>libcpmt.lib(iosptrs.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>libcpmt.lib(locale.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in greeter_client.obj 1>libcpmt.lib(locale.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>libcpmt.lib(xlock.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in greeter_client.obj 1>libcpmt.lib(xlock.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>libcpmt.lib(xlock.obj) : error LNK2005: "public: __thiscall std::_Lockit::_Lockit(int)" (??0_Lockit@std@@QAE@H@Z) already defined in msvcprtd.lib(MSVCP140D.dll) 1>libcpmt.lib(xlock.obj) : error LNK2005: "public: __thiscall std::_Lockit::~_Lockit(void)" (??1_Lockit@std@@QAE@XZ) already defined in msvcprtd.lib(MSVCP140D.dll) 1>libcpmt.lib(xthrow.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match 
value '2' in greeter_client.obj 1>libcpmt.lib(xthrow.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>libcpmt.lib(xthrow.obj) : error LNK2005: "void __cdecl std::_Xbad_alloc(void)" (?_Xbad_alloc@std@@YAXXZ) already defined in msvcprtd.lib(MSVCP140D.dll) 1>libcpmt.lib(xthrow.obj) : error LNK2005: "void __cdecl std::_Xlength_error(char const *)" (?_Xlength_error@std@@YAXPBD@Z) already defined in msvcprtd.lib(MSVCP140D.dll) 1>libcpmt.lib(xthrow.obj) : error LNK2005: "void __cdecl std::_Xout_of_range(char const *)" (?_Xout_of_range@std@@YAXPBD@Z) already defined in msvcprtd.lib(MSVCP140D.dll) 1>libcpmt.lib(wlocale.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in greeter_client.obj 1>libcpmt.lib(wlocale.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>libcpmt.lib(xlocale.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in greeter_client.obj 1>libcpmt.lib(xlocale.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>libcpmt.lib(xdateord.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in greeter_client.obj 1>libcpmt.lib(xdateord.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>libcpmt.lib(xwctomb.obj) : error LNK2005: __Getcvt already defined in msvcprtd.lib(MSVCP140D.dll) 1>libcpmt.lib(winapisupp.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in greeter_client.obj 1>libcpmt.lib(winapisupp.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>libcpmt.lib(StlCompareStringA.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in greeter_client.obj 1>libcpmt.lib(StlCompareStringA.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>libcpmt.lib(winapinls.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in greeter_client.obj 1>libcpmt.lib(winapinls.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>libcpmt.lib(StlCompareStringW.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in greeter_client.obj 1>libcpmt.lib(StlCompareStringW.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>libcpmt.lib(StlLCMapStringW.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in greeter_client.obj 1>libcpmt.lib(StlLCMapStringW.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>libcpmt.lib(StlLCMapStringA.obj) : error LNK2038: mismatch detected for '_ITERATOR_DEBUG_LEVEL': value '0' doesn't match value '2' in 
greeter_client.obj 1>libcpmt.lib(StlLCMapStringA.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MT_StaticRelease' doesn't match value 'MDd_DynamicDebug' in greeter_client.obj 1>LINK : warning LNK4098: defaultlib 'LIBCMT' conflicts with use of other libs; use /NODEFAULTLIB:library 1>LINK : warning LNK4098: defaultlib 'LIBCMTD' conflicts with use of other libs; use /NODEFAULTLIB:library 1>libeay32.lib(c_zlib.obj) : warning LNK4217: locally defined symbol _deflate imported in function _BIO_f_zlib 1>libeay32.lib(c_zlib.obj) : warning LNK4217: locally defined symbol _deflateEnd imported in function _BIO_f_zlib 1>libeay32.lib(c_zlib.obj) : warning LNK4217: locally defined symbol _inflate imported in function _BIO_f_zlib 1>libeay32.lib(c_zlib.obj) : warning LNK4217: locally defined symbol _inflateEnd imported in function _BIO_f_zlib 1>libeay32.lib(c_zlib.obj) : warning LNK4217: locally defined symbol _deflateInit_ imported in function _BIO_f_zlib 1>libeay32.lib(c_zlib.obj) : warning LNK4217: locally defined symbol _inflateInit_ imported in function _BIO_f_zlib 1>libeay32.lib(c_zlib.obj) : warning LNK4217: locally defined symbol _zError imported in function _zlib_zfree 1>greeter_client.obj : error LNK2019: unresolved external symbol "public: __thiscall helloworld::HelloRequest::HelloRequest(void)" (??0HelloRequest@helloworld@@QAE@XZ) referenced in function "public: class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > __thiscall GreeterClient::SayHello(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &)" (?SayHello@GreeterClient@@QAE?AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@ABV23@@Z) 1>greeter_client.obj : error LNK2019: unresolved external symbol "public: virtual __thiscall helloworld::HelloRequest::~HelloRequest(void)" (??1HelloRequest@helloworld@@UAE@XZ) referenced in function "public: class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > __thiscall GreeterClient::SayHello(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &)" (?SayHello@GreeterClient@@QAE?AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@ABV23@@Z) 1>greeter_client.obj : error LNK2019: unresolved external symbol "public: __thiscall helloworld::HelloReply::HelloReply(void)" (??0HelloReply@helloworld@@QAE@XZ) referenced in function "public: class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > __thiscall GreeterClient::SayHello(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &)" (?SayHello@GreeterClient@@QAE?AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@ABV23@@Z) 1>greeter_client.obj : error LNK2019: unresolved external symbol "public: virtual __thiscall helloworld::HelloReply::~HelloReply(void)" (??1HelloReply@helloworld@@UAE@XZ) referenced in function "public: class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > __thiscall GreeterClient::SayHello(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &)" (?SayHello@GreeterClient@@QAE?AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@ABV23@@Z) 1>greeter_client.obj : error LNK2019: unresolved external symbol "public: virtual class grpc::Status __thiscall helloworld::Greeter::Stub::SayHello(class grpc::ClientContext *,class helloworld::HelloRequest const &,class helloworld::HelloReply *)" 
(?SayHello@Stub@Greeter@helloworld@@UAE?AVStatus@grpc@@PAVClientContext@5@ABVHelloRequest@3@PAVHelloReply@3@@Z) referenced in function "public: class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > __thiscall GreeterClient::SayHello(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &)" (?SayHello@GreeterClient@@QAE?AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@ABV23@@Z) 1>greeter_client.obj : error LNK2019: unresolved external symbol "public: static class std::unique_ptr<class helloworld::Greeter::Stub,struct std::default_delete<class helloworld::Greeter::Stub> > __cdecl helloworld::Greeter::NewStub(class std::shared_ptr<class grpc::ChannelInterface> const &,class grpc::StubOptions const &)" (?NewStub@Greeter@helloworld@@SA?AV?$unique_ptr@VStub@Greeter@helloworld@@U?$default_delete@VStub@Greeter@helloworld@@@std@@@std@@ABV?$shared_ptr@VChannelInterface@grpc@@@4@ABVStubOptions@grpc@@@Z) referenced in function "public: __thiscall GreeterClient::GreeterClient(class std::shared_ptr<class grpc::Channel>)" (??0GreeterClient@@QAE@V?$shared_ptr@VChannel@grpc@@@std@@@Z) 1>grpc++.lib(create_channel_internal.obj) : error LNK2019: unresolved external symbol "public: virtual __thiscall grpc::Channel::~Channel(void)" (??1Channel@grpc@@UAE@XZ) referenced in function "public: virtual void * __thiscall grpc::Channel::`scalar deleting destructor'(unsigned int)" (??_GChannel@grpc@@UAEPAXI@Z) 1>grpc++.lib(create_channel_internal.obj) : error LNK2019: unresolved external symbol "private: __thiscall grpc::Channel::Channel(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,struct grpc_channel *)" (??0Channel@grpc@@AAE@ABV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@PAUgrpc_channel@@@Z) referenced in function "class std::shared_ptr<class grpc::Channel> __cdecl grpc::CreateChannelInternal(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> > const &,struct grpc_channel *)" (?CreateChannelInternal@grpc@@YA?AV?$shared_ptr@VChannel@grpc@@@std@@ABV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@3@PAUgrpc_channel@@@Z) 1>C:\Tool\C++gRPC\grpc\vsprojects\\Debug\gflagsd.lib : warning LNK4272: library machine type 'x64' conflicts with target machine type 'X86' 1>C:\Tool\C++gRPC\grpc\vsprojects\Debug\greeter_client.exe : fatal error LNK1120: 8 unresolved externals ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== If anyone could offer a solution, I will be grateful.
After struggling with this for some time myself, I found that vcpkg does a very good job of building gRPC C++ for Windows. Note that the requirements are Windows 7 or later and VS2015 Update 3 or later. You can also configure the build the way you want by choosing a triplet, e.g. .\vcpkg.exe install grpc --triplet x86-windows-static
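If you are starting from scratch, the typical vcpkg flow looks roughly like the following (the static x86 triplet here is just one possibility, matching the example above):
git clone https://github.com/microsoft/vcpkg.git
cd vcpkg
.\bootstrap-vcpkg.bat
.\vcpkg install grpc --triplet x86-windows-static
.\vcpkg integrate install
After that, a CMake project can pick the packages up by passing -DCMAKE_TOOLCHAIN_FILE=<vcpkg root>/scripts/buildsystems/vcpkg.cmake when configuring.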
gRPC
39,982,065
27
grpc-java uses an executor in its ServerBuilder, which if not defined by the builder.executor() method, uses a static cached thread pool by default. What is the exact use of this executor? Does it just execute the handler methods or does it do “something else” as well? Also, how does grpc define the netty worker EventLoopGroup? Specifically, I want to know how the worker threads are assigned to this worker group. Is there a default for the number of threads, or is it a function of the number of cores of the machine? Also, in relation to the above question, how do these netty workers work with the executor? Do they handle just the I/O - reading and writing to the channel? Edit: Netty, by default creates (2 * number of cores) worker threads.
The Executor that you provide is what actually executes the callbacks of the rpc. This frees up the EventLoop to continue processing data on the connection. When a new message arrives from the network, it is read on the event loop, and then propagated up the stack to the executor. The executor takes the messages and passes them to your ServerCall.Listener which will actually do the processing of the data. By default, gRPC uses a cached thread pool so that it is very easy to get started. However it is strongly recommended you provide your own executor. The reason is that the default threadpool behaves badly under load, creating new threads when the rest are busy. In order to set up the event loop group, you call the workerEventLoopGroup method on NettyServerBuilder. gRPC is not strictly dependent on Netty (other server transports are possible) so the Netty specific builder must be used.
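As an illustration, here is a minimal sketch of providing both a custom application executor and custom Netty event loop groups; the pool sizes are arbitrary and MyServiceImpl is a placeholder for your own generated service implementation:
import io.grpc.Server;
import io.grpc.netty.NettyServerBuilder;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

ExecutorService callbackPool = Executors.newFixedThreadPool(16); // runs application callbacks
NioEventLoopGroup bossGroup = new NioEventLoopGroup(1);          // accepts new connections
NioEventLoopGroup workerGroup = new NioEventLoopGroup(4);        // does socket reads/writes

Server server = NettyServerBuilder.forPort(50051)
    .executor(callbackPool)                    // replaces the default cached thread pool
    .bossEventLoopGroup(bossGroup)
    .workerEventLoopGroup(workerGroup)
    .channelType(NioServerSocketChannel.class) // needed when supplying your own event loops
    .addService(new MyServiceImpl())           // hypothetical generated service impl
    .build()
    .start();                                  // start() throws IOException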
gRPC
42,408,634
27
I am using grpc for message passing and am testing a simple server and client. When my message size goes over the limit, I get this error. grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.INVALID_ARGUMENT, Received message larger than max (7309898 vs. 4194304))> How do I increase the message size on the server and client side?
Changing the message_length for both send and receive will do the trick. channel = grpc.insecure_channel( 'localhost:50051', options=[ ('grpc.max_send_message_length', MAX_MESSAGE_LENGTH), ('grpc.max_receive_message_length', MAX_MESSAGE_LENGTH), ], )
gRPC
42,629,047
27
When using gRPC from Java, can I cache stubs (clients) and call them in a multi-threaded environment or are the channels thread-safe and can be safely cached? If there is a network outage, should I recreate the channel or it is smart enough to reconnect? I couldn't find relevant info on http://www.grpc.io/docs/ Thanks
Answer to first question: Channels are thread safe; io.grpc.Channel is marked with @ThreadSafe annotation. Stubs are also thread-safe, which is why reconfiguration creates a new stub. Answer to second question: If there is a network outage, you don't need to recreate the channel. The channel will reconnect with exponential backoff, roughly as described by the connection backoff doc. Java does not 100% conform to that algorithm, because it doesn't increase connection timeouts in later retries. (Not to be confused with the exponential backoff, which is implemented.)
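For illustration, a common pattern is to create one ManagedChannel per target at startup and share it (and stubs derived from it) across threads; the Greeter classes below are the ones generated from the hello-world example proto, used here only as placeholders:
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.examples.helloworld.GreeterGrpc;   // generated from the hello-world proto
import io.grpc.examples.helloworld.HelloReply;
import io.grpc.examples.helloworld.HelloRequest;
import java.util.concurrent.TimeUnit;

// Created once, e.g. in a singleton, and reused by every caller
ManagedChannel channel = ManagedChannelBuilder
    .forAddress("example.com", 443)
    .useTransportSecurity()
    .build();

// Stubs are lightweight and immutable; cache one or create them per call
GreeterGrpc.GreeterBlockingStub stub = GreeterGrpc.newBlockingStub(channel);

// Safe to call from multiple threads; per-call options return a new stub view
HelloReply reply = stub
    .withDeadlineAfter(1, TimeUnit.SECONDS)
    .sayHello(HelloRequest.newBuilder().setName("world").build());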
gRPC
33,197,669
25
I know we are comparing two different technologies, but I would like to know the pros and cons of both. WCF has been around for almost a decade now. Didn't anything similar exist in the Java world until now?
At a very high level they would both appear to address the same tooling space. However, these are the differences I can pick up on:
gRPC does not use SOAP to mediate between client and service over HTTP. WCF supports SOAP.
gRPC is only concerned with RPC-style communication. WCF supports and promotes REST and POX style services in addition to RPC.
gRPC provides support for multiple programming languages. WCF supports C# (and the other .NET languages).
gRPC uses protobuf for on-wire serialization; WCF uses either XML/JSON or Windows binary.
gRPC is open source - EDIT: So is WCF now: https://devblogs.microsoft.com/dotnet/corewcf-v1-released/
In short: gRPC seems a much more focused services framework; it does one job really well and on multiple platforms. WCF is much more general-purpose, but limited to .NET for the time being (WCF is being ported to .NET Core, but at the time of writing only client-side functionality is on .NET Core).
gRPC
35,694,273
25
I am specifying a number of independent gRPC services that will all be hosted out of the same server process. Each service is defined in its own protobuf file. These are then run through the gRPC tools to give me the target language (c# in my case) in which I can then implement my server and client. Each of those separate APIs uses a number of common elements, things like error response enumerations, the Empty message type (which seems to be available in the gRPC WellKnownTypes; but I cannot see how I include that either so I defined my own). At the moment I end up with each proto building duplicate enums and classes into their own namespace. Though I know I can share the definitions in a common proto file and include that; I do not see how to end up with only a single code gen of these into a common namespace. Though this works it would be neater to keep it to one set; it may also have issues later in conversion and equivalence if doing things like aggregating errors across services. I assume I am missing something as my reading of things such as the WellKnownTypes namespace suggests that this should be possible but, as mentioned before, I don't see how I refer to that in the Proto either. SO seems pretty light on gRPC for the moment so my searches are not turning up much, and I am new to this so any pointers?
Protocol Buffers solve this problem with package identifiers. Each message is placed in a Protocol Buffers package, which is independent of the C# namespace. For example:
// common.proto
syntax = "proto3";
package my.api.common;
option csharp_namespace = "My.Api.Common";

message Shared {
  // ...
}
And then in the service-specific file:
// service1.proto
syntax = "proto3";
package my.api.service1;
import "common.proto";
option csharp_namespace = "My.Api.Service1";

message Special {
  my.api.common.Shared shared = 1;
}
You need to make sure there is only one copy of the common proto, otherwise the copies could get out of sync. Put the common messages in common.proto and reference them from each of your other, service-specific proto files.
gRPC
40,631,796
25
I was going through this code of gRPC server. Can anyone tell me the need for reflection used here Code : func main() { lis, err := net.Listen("tcp", port) if err != nil { log.Fatalf("failed to listen: %v", err) } s := grpc.NewServer() pb.RegisterGreeterServer(s, &server{}) // Register reflection service on gRPC server. reflection.Register(s) if err := s.Serve(lis); err != nil { log.Fatalf("failed to serve: %v", err) } }
Server reflection is not necessary to run the helloworld example. The helloworld example is also used as a server reflection example, that's why you see the reflection registering code there. More about server reflection: Server reflection is a service defined to provides information about publicly-accessible gRPC services on a gRPC server. Tutorial available here: https://github.com/grpc/grpc-go/blob/master/Documentation/server-reflection-tutorial.md
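To see what reflection buys you, a generic client such as grpcurl (a third-party tool) can then discover and call the service without having the .proto files, for example (assuming the helloworld example listening on :50051):
grpcurl -plaintext localhost:50051 list
grpcurl -plaintext localhost:50051 describe helloworld.Greeter
grpcurl -plaintext -d '{"name": "world"}' localhost:50051 helloworld.Greeter/SayHello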
gRPC
41,424,630
25
I have to add a custom header in an android grpc client. I am unable to send it successfully. public class HeaderClientInterceptor implements ClientInterceptor { @Override public < ReqT, RespT > ClientCall < ReqT, RespT > interceptCall(MethodDescriptor < ReqT, RespT > method, CallOptions callOptions, Channel next) { return new SimpleForwardingClientCall < ReqT, RespT > (next.newCall(method, callOptions)) { @Override public void start(Listener < RespT > responseListener, Metadata headers) { /* put custom header */ Timber.d("header sending to server:"); Metadata fixedHeaders = new Metadata(); Metadata.Key < String > key = Metadata.Key.of("Grps-Matches-Key", Metadata.ASCII_STRING_MARSHALLER); fixedHeaders.put(key, "primary.secondary"); headers.merge(fixedHeaders); super.start(new SimpleForwardingClientCallListener < RespT > (responseListener) { @Override public void onHeaders(Metadata headers) { /** * if you don't need receive header from server, * you can use {@link io.grpc.stub.MetadataUtils attachHeaders} * directly to send header */ Timber.e("header received from server:" + headers.toString()); super.onHeaders(headers); } }, headers); } }; } } EDIT: Added the custom header using this way successfully Now in my grpc call, I am calling like this ClientInterceptor interceptor = new HeaderClientInterceptor(); Channel channel = ManagedChannelBuilder.forAddress(BuildConfig.HOST, BuildConfig.PORT).build(); Channel channelWithHeader = ClientInterceptors.intercept(channel, interceptor); ServiceGrpc.ServiceBlockingStub service = ServiceGrpc.newBlockingStub(channelWithHeader); I have built the above request and calling it in the pseudo call as below. Iterator<Model> dataItems = service.getItems(SOMERequestBuilderObj); I am trying to add a custom header like this "Grps-Matches-Key : primary.secondary" In Rest API call I would have added this as a header like builder.header("Grps-Matches-Key", "primary.secondary"); Hope this helps.
The edited version in the question works too. In gRPC there are many ways to add headers (called metadata). We can add metadata with an interceptor, as in my question above, or we can attach metadata to the client stub before making the request:
// create a custom header
Metadata header = new Metadata();
Metadata.Key<String> key = Metadata.Key.of("Grps-Matches-Key", Metadata.ASCII_STRING_MARSHALLER);
header.put(key, "match.items");

// create client stub
ServiceGrpc.ServiceBlockingStub stub = ServiceGrpc.newBlockingStub(channel);
Attach the header before making any request, as shown here using MetadataUtils:
stub.withInterceptors(MetadataUtils.newAttachHeadersInterceptor(header))
gRPC
45,125,601
25
I have a use case where many clients need to keep sending a lot of metrics to the server (almost perpetually). The server needs to store these events, and process them later. I don't expect any kind of response from the server for these events. I'm thinking of using grpc for this. Initially, I thought client-side streaming would do (like how envoy does), but the issue is that client side streaming cannot ensure reliable delivery at application level (i.e. if the stream closed in between, how many messages that were sent were actually processed by the server) and I can't afford this. My thought process is, I should either go with bidi streaming, with acks in the server stream, or multiple unary rpc calls (perhaps with some batching of the events in a repeated field for performance). Which of these would be better?
the issue is that client side streaming cannot ensure reliable delivery at application level (i.e. if the stream closed in between, how many messages that were sent were actually processed by the server) and I can't afford this This implies you need a response. Even if the response is just an acknowledgement, it is still a response from gRPC's perspective. The general approach should be "use unary," unless large enough problems can be solved by streaming to overcome their complexity costs. I discussed this at 2018 CloudNativeCon NA (there's a link to slides and YouTube for the video). For example, if you have multiple backends then each unary RPC may be sent to a different backend. That may cause a high overhead for those various backends to synchronize themselves. A streaming RPC chooses a backend at the beginning and continues using the same backend. So streaming might reduce the frequency of backend synchronization and allow higher performance in the service implementation. But streaming adds complexity when errors occur, and in this case it will cause the RPCs to become long-lived which are more complicated to load balance. So you need to weigh whether the added complexity from streaming/long-lived RPCs provides a large enough benefit to your application. We don't generally recommend using streaming RPCs for higher gRPC performance. It is true that sending a message on a stream is faster than a new unary RPC, but the improvement is fixed and has higher complexity. Instead, we recommend using streaming RPCs when it would provide higher application (your code) performance or lower application complexity.
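To make the recommendation concrete, a unary design with client-side batching and an explicit acknowledgement might look like the sketch below (all names are invented for illustration):
syntax = "proto3";

service MetricsIngest {
  // One unary call per batch; the reply tells the client exactly
  // which events were accepted, so nothing is lost silently.
  rpc PushEvents (EventBatch) returns (PushAck) {}
}

message Event {
  string name = 1;
  double value = 2;
  int64 timestamp_ms = 3;
}

message EventBatch {
  repeated Event events = 1;
}

message PushAck {
  uint32 accepted_count = 1;
}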
gRPC
56,766,921
25
We need to convert Google Proto buffer time stamp to a normal date. In that circumstance is there any way to convert Google Proto buffer timestamp to a Java LocalDate directly?
tl;dr
As a moment in UTC, convert to java.time.Instant. Then apply a time zone to get a ZonedDateTime. Extract the date-only portion as a LocalDate. One-liner:
Instant
    .ofEpochSecond( ts.getSeconds() , ts.getNanos() )
    .atZone( ZoneId.of( "America/Montreal" ) )
    .toLocalDate()
Convert
The first step is to convert the Timestamp object’s count of seconds and fractional second (nanoseconds) to the java.time classes. Specifically, java.time.Instant. Just like Timestamp, an Instant represents a moment in UTC with a resolution of nanoseconds.
Instant instant = Instant.ofEpochSecond( ts.getSeconds() , ts.getNanos() ) ;
Determining a date requires a time zone. For any given moment, the date varies around the globe by zone. Apply a ZoneId to our Instant to get a ZonedDateTime. Same moment, same point on the timeline, different wall-clock time.
ZoneId z = ZoneId.of( "Pacific/Auckland" ) ;
ZonedDateTime zdt = instant.atZone( z ) ;
Extract the date-only portion as a LocalDate. A LocalDate has no time-of-day and no time zone.
LocalDate ld = zdt.toLocalDate() ;
Caution: Do not use the LocalDateTime class for this purpose, as unfortunately shown in another Answer. That class purposely lacks any concept of time zone or offset-from-UTC. As such it cannot represent a moment; it is not a point on the timeline. See the class documentation.
Convert to legacy classes
Best to entirely avoid the terribly troublesome legacy date-time classes including Date, Calendar, and SimpleDateFormat. But if you must interoperate with old code not yet updated to java.time, you can convert back-and-forth. Call the new conversion methods added to the old classes.
GregorianCalendar gc = GregorianCalendar.from( zdt ) ;
To represent a date-only value as a GregorianCalendar we must specify a time-of-day and a time zone. You’ll likely want to use the first moment of the day as the time-of-day component. Never assume the first moment is 00:00:00. Anomalies such as Daylight Saving Time mean the first moment might be another time such as 01:00:00. Let java.time determine the first moment.
ZonedDateTime firstMomentOfDay = ld.atStartOfDay( z ) ;
GregorianCalendar gc = GregorianCalendar.from( firstMomentOfDay ) ;
About java.time
The java.time framework is built into Java 8 and later. These classes supplant the troublesome old legacy date-time classes such as java.util.Date, Calendar, and SimpleDateFormat. The Joda-Time project, now in maintenance mode, advises migration to the java.time classes. To learn more, see the Oracle Tutorial. And search Stack Overflow for many examples and explanations. The specification is JSR 310.
You may exchange java.time objects directly with your database. Use a JDBC driver compliant with JDBC 4.2 or later. No need for strings, no need for java.sql.* classes.
Where to obtain the java.time classes?
Java SE 8, Java SE 9, Java SE 10, Java SE 11, and later - part of the standard Java API with a bundled implementation. Java 9 adds some minor features and fixes.
Java SE 6 and Java SE 7 - most of the java.time functionality is back-ported to Java 6 & 7 in ThreeTen-Backport.
Android - later versions of Android bundle implementations of the java.time classes. For earlier Android (<26), the ThreeTenABP project adapts ThreeTen-Backport (mentioned above). See How to use ThreeTenABP….
The ThreeTen-Extra project extends java.time with additional classes. This project is a proving ground for possible future additions to java.time. You may find some useful classes here such as Interval, YearWeek, YearQuarter, and more.
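If you also need the opposite direction, building a protobuf Timestamp from a java.time value, a minimal sketch using the standard com.google.protobuf.Timestamp builder looks like this; the example date and zone are arbitrary:
import com.google.protobuf.Timestamp;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;

// Interpret the LocalDate as the first moment of that day in a chosen zone,
// then capture that moment as a count of seconds and nanos since the epoch.
Instant instant = LocalDate.of(2024, 1, 15)
        .atStartOfDay(ZoneId.of("Pacific/Auckland"))
        .toInstant();

Timestamp ts = Timestamp.newBuilder()
        .setSeconds(instant.getEpochSecond())
        .setNanos(instant.getNano())
        .build();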
gRPC
52,645,487
24
I want to use gRPC with .NET in an asp.net core web application. How do I generate the necessary .proto file from an existing C# class and model objects? I don't want to re-write a .proto file that mirrors the existing code, I want the .proto file to be auto-generated from the class and model objects. I call this method to register my service class. builder.MapGrpcService<MyGrpcService>(); public class MyGrpcService { public Task<string> ServiceMethod(ModelObject model, ServerCallContext context) { return Task.FromResult("It Worked"); } } ModelObject has [DataContract] and [DataMember] with order attributes. Is this possible? Every example I see online starts with a .proto file. I've already defined my desired service methods in the MyGrpcService class. But maybe this is just backwards to what is the standard way of doing things... Something like the old .NET remoting would be ideal where you can just ask for an interface from a remote end point and it magically uses gRPC to communicate back and forth, but maybe that is too simplistic a view.
You can use Marc Gravell’s protobuf-net.Grpc for this. A code-first experience when building gRPC services is exactly the use case that led him to start working on it. It builds on top of protobuf-net, which already adds serialization capabilities between C# types and protobuf. Check out the documentation to see how to get started with the library, or watch Marc present this topic in one of the following recordings of his talk “Talking Between Services with gRPC and Other Tricks”:
Marc Gravell at .NET Oxford in September 2019
Marc Gravell at .NET Core Summer Event in June 2019
I think he actually updated the September one for the release bits of .NET Core 3.0, so that would probably be the more up-to-date version.
gRPC
58,768,379
24
When you build a Mac or iOS app in Xcode, you may see the error below:
Signing for "gRPC-C++-gRPCCertificates-Cpp" requires a development team. Select a development team in the Signing & Capabilities editor.
My Xcode version: 11.2.1 Mac OS: 10.15.1
It is easy to fix; follow these steps:
In Xcode, choose Pods on your left.
Go to Signing & Capabilities and choose gRPC-C++-gRPCCertificates-Cpp.
Choose a Team.
Restart Xcode, or clean the build with the shortcut Command + Shift + K.
gRPC
59,062,663
24
I read the gRPC Core concepts, architecture and lifecycle, but it doesn't go into the depth I like to see. There is the RPC call, gRPC channel, gRPC connection (not described in the article) and HTTP/2 connection (not described in the article). I'm interested in knowing how these come together. For example, what happens to the channel when a RPC throws an exception? What happens to the gRPC connection when the channel is closed? When is the channel closed? When is the gRPC connection closed? Heart beats? What if the deadline is exceeded? Can anyone answer these questions, or point me to resources that can?
The connection is not a gRPC concept. It is not part of the normal API and is an implementation detail. This should be seen as fairly normal, like HTTP libraries providing details about HTTP exchanges but not exposing connections. It is best to view RPCs and connections as two mostly-separate systems. The only real guarantee is that "connections are managed by channels," for varying definitions of "managed." You must shut down channels when no longer used if you want connections and other resources to be freed. Other details are either an implementation detail or an advanced API detail. There is no "gRPC connection." A "gRPC connection" would just be a standard "HTTP/2 connection." Except that is even an implementation detail of the transport in many gRPC implementations. That allows having alternative "connection" types like "inprocess" or QUIC (via Cronet, where there is not a classic "connection" at all). It is the channel's job to hold all the connections and reconnect as necessary. It delegates part of that responsibility to load balancers and the load balancing APIs do have a concept of connections (subchannels). By not exposing connections to the application, load balancers have a lot of freedom to operate. I'll note that gRPC C-core based implementations share connections across channels. What happens to the channel when a RPC throws an exception? The channel and connection is not impacted by a failed RPC. Note that connection-level failures typically cause RPCs to fail. But things like retries could allow the RPC to be re-sent on a new connection. What happens to the gRPC connection when the channel is closed? The connections are closed, eventually. Channel shutdown isn't instantaneous because existing RPCs can continue, and connection shutdown isn't instantaneous as well. But once all RPCs complete the connections are closed. Although C-core won't shut down a connection until no channels are using it. When is the channel closed? Only when the user closes it. When is the gRPC connection closed? Lots of times. The client may close it when no longer needed. For example, let's say the server IP address changes and the client need to connect to 1.1.1.2 instead of 1.1.1.1. A new connection will be created and new RPCs will go to the new IP address. The client may also close connections it thinks are dead (e.g., via keepalive timeouts). Servers have a lot of say of when to close connections. They may close them simply because they are old, or because they have been idle, or because the server is overloaded. But those are simply use-cases; the server can shut down a connection at-will. What if the deadline is exceeded? Deadline only applies to RPCs and doesn't impact the channel or a connection.
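As a practical sketch of the lifecycle points above (the timeout values here are arbitrary, not recommendations):
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.concurrent.TimeUnit;

ManagedChannel channel = ManagedChannelBuilder
    .forAddress("example.com", 443)
    .idleTimeout(5, TimeUnit.MINUTES)       // let connections the channel isn't using be dropped
    .keepAliveTime(1, TimeUnit.MINUTES)     // probe the connection so dead ones are noticed
    .keepAliveTimeout(20, TimeUnit.SECONDS)
    .build();

// ... issue RPCs for as long as the application needs ...

// When the channel is no longer used, shut it down so connections are freed
channel.shutdown();                          // lets in-flight RPCs finish
try {
    if (!channel.awaitTermination(5, TimeUnit.SECONDS)) {
        channel.shutdownNow();               // cancels whatever is still pending
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}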
gRPC
63,749,113
24
Upon discovering gRPC, I stumbled across this blog post Why isn’t everyone already using gRPC in their SPAs? Traditionally it’s not been possible to use gRPC from browser-based applications, because gRPC requires HTTP/2, and browsers don’t expose any APIs that let JS/WASM code control HTTP/2 requests directly. But there is a solution! gRPC-Web is an extension to gRPC which makes it compatible with browser-based code (technically, it’s a way of doing gRPC over HTTP/1.1 requests). gRPC-Web hasn’t become prevalent yet because not many server or client frameworks have offered support for it… until now. ASP.NET Core has offered great gRPC support since the 3.0 release. And now, building on this, we’re about to ship preview support for gRPC-Web on both server and client. If you want to dig into the details, here’s the excellent pull request from James Newton-King where it’s all implemented. There is some good information here, but the post is around a year old at this point. There are also some major pushes from Microsoft with .NET and Blazor technology recently. It looks like grpc-web is pretty well maintained and always adding a lot of language support, so that's something to keep an eye on... but as I understand it grpc-web is still built to operate over HTTP1.1? For me, another question that still remains why HTTP2 is not supported through browser APIs, to which I cannot find any documentation on. I would love to start using gRPC, but am also concerned about the cons that might come with it. Thank you for any explanations to my lack of understanding. Note there is a slightly related question on SO about this here, to which the answers were not totally comprehensive and older.
I have used gRPC in my projects and understand your questions about it. The first two questions can be answered with a quote from grpc.io followed by some elaboration.
- For me, another question that still remains is why HTTP/2 is not supported through browser APIs, for which I cannot find any documentation.
- It looks like grpc-web is pretty well maintained and always adding a lot of language support, so that's something to keep an eye on... but as I understand it grpc-web is still built to operate over HTTP1.1?
It is currently impossible to implement the HTTP/2 gRPC spec in the browser, as there is simply no browser API with enough fine-grained control over the requests. For example: there is no way to force the use of HTTP/2, and even if there was, raw HTTP/2 frames are inaccessible in browsers. The gRPC-Web spec starts from the point of view of the HTTP/2 spec, and then defines the differences. (quote reference)
- I would love to start using gRPC, but am also concerned about the cons that might come with it.
I published a story about gRPC; you should read it, as it can be helpful for understanding gRPC. I also want to talk about this topic. Why do you want to use gRPC? Is this about the speed of HTTP/2 and gRPC? Is HTTP/1.1 old technology? Today, REST protocols work over HTTP/1.1, and if they start to use HTTP/2.0 there won't be any changes to those interfaces. Also, REST is faster than gRPC if you don't work with streaming; gRPC has the advantage in speed when streaming is involved. Below, I linked the supported types of RPC from gRPC-Web.
gRPC
65,823,598
24
I don't like tools that do many things at once. So gRPC seems like overhead to me; it's like Kubernetes. gRPC is a tool that actually combines two things: extended Protobuf (service support) and HTTP/2. I read a lot of articles saying that using gRPC is awesome for performance, and there are two reasons given:
protobuf is used, and it's smaller than JSON or XML.
gRPC uses HTTP/2 as the transport protocol.
Here is the main part: protobuf and HTTP/2 are independent projects, tools, whatever. With that understanding I can say that gRPC is nothing but a combination of several different tools, like Kubernetes combines Docker and orchestration tools. So my question is: what are the actual advantages of using gRPC vs HTTP/2 with any payload (CSV, XML, JSON, etc.)? Let's skip the part about serialization because, as I mentioned, protobuf is a library independent of gRPC.
As you pointed out, gRPC and Protobuf are often conflated. While, in the vast majority of cases, gRPC will be using protobuf as an IDL and HTTP/2 as the transport, this is not always the case. So then, what value does gRPC provide on its own? For starters, it provides battle-tested implementations of each of those transports, along with first class support for the protobuf IDL. Integrating these things is not trivial. gRPC packages all of them into one nice pluggable box so you don't have to do the legwork. It also provides you with functionality that HTTP/2 on its own does not. Pluggable authorization/authentication, distributed tracing instrumentation, debugging utilities, look-aside load balancing (including upcoming support for the xDS protocol), and more are provided.
gRPC
58,767,467
23
I'm using gRPC with Python as client/server inside kubernetes pods... I would like to be able to launch multiple pods of the same type (gRPC servers) and let the client connect to them (randomly). I dispatched 10 pods of the server and setup a 'service' to target them. Then, in the client, I connected to the DNS name of the service - meaning kubernetes should do the load-balancing and direct me to a random server pod. In reality, the client calls the gRPC functions (which works well) but when I look at the logs I see that all calls going to the same server pod. I presume the client is doing some kind of DNS caching which leads to all calls being sent to the same server. Is this the case? Is there anyway to disable it and set the same stub client to make a "new" call and fetch a new ip by DNS with each call? I am aware of the overhead I might cause if it will query the DNS server each time but distributing the load is much more important for me at the moment.
Let me take the opportunity to answer by describing how things are supposed to work.
The way client-side LB works in the gRPC C core (the foundation for all but the Java and Go flavors of gRPC) is as follows (the authoritative doc can be found here):
Client-side LB is kept simple and "dumb" on purpose. The way we've chosen to implement complex LB policies is through an external LB server (as described in the aforementioned doc). You aren't concerned with this scenario. Instead, you are simply creating a channel, which will use the (default) pick-first LB policy.
The input to an LB policy is a list of resolved addresses. When using DNS, if foo.com resolves to [10.0.0.1, 10.0.0.2, 10.0.0.3, 10.0.0.4], the policy will try to establish a connection to all of them. The first one to successfully connect will become the chosen one until it disconnects. Thus the name "pick-first". A longer name could have been "pick first and stick with it for as long as possible", but that made for a very long file name :). If/when the picked one gets disconnected, the pick-first policy will move over to returning the next successfully connected address (internally referred to as a "connected subchannel"), if any. Once again, it'll continue to choose this connected subchannel for as long as it stays connected. If all of them fail, the call would fail.
The problem here is that DNS resolution, being intrinsically pull based, is only triggered 1) at channel creation and 2) upon disconnection of the chosen connected subchannel.
As of right now, a hacky solution would be to create a new channel for every request (very inefficient, but it'd do the trick given your setup).
Changes coming in Q1 2017 (see https://github.com/grpc/grpc/issues/7818) will allow clients to choose a different LB policy, namely Round Robin. In addition, we may look into introducing a "randomize" bit to that client config, which would shuffle the addresses prior to doing Round-Robin over them, effectively achieving what you intend.
gRPC
39,643,841
22
I'm using grpc golang to communicate between client and server application. Below is the code for protoc buffer. syntax = "proto3"; package Trail; service TrailFunc { rpc HelloWorld (Request) returns (Reply) {} } // The request message containing the user's name. message Request { map<string,string> inputVar = 1; } // The response message containing the greetings message Reply { string outputVar = 1; } I need to create a field inputVar of type map[string]interface{} inside message data structure instead of map[string]string. How can I achieve it? Thanks in advance.
proto3 has type Any import "google/protobuf/any.proto"; message ErrorStatus { string message = 1; repeated google.protobuf.Any details = 2; } but if you look at its implementation, it is simply as message Any { string type_url = 1; bytes value = 2; } You have to define such a message yourself by possibly using reflection and an intermediate type. See example application https://github.com/golang/protobuf/issues/60
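If what you really need is JSON-like dynamic values (rather than arbitrary message types), the well-known Struct/Value types can be a simpler alternative to Any; a sketch of the field from the question rewritten that way:
syntax = "proto3";

import "google/protobuf/struct.proto";

message Request {
  // google.protobuf.Value can hold null, numbers, strings, bools,
  // nested structs and lists, similar to map[string]interface{} for JSON data.
  map<string, google.protobuf.Value> inputVar = 1;
}
Newer versions of the Go protobuf runtime ship a structpb package that can convert a map[string]interface{} into these values.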
gRPC
40,259,551
22
Given the following gRPC server side code: import ( "google.golang.org/grpc/codes" "google.golang.org/grpc/status" .... ) .... func (s *Router) Assign(ctx context.Context, req *api.Request(*api.Response, error) { return nil, status.Errorf(codes.PermissionDenied, } .... What is the recommended technique for asserting client side that the error is of code = codes.PermissionDenied ?
Let's say your server returns codes.PermissionDenied like this:
... return nil, status.Error(codes.PermissionDenied, "PERMISSION_DENIED_TEXT")
If your client is in Go as well, it can use the status package function FromError to parse the error. I use a switch on the returned error code, like so:
// client
assignvar, err := s.MyFunctionCall(ctx, ...)
if err != nil {
    if e, ok := status.FromError(err); ok {
        switch e.Code() {
        case codes.PermissionDenied:
            fmt.Println(e.Message()) // this will print PERMISSION_DENIED_TEXT
        case codes.Internal:
            fmt.Println("Has Internal Error")
        case codes.Aborted:
            fmt.Println("gRPC Aborted the call")
        default:
            fmt.Println(e.Code(), e.Message())
        }
    } else {
        fmt.Printf("not able to parse error returned %v", err)
    }
}
gRPC
52,969,205
22
I'm using grpc with protobuf lite in android implementation. but protobuf lite doesn't have google time stamp, and my protos has import "google/protobuf/timestamp.proto". so i added implementation 'com.google.protobuf:protobuf-java:3.7.1' to gradle that contains google time stamp. but after that code compilaition has errors. such as :Duplicate class com.google.protobuf.AbstractMessageLite found in modules protobuf-java-3.7.1.jar (com.google.protobuf:protobuf-java:3.7.1) and protobuf-lite-3.0.1.jar (com.google.protobuf:protobuf-lite:3.0.1). any idea to fix this would be appreciated. apply plugin: 'com.android.application' apply plugin: 'com.google.protobuf' android { compileSdkVersion 28 buildToolsVersion "29.0.0" defaultConfig { minSdkVersion 21 targetSdkVersion 28 versionCode 1 versionName "1.0" testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner" } buildTypes { debug { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro' } release { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro' } } sourceSets { main { proto { srcDir 'src/main' } java { srcDir 'src/main' } } } } protobuf { protoc { artifact = 'com.google.protobuf:protoc:3.7.1' } plugins { javalite { artifact = "com.google.protobuf:protoc-gen-javalite:3.0.0" } grpc { artifact = 'io.grpc:protoc-gen-grpc-java:1.20.0' // CURRENT_GRPC_VERSION } } generateProtoTasks { all().each { task -> task.plugins { javalite {} grpc { // Options added to --grpc_out option 'lite' } } } } } dependencies { implementation fileTree(dir: 'libs', include: ['*.jar']) implementation 'androidx.appcompat:appcompat:1.0.2' implementation 'androidx.constraintlayout:constraintlayout:1.1.3' implementation 'androidx.legacy:legacy-support-v4:1.0.0' testImplementation 'junit:junit:4.12' androidTestImplementation 'androidx.test:runner:1.2.0' androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0' implementation 'com.google.android.material:material:1.0.0' // You need to build grpc-java to obtain these libraries below. 
implementation 'io.grpc:grpc-okhttp:1.20.0' implementation 'io.grpc:grpc-protobuf-lite:1.22.1' implementation 'io.grpc:grpc-stub:1.20.0' implementation 'javax.annotation:javax.annotation-api:1.3.2' implementation 'com.google.protobuf:protobuf-java:3.7.1' } given error is: Duplicate class com.google.protobuf.AbstractMessageLite found in modules protobuf-java-3.7.1.jar (com.google.protobuf:protobuf-java:3.7.1) and protobuf-lite-3.0.1.jar (com.google.protobuf:protobuf-lite:3.0.1) Duplicate class com.google.protobuf.AbstractMessageLite$Builder found in modules protobuf-java-3.7.1.jar (com.google.protobuf:protobuf-java:3.7.1) and protobuf-lite-3.0.1.jar (com.google.protobuf:protobuf-lite:3.0.1) Duplicate class com.google.protobuf.AbstractMessageLite$Builder$LimitedInputStream found in modules protobuf-java-3.7.1.jar (com.google.protobuf:protobuf-java:3.7.1) and protobuf-lite-3.0.1.jar (com.google.protobuf:protobuf-lite:3.0.1) Duplicate class com.google.protobuf.AbstractParser found in modules protobuf-java-3.7.1.jar (com.google.protobuf:protobuf-java:3.7.1) and protobuf-lite-3.0.1.jar (com.google.protobuf:protobuf-lite:3.0.1) Duplicate class com.google.protobuf.AbstractProtobufList found in modules protobuf-java-3.7.1.jar (com.google.protobuf:protobuf-java:3.7.1) and protobuf-lite-3.0.1.jar (com.google.protobuf:protobuf-lite:3.0.1) Duplicate class com.google.protobuf.BooleanArrayList found in modules protobuf-java-3.7.1.jar (com.google.protobuf:protobuf-java:3.7.1) and protobuf-lite-3.0.1.jar (com.google.protobuf:protobuf-lite:3.0.1) Duplicate class com.google.protobuf.ByteBufferWriter found in modules protobuf-java-3.7.1.jar (com.google.protobuf:protobuf-java:3.7.1) and protobuf-lite-3.0.1.jar (com.google.protobuf:protobuf-lite:3.0.1) Duplicate class com.google.protobuf.ByteOutput found in modules protobuf-java-3.7.1.jar (com.google.protobuf:protobuf-java:3.7.1) and protobuf-lite-3.0.1.jar (com.google.protobuf:protobuf-lite:3.0.1) Duplicate class com.google.protobuf.ByteString found in modules protobuf-java-3.7.1.jar (com.google.protobuf:protobuf-java:3.7.1) and protobuf-lite-3.0.1.jar (com.google.protobuf:protobuf-lite:3.0.1) Duplicate class com.google.protobuf.ByteString$1 found in modules protobuf-java-3.7.1.jar (com.google.protobuf:protobuf-java:3.7.1) and protobuf-lite-3.0.1.jar (com.google.protobuf:protobuf-lite:3.0.1)
The missing classes are a known issue. Full proto and lite proto can't be mixed; they use different generated code. Do not depend on protobuf-java as an implementation dependency, but as a protobuf dependency, which will cause the gradle-protobuf-plugin to generate code for the .protos:
dependencies {
    ...
    protobuf 'com.google.protobuf:protobuf-java:3.7.1'
}
Note that this solution only really works for an application. If you are a library, it is dangerous because users of your library may then see multiple copies of the generated code for the well-known protos.
gRPC
57,019,439
22
From the introduction on gRPC: In gRPC a client application can directly call methods on a server application on a different machine as if it was a local object, making it easier for you to create distributed applications and services. As in many RPC systems, gRPC is based around the idea of defining a service, specifying the methods that can be called remotely with their parameters and return types. On the server side, the server implements this interface and runs a gRPC server to handle client calls. On the client side, the client has a stub that provides exactly the same methods as the server. The above paragraph talks about a client and a server, with the former being the one who is invoking methods to the other. What am I wondering is: can the server-end of the connection invoke methods that have been registered on the client?
No, a server cannot invoke calls on the client. gRPC works with HTTP, and HTTP has not had such semantics in the past. There has been discussion as to various ways to achieve such a feature, but I'm unaware of any work having started or general agreement on a design. gRPC does support bidirectional streaming, which may get you some of what you need. With bidirectional streaming the client can respond to messages from server, but the client still calls the server and only one type of message can be sent for that call.
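For reference, the bidirectional-streaming shape mentioned above is declared in the .proto like the sketch below (names invented); the client still initiates the call, but afterwards the server can push messages on its stream whenever it wants:
syntax = "proto3";

service Relay {
  rpc Exchange (stream ClientMessage) returns (stream ServerMessage) {}
}

message ClientMessage { string payload = 1; }
message ServerMessage { string payload = 1; }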
gRPC
30,008,476
21
I have a gRPC server that hosts two asynchronous services ("Master" and "Worker"), and I would like to implement graceful shutdown for the server. Each service has its own grpc::CompletionQueue. There appear to be two Shutdown() methods that might be relevant: grpc::CompletionQueue::Shutdown() and grpc::Server::Shutdown(), but it's not clear from the documentation which ones should be used. What is a good pattern for shutting down an asynchronous service?
TL;DR: You must call both grpc::Server::Shutdown() and grpc::CompletionQueue::Shutdown() (for each completion queue used in the service) to shut down cleanly. If you call cq_->Shutdown(), the only observable effect is that subsequent calls to Service::AsyncService::RequestFoo() (the generated method for the corresponding Foo RPC) fail with an assertion. From reading the documentation of the corresponding C API method (grpc_completion_queue_shutdown()), it appears that it is illegal to add new work to the queue—i.e. by calling RequestFoo()—so I added an is_shutdown_ member to my service wrapper classes (protected by a mutex) so that no enqueue attempts are made after cq_->Shutdown() is called. However, after doing this, the completion queue blocks indefinitely in cq_->Next(). None of the enqueued tags complete (with an error or otherwise). If instead you call server_->Shutdown(), all of the enqueued tags complete immediately (with ok == false). However, the completion queue continues to block indefinitely in cq_->Next(). Calling both cq_->Shutdown() (for each defined completion queue) and server_->Shutdown() results in a clean shutdown. One caveat: if you use grpc::ServerContext::AsyncNotifyWhenDone() to register a tag for call cancellation, these will not be returned by cq_->Next() if the server shuts down before the initial request is received for that call. You will need to be cautious with the memory management of the corresponding tag structure, if you want to avoid memory leaks.
gRPC
35,708,348
21
I'd like to know how to add metadata to a nodejs grpc function call. I can use channel credentials when making the client with var client = new proto.Document('some.address:8000', grpc.credentials.createInsecure() ) Which are send when using client.Send(doc, callback), but the go grpc server looks in the call metadata for identification information which I have to set. I tried using grpc.credentials.combineChannelCredentials with the insecure connection and a grpc.Metadata instance but I can't find the right way to do it. The error I run into is TypeError: compose's first argument must be a CallCredentials object. I tried to follow it down but it goes into c code which loses me, I can't see what javascript type I have to give to comebineChannelCredentials to achieve what I'm looking for and the docs are a little sparse on how to achieve this.
You can pass metadata directly as an optional argument to a method call. So, for example, you could do this: var meta = new grpc.Metadata(); meta.add('key', 'value'); client.send(doc, meta, callback);
gRPC
37,526,077
21
I am having trouble finding the source of this error. I implemented a simple service using protobuf: syntax = "proto3"; package tourism; service RemoteService { rpc Login(LoginUserDTO) returns (Response) {} } message AgencyDTO{ int32 id=1; string name=2; string email=3; string password=4; } message LoginUserDTO{ string password=1; string email=2; } message SearchAttractionsDTO{ string name=1; int32 start_hour=2; int32 start_minute=3; int32 stop_hour=4; int32 stop_minute=5; AgencyDTO loggedUser=6; } message AttractionDTO{ int32 id=1; string name=2; string agency=3; int32 hour=4; int32 minute=5; int32 seats=6; int32 price=7; } message ReservationDTO{ int32 id=1; string first_name=2; string last_name=3; string phone=4; int32 seats=5; AttractionDTO attraction=6; AgencyDTO agency=7; } message Response{ enum ResponseType{ OK=0; NOT_LOGGED_ID=1; SERVER_ERROR=2; VALIDATOR_ERROR=3; } ResponseType type=1; AgencyDTO user=2; string message=3; } When using a java client everything works fine, the server receives the request and responds appropriately. When using C# with the same .proto file for generating sources at the client.Login() I get the following errror: Grpc.Core.RpcException Status(StatusCode=Unimplemented, Detail="Method tourism.RemoteService/Login is unimplemented"). The server receives the request but does not have time to respond and throws: INFO: Request from [email protected] May 22, 2017 12:28:58 AM io.grpc.internal.SerializingExecutor run SEVERE: Exception while executing runnable io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$2@4be43082 java.lang.IllegalStateException: call is closed at com.google.common.base.Preconditions.checkState(Preconditions.java:174) at io.grpc.internal.ServerCallImpl.sendHeaders(ServerCallImpl.java:103) at io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl.onNext(ServerCalls.java:282) at ServiceImp.login(ServiceImp.java:20) at tourism.RemoteServiceGrpc$MethodHandlers.invoke(RemoteServiceGrpc.java:187) at io.grpc.stub.ServerCalls$1$1.onHalfClose(ServerCalls.java:148) at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:262) at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$2.runInContext(ServerImpl.java:572) at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:52) at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:117) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Java server: import io.grpc.Server; import io.grpc.ServerBuilder; import io.grpc.stub.StreamObserver; import tourism.RemoteServiceGrpc; import tourism.Service; import java.io.IOException; import java.util.logging.Level; import java.util.logging.Logger; /** * Created by Andu on 21/05/2017. */ public class ServerGrpc { Logger logger= Logger.getLogger(ServerGrpc.class.getName()); private final Server server; private final int port; public ServerGrpc(int p){ port=p; server= ServerBuilder.forPort(port).addService(new ServiceImp()).build(); } public void start() throws IOException { server.start(); logger.info("Server started, listening on " + port); Runtime.getRuntime().addShutdownHook(new Thread() { @Override public void run() { // Use stderr here since the logger may has been reset by its JVM shutdown hook. 
System.err.println("*** shutting down gRPC server since JVM is shutting down"); ServerGrpc.this.stop(); System.err.println("*** server shut down"); } }); } public void stop() { if (server != null) { server.shutdown(); } } void blockUntilShutdown() throws InterruptedException { if (server != null) { server.awaitTermination(); } } private class ServiceImp extends RemoteServiceGrpc.RemoteServiceImplBase { Logger log=Logger.getLogger(ServiceImp.class.getName()); @Override public void login(Service.LoginUserDTO request, StreamObserver<Service.Response> responseStreamObserver){ super.login(request,responseStreamObserver); log.log(Level.INFO,"Request from "+request.getEmail()); Service.Response response= Service.Response.newBuilder().setMessage("Hello "+request.getEmail()+", I know your password: "+request.getPassword()).build(); responseStreamObserver.onNext(response); responseStreamObserver.onCompleted(); } } } C# Client: namespace testGrpc2 { class MainClass { public static void Main(string[] args) { var channel = new Channel("127.0.0.1:61666",ChannelCredentials.Insecure); var client = new RemoteService.RemoteServiceClient(channel); Response response=client.Login(new LoginUserDTO{Email="[email protected]",Password="notmypassword"}); Console.WriteLine(response); Console.ReadKey(); } } }
For me it was that I forgot to add the endpoint for the gRPC service in the Startup class:
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseRouting();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapGrpcService<GreeterService>();
        // Add your endpoint here like this
        endpoints.MapGrpcService<YourProtoService>();
    });
}
gRPC
44,102,096
21
I have been studying about Apache Thrift, ProtoBuf and Flatbuffers. I found the tutorial to use gRPC with protobuf at link but I am not finding any documentation to use gRPC with Flatbuffers. Can some one point me to the relevant documentation? I checked it on Google as well as on Stackoverflow. Any help would be appreciated.
Since this question was first asked, progress has been made on a) making gRPC codegen independent of protobuf (see https://github.com/grpc/grpc/pull/6130) and b) integrating that code generator into the FlatBuffers compiler flatc: https://github.com/google/flatbuffers/commit/48f37f9e0a04f2b60046dda7fef20a8b0ebc1a70 This is a very basic first implementation; feedback is welcome.
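In current FlatBuffers releases this is exposed through flatc itself: declare an rpc_service in the .fbs schema and pass --grpc next to the language flag to emit gRPC stubs alongside the normal generated code, for example (the file name is illustrative):
flatc --cpp --grpc greeter.fbs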
gRPC
34,170,945
20
gRPC is a "general RPC framework" which uses ProtoBuffer to serialize and deserialize while the net/rpc package seems could do "nearly" the same thing with encoding/gob and both are under the umbrella of Google. So what's the difference between them? What pros and cons dose choosing one of them have?
Well, you have said it yourself. gRPC is a framework that uses RPC to communicate. RPC is not Protobuf; rather, Protobuf can be carried over RPC, and gRPC is essentially Protobuf over RPC. You don't need to use Protobuf to create RPC services within your app, and skipping it is a reasonable choice for small to medium libraries/apps. It also means you don't need to learn the Protobuf syntax to create your own services. But Protobuf is much faster than REST. It is a much more convenient way to communicate, with the downside of the learning curve of the Protobuf syntax. Also, you can use Protobuf to generate the codebase in more languages than just Go. So if you have some kind of service in Java, you can use Protobuf to generate the RPC calls between the two easily, whereas if you use the net/rpc package you'll have to implement them twice (once in Go and once in Java). In general, I would use Protobuf for nearly everything; it gives you the confidence to use it in larger-scale or more complex projects.
gRPC
39,034,114
20
I'm occasionally getting cancellation errors when calling gRPC methods. Here's my client-side code (Using grpc-java 1.22.0 library): public class MyClient { private static final Logger logger = LoggerFactory.getLogger(MyClient.class); private ManagedChannel channel; private FooGrpc.FooStub fooStub; private final StreamObserver<Empty> responseObserver = new StreamObserver<>() { @Override public void onNext(Empty value) { } @Override public void onError(Throwable t) { logger.error("Error: ", t); } @Override public void onCompleted() { } }; public MyClient() { this.channel = NettyChannelBuilder .forAddress(host, port) .sslContext(GrpcSslContexts.forClient().trustManager(certStream).build()) .build(); var pool = Executors.newCachedThreadPool( new ThreadFactoryBuilder().setNameFormat("foo-pool-%d").build()); this.fooStub = FooGrpc.newStub(channel) .withExecutor(pool); } public void callFoo() { fooStub.withDeadlineAfter(500L, TimeUnit.MILLISECONDS) .myMethod(whatever, responseObserver); } } When I invoke callFoo() method, it usually works. Client sends a message and server receives it without problem. But this call occasionally gives me an error: io.grpc.StatusRuntimeException: CANCELLED: io.grpc.Context was cancelled without error at io.grpc.Status.asRuntimeException(Status.java:533) ~[handler-0.0.1-SNAPSHOT.jar:?] at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:442) [handler-0.0.1-SNAPSHOT.jar:?] at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39) [handler-0.0.1-SNAPSHOT.jar:?] at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23) [handler-0.0.1-SNAPSHOT.jar:?] at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40) [handler-0.0.1-SNAPSHOT.jar:?] at io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:700) [handler-0.0.1-SNAPSHOT.jar:?] at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39) [handler-0.0.1-SNAPSHOT.jar:?] at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23) [handler-0.0.1-SNAPSHOT.jar:?] at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40) [handler-0.0.1-SNAPSHOT.jar:?] at io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:399) [handler-0.0.1-SNAPSHOT.jar:?] at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:507) [handler-0.0.1-SNAPSHOT.jar:?] at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:66) [handler-0.0.1-SNAPSHOT.jar:?] at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:627) [handler-0.0.1-SNAPSHOT.jar:?] at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$700(ClientCallImpl.java:515) [handler-0.0.1-SNAPSHOT.jar:?] at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:686) [handler-0.0.1-SNAPSHOT.jar:?] at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:675) [handler-0.0.1-SNAPSHOT.jar:?] at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37) [handler-0.0.1-SNAPSHOT.jar:?] at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123) [handler-0.0.1-SNAPSHOT.jar:?] 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?] at java.lang.Thread.run(Thread.java:834) [?:?] The weird thing is that even though the call gives an error at client side, the server does receive the request, mostly. But sometimes the server misses it. It is not even DEADLINE_EXCEEDED exception, it just throws CANCELLED: io.grpc.Context was cancelled without error. No other description is provided, So I cannot figure out why this is happening. To summarize: gRPC call from client randomly gives CANCELLED error. When the error happens, the server sometimes gets the call, but sometimes not.
grpc-java supports automatic deadline and cancellation propagation. When an inbound RPC causes outbound RPCs, those outbound RPCs inherit the inbound RPC's deadline. Also, if the inbound RPC is cancelled the outbound RPCs will be cancelled. This is implemented via io.grpc.Context. If you do an outbound RPC that you want to live longer than the inbound RPC, you should use Context.fork(). public void myRpcMethod(Request req, StreamObserver<Response> observer) { // ctx has all the values as the current context, but // won't be cancelled Context ctx = Context.current().fork(); // Set ctx as the current context within the Runnable ctx.run(() -> { // Can start asynchronous work here that will not // be cancelled when myRpcMethod returns }); observer.onNext(generateReply()); observer.onCompleted(); }
gRPC
57,110,811
20
Using aspnetcore 3.1 and the Grpc.AspNetCore nuget package, I have managed to get gRPC services running successfully alongside standard asp.net controllers as described in this tutorial. However I would like to bind the gRPC services to a specific port (e.g. 5001), preferably through configuration instead of code if possible. This is because I would like to limit how my gRPC services are exposed. The closest I have come has been using RequireHost when mapping the endpoints: // Startup.cs public void Configure(IApplicationBuilder app) { // ... app.useEndpoints(endpoints => { endpoints.MapGrpcService<MyService>() .RequireHost("0.0.0.0:5001"); }); } This seems to do what I want but I can't find any documentation about it, and it requires configuration in code per service. Perhaps there is a better way?
In ASP.NET Core 6.0, ports can be changed in the Properties > launchSettings.json file, but that file is only considered when you run the server from Visual Studio or VS Code. I was trying to run the server directly using the .exe file for testing, and it was running with the default ports: "http://localhost:5000;https://localhost:5001". Finally, I changed the ports in appsettings.json for the .exe file:

"AllowedHosts": "*",
"Kestrel": {
  "Endpoints": {
    "Http": {
      "Url": "https://localhost:7005",
      "Protocols": "Http1AndHttp2"
    },
    "gRPC": {
      "Url": "http://localhost:5005",
      "Protocols": "Http2"
    }
  }
}
gRPC
63,827,667
20
New to gRPC and couldn't really find any example on how to enable SSL on the server side. I generated a key pair using OpenSSL, but it complains that the private key is invalid.

D0608 16:18:31.390303 Grpc.Core.Internal.UnmanagedLibrary Attempting to load native library "...\grpc_csharp_ext.dll"
D0608 16:18:31.424331 Grpc.Core.Internal.NativeExtension gRPC native library loaded successfully.
E0608 16:18:43.307324 0 ..\src\core\lib\tsi\ssl_transport_security.c:644: Invalid private key.
E0608 16:18:43.307824 0 ..\src\core\lib\security\security_connector.c:821: Handshaker factory creation failed with TSI_INVALID_ARGUMENT.
E0608 16:18:43.307824 0 ..\src\core\ext\transport\chttp2\server\secure\server_secure_chttp2.c:188: Unable to create secure server with credentials of type Ssl.

Here's my code:

var keypair = new KeyCertificatePair(
    File.ReadAllText(@"root-ca.pem"),
    File.ReadAllText(@"ssl-private.key"));

SslServerCredentials creds = new SslServerCredentials(new List<KeyCertificatePair>() { keypair });

Server server = new Server
{
    Services = { GrpcTest.BindService(new GrpcTestImpl()) },
    Ports = { new ServerPort("127.0.0.1", Port, creds) }
};
Here's what I did. Using OpenSSL, generate certificates with the following:

@echo off
set OPENSSL_CONF=c:\OpenSSL-Win64\bin\openssl.cfg

echo Generate CA key:
openssl genrsa -passout pass:1111 -des3 -out ca.key 4096

echo Generate CA certificate:
openssl req -passin pass:1111 -new -x509 -days 365 -key ca.key -out ca.crt -subj "/C=US/ST=CA/L=Cupertino/O=YourCompany/OU=YourApp/CN=MyRootCA"

echo Generate server key:
openssl genrsa -passout pass:1111 -des3 -out server.key 4096

echo Generate server signing request:
openssl req -passin pass:1111 -new -key server.key -out server.csr -subj "/C=US/ST=CA/L=Cupertino/O=YourCompany/OU=YourApp/CN=%COMPUTERNAME%"

echo Self-sign server certificate:
openssl x509 -req -passin pass:1111 -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt

echo Remove passphrase from server key:
openssl rsa -passin pass:1111 -in server.key -out server.key

echo Generate client key
openssl genrsa -passout pass:1111 -des3 -out client.key 4096

echo Generate client signing request:
openssl req -passin pass:1111 -new -key client.key -out client.csr -subj "/C=US/ST=CA/L=Cupertino/O=YourCompany/OU=YourApp/CN=%CLIENT-COMPUTERNAME%"

echo Self-sign client certificate:
openssl x509 -passin pass:1111 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt

echo Remove passphrase from client key:
openssl rsa -passin pass:1111 -in client.key -out client.key

Change password 1111 to anything you like.

Server:

var cacert = File.ReadAllText(@"ca.crt");
var servercert = File.ReadAllText(@"server.crt");
var serverkey = File.ReadAllText(@"server.key");
var keypair = new KeyCertificatePair(servercert, serverkey);
var sslCredentials = new SslServerCredentials(new List<KeyCertificatePair>() { keypair }, cacert, false);

var server = new Server
{
    Services = { GrpcTest.BindService(new GrpcTestImpl(writeToDisk)) },
    Ports = { new ServerPort("0.0.0.0", 555, sslCredentials) }
};
server.Start();

Client:

var cacert = File.ReadAllText(@"ca.crt");
var clientcert = File.ReadAllText(@"client.crt");
var clientkey = File.ReadAllText(@"client.key");
var ssl = new SslCredentials(cacert, new KeyCertificatePair(clientcert, clientkey));

channel = new Channel("localhost", 555, ssl);
client = new GrpcTest.GrpcTestClient(channel);

If "localhost" doesn't work, use the host name instead.
gRPC
37,714,558
19
I have created a very simple program which should list the topics available in a Google Cloud project. The code is trivial:

using System;
using Google.Pubsub.V1;

public class Test
{
    static void Main()
    {
        var projectId = "(fill in project ID here...)";
        var projectName = PublisherClient.FormatProjectName(projectId);
        var client = PublisherClient.Create();
        foreach (var topic in client.ListTopics(projectName))
        {
            Console.WriteLine(topic.Name);
        }
    }
}

When I run this from a "regular" msbuild project targeting .NET 4.5, it works fine. When I try to use dotnet cli (1.0.0-preview2-003121) with the following project.json file:

{
  "buildOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {
    "Google.Pubsub.V1": "1.0.0-beta01"
  },
  "frameworks": {
    "net45": { }
  }
}

... I see an exception:

Unhandled Exception: System.IO.FileNotFoundException: Error loading native library. Not found in any of the possible locations c:\[...]\Pubsub.Demo\bin\Debug\net45\win7-x64\nativelibs\windows_x64\grpc_csharp_ext.dll
   at Grpc.Core.Internal.UnmanagedLibrary.FirstValidLibraryPath(String[] libraryPathAlternatives)
   at Grpc.Core.Internal.UnmanagedLibrary..ctor(String[] libraryPathAlternatives)
   at ...

I'm not trying to target .NET Core, so shouldn't this be supported?
This is currently a limitation in gRPC 0.15, which Google.Pubsub.V1 uses as its RPC transport. Under msbuild, the build/net45/Grpc.Core.targets file in the Grpc.Core package copies all the native binaries into place. Under DNX, the packages weren't copied and gRPC tries to look for the file in the right place with the local package repository. Under dotnet cli, we need to use the "runtimes" root directory in the package to host the libraries.

We've implemented a fix for this in gRPC, but we didn't manage to get it into the beta-01 release. We're hoping to fix it for beta-02.

It's possible to work around this by just manually copying the file:

mkdir bin\Debug\net45\win7-x64\nativelibs\windows_x64
copy \users\jon\.dnx\packages\Grpc.Core\0.15.0\build\native\bin\windows_x64\grpc_csharp_ext.dll bin\Debug\net45\win7-x64\nativelibs\windows_x64

... but that's obviously pretty fiddly. I'd suggest just using msbuild until the underlying issue has been fixed.
gRPC
38,349,230
19
Does anyone know where I can find an example of a gRPC protobuf file that imports from a different file and uses a protobuf message in a return? I can't find any at all. I have a file...

syntax = "proto3";

package a1;

import "a.proto";

service mainservice {
  rpc DoSomething(...) returns (a.SomeResponse) {}
}

a.proto is also in the same directory and also compiles by itself. The error messages I'm getting are:

"a.SomeResponse" is not defined.
mainfile.proto: warning: Import a.proto but not used.
Found the answer... need to make sure the package name of a.proto is used when specifying the object imported (e.g. a_package_name.SomeResponse). Example:

base.proto

syntax = "proto3";

option csharp_namespace = "Api.Protos";

package base;

message BaseResponse {
  bool IsSuccess = 1;
  string Message = 2;
}

user.proto

syntax = "proto3";

import "Protos/base.proto";

option csharp_namespace = "Api.Protos";

package user;

message UserCreateResponse {
  base.BaseResponse response = 1;
}
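Applied back to the files in the question, the fix would look roughly like the sketch below. It assumes a.proto declares something like package a_pkg (substitute whatever package name it actually uses), and DoSomethingRequest is a hypothetical placeholder for the request type elided as "..." in the question:

// a.proto (assumed package name, shown only for illustration)
syntax = "proto3";

package a_pkg;

message SomeResponse {
  bool ok = 1;  // hypothetical field
}

// mainfile.proto
syntax = "proto3";

package a1;

import "a.proto";

// Hypothetical request type standing in for the elided "..."
message DoSomethingRequest {
  string query = 1;
}

service mainservice {
  // Qualify the imported type with its package name (a_pkg), not the file name (a)
  rpc DoSomething (DoSomethingRequest) returns (a_pkg.SomeResponse) {}
}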
gRPC
41,150,779
19
I am trying to use Google Cloud Endpoints to make a gRPC-based API that can transcode incoming REST requests. I am following their example code, but I cannot find any documentation on how to properly import and compile against the annotations.proto or the empty.proto files. Thank you!
I didn't understand that this was part of grpc-gateway. By following the docs I ran

protoc -I/usr/local/include -I. -I$GOPATH/src -I$GOPATH/src/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis --go_out=plugins=grpc:. *.proto

and compiled successfully.
gRPC
43,313,186
19
I am trying to transfer large amounts of structured data from Java to Python, including many objects that are related to each other in some form or another. When I receive them in my Python code, it's quite ugly to work with the types that are provided by protobuf. My VIM IDE crashed when trying to use autocomplete on the types, PyCharm doesn't complete anything, and generally it just seems absurd that they don't provide some clean class definition for the different types.

Is there a way to get IDE support while working with protobuf messages in Python? I'm looking at 20+ methods handling complex messages, and without IDE support I might as well code in Notepad. I understand that protobuf is using metaclasses (although I don't know why they do that). Maybe there is a way to generate Python class files from that data, or maybe there is something similar to TypeScript typing files.

Did I maybe misuse protobuf? I believed I would be able to describe my domain model in a way that may be used across languages. In Java I am happy with the generated classes and I can use them easily. Should I maybe have used something like swagger.io instead?
If you are using a recent Python (3.7+), then https://github.com/danielgtaylor/python-betterproto (disclaimer: I'm the author) will generate very clean Python dataclasses as output, which will give you proper typing and IDE completion support.

For example, this input:

syntax = "proto3";

package hello;

// Greeting represents a message you can tell a user.
message Greeting {
  string message = 1;
}

Would generate the following output:

# Generated by the protocol buffer compiler.  DO NOT EDIT!
# sources: hello.proto
# plugin: python-betterproto
from dataclasses import dataclass

import betterproto


@dataclass
class Greeting(betterproto.Message):
    """Greeting represents a message you can tell a user."""

    message: str = betterproto.string_field(1)

In general the output of this plugin mimics the *.proto input and is very easy to read if you happen to jump to definition on a message or field. It's been a huge improvement for me personally over the official Google compiler plugin, and supports async gRPC out of the box as well.
gRPC
49,755,565
19
I want to mock my gRPC client to ensure that it is resilient to failure by throwing a new StatusRuntimeException(Status.UNAVAILABLE) (this is the exception that is thrown when java.net.ConnectException: Connection refused reaches the gRPC client). However, the generated class is final, so mocking it will not work.

How do I get BlahServiceBlockingStub to throw new StatusRuntimeException(Status.UNAVAILABLE) without having to refactor my code to create a wrapper class around BlahServiceBlockingStub?

This is what I have tried (where BlahServiceBlockingStub was generated by gRPC):

@Test
public void test() {
    BlahServiceBlockingStub blahServiceBlockingStub = mock(BlahServiceBlockingStub.class);
    when(blahServiceBlockingStub.blah(any())).thenThrow(new StatusRuntimeException(Status.UNAVAILABLE));

    blahServiceBlockingStub.blah(null);
}

Unfortunately I get the below exception, as expected:

org.mockito.exceptions.base.MockitoException:
Cannot mock/spy class BlahServiceGrpc$BlahServiceBlockingStub
Mockito cannot mock/spy following:
  - final classes
  - anonymous classes
  - primitive types
    at MyTestClass.test(MyTestClass.java:655)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    .
    .
    .

Because I tried mocking the final class generated by gRPC:

public static final class BlahServiceBlockingStub extends io.grpc.stub.AbstractStub<BlahServiceBlockingStub> {
    private BlahServiceBlockingStub(io.grpc.Channel channel) {
        super(channel);
    }
Do not mock the client stub, or any other final class/method. The gRPC team may go out of their way to break your usage of such mocks, as they are extremely brittle and can produce "impossible" results. Mock the service, not the client stub. When combined with the in-process transport it produces fast, reliable tests. This is the same approach as demonstrated in the grpc-java hello world example.

@Rule
public final GrpcCleanupRule grpcCleanup = new GrpcCleanupRule();

@Test
public void test() {
    // This can be a mock, but is easier here as a fake implementation
    BlahServiceImplBase serviceImpl = new BlahServiceImplBase() {
        @Override
        public void blah(Request req, StreamObserver<Response> resp) {
            resp.onError(new StatusRuntimeException(Status.UNAVAILABLE));
        }
    };

    // Note that the channel and server can be created in any order
    grpcCleanup.register(InProcessServerBuilder.forName("mytest")
        .directExecutor().addService(serviceImpl).build().start());
    ManagedChannel chan = grpcCleanup.register(
        InProcessChannelBuilder.forName("mytest").directExecutor().build());

    BlahServiceBlockingStub blahServiceBlockingStub =
        BlahServiceGrpc.newBlockingStub(chan);
    blahServiceBlockingStub.blah(null);
}

When doing multiple tests, you can hoist the server, channel, and stub creation into fields or @Before, out of the individual tests. When doing that it can be convenient to use MutableHandlerRegistry as a fallbackHandlerRegistry() on the server. That allows you to register services after the server is started. See the route guide example for a fuller example of that approach.
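For context, the test above presumes a service definition along these lines. The names are reconstructed from the class names in the question (BlahServiceGrpc, BlahServiceBlockingStub, Request, Response), so treat it as a hypothetical sketch rather than the asker's actual .proto:

syntax = "proto3";

option java_multiple_files = true;

// Hypothetical service matching the generated names used in the question.
service BlahService {
  // Generates blah(...) on the blocking stub and on BlahServiceImplBase.
  rpc Blah (Request) returns (Response);
}

message Request {
  string payload = 1;  // illustrative field
}

message Response {
  string payload = 1;  // illustrative field
}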
gRPC
59,536,673
19
Following the docs on how to set up a gRPC gateway, I find myself stuck at step four of generating the gRPC gateway. Namely, things fall apart when the following line is added:

import "google/api/annotations.proto";

The documentation says "You will need to provide the required third party protobuf files to the protoc compiler", but not actually how to do so. How do I add google/api/annotations.proto as a dependency?
I solved it one way by adding the third-party googleapis protos and their contents to the root of my project. It feels wrong, but apparently this is encouraged.
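As a rough sketch of what that looks like in practice (the package, service, and message names below are placeholders, and the exact directory you vendor the files into is your choice): copy google/api/annotations.proto and google/api/http.proto from the googleapis repository into your project, pass that directory to protoc with an -I include flag, and the import then resolves:

syntax = "proto3";

package yourservice;  // placeholder package name

// Resolves against whichever directory you pass to protoc with -I,
// i.e. your vendored copy of the googleapis protos.
import "google/api/annotations.proto";

message GetThingRequest {
  string id = 1;
}

message Thing {
  string id = 1;
  string name = 2;
}

service ThingService {
  rpc GetThing (GetThingRequest) returns (Thing) {
    // The (google.api.http) option is what annotations.proto provides.
    option (google.api.http) = {
      get: "/v1/things/{id}"
    };
  }
}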
gRPC
66,168,350
19
I've seen two different ways of declaring a gRPC service using Protobuf v3. Some code has the rpc line end with a semicolon (such as the current proto3 documentation):

service SearchService {
  rpc Search (SearchRequest) returns (SearchResponse);
}

Other code has the rpc line end with {}:

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

Both of these compile with the protoc v3.0.0-alpha-2 compiler and produce the same (Go) code. What is the difference between the two syntaxes?
Nothing, really; they are equivalent. The {} syntax is used when there are options. If you don't specify any options, either syntax works (just like in C!).
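To make that concrete, here is a small sketch. SearchRequest/SearchResponse are borrowed from the question, the extra method names are made up, and deprecated is simply the easiest built-in method option to demonstrate with:

service SearchService {
  // With no options, these two forms are interchangeable:
  rpc Search (SearchRequest) returns (SearchResponse);
  rpc SearchToo (SearchRequest) returns (SearchResponse) {}

  // Once a method carries options, the braces are required to hold them:
  rpc OldSearch (SearchRequest) returns (SearchResponse) {
    option deprecated = true;
  }
}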
gRPC
30,106,667
18