Class KafkaProducer<K,V>
- java.lang.Object
- io.vertx.reactivex.kafka.client.producer.KafkaProducer<K,V>
- All Implemented Interfaces: StreamBase, WriteStream<KafkaProducerRecord<K,V>>
public class KafkaProducer<K,V> extends Object implements WriteStream<KafkaProducerRecord<K,V>>
Vert.x Kafka producer. The WriteStream.write(T) method provides global control over writing a record.
NOTE: This class has been automatically generated from the original non RX-ified interface using Vert.x codegen.
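As a quick orientation, a minimal usage sketch (assuming a broker at localhost:9092 and String keys/values; the topic, config values and class name are illustrative, not part of this API):

  import io.vertx.reactivex.core.Vertx;
  import io.vertx.reactivex.kafka.client.producer.KafkaProducer;
  import io.vertx.reactivex.kafka.client.producer.KafkaProducerRecord;

  import java.util.HashMap;
  import java.util.Map;

  public class ProducerExample {
    public static void main(String[] args) {
      Vertx vertx = Vertx.vertx();

      // Plain Kafka producer configuration (String keys/values assumed)
      Map<String, String> config = new HashMap<>();
      config.put("bootstrap.servers", "localhost:9092");
      config.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
      config.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
      config.put("acks", "1");

      KafkaProducer<String, String> producer = KafkaProducer.create(vertx, config);

      // Send one record and log the partition/offset it was written to
      KafkaProducerRecord<String, String> record =
        KafkaProducerRecord.create("demo-topic", "key-1", "hello");

      producer.rxSend(record).subscribe(
        metadata -> System.out.println("Written to partition " + metadata.getPartition()
          + " at offset " + metadata.getOffset()),
        err -> System.err.println("Write failed: " + err.getMessage())
      );
    }
  }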
-
-
Field Summary
Fields
- static io.vertx.lang.rx.TypeArg<KafkaProducer> __TYPE_ARG
- io.vertx.lang.rx.TypeArg<K> __typeArg_0
- io.vertx.lang.rx.TypeArg<V> __typeArg_1
-
Constructor Summary
Constructors
- KafkaProducer(KafkaProducer delegate)
- KafkaProducer(Object delegate, io.vertx.lang.rx.TypeArg<K> typeArg_0, io.vertx.lang.rx.TypeArg<V> typeArg_1)
-
Method Summary
- Future<Void> abortTransaction() - Aborts the ongoing transaction.
- Future<Void> beginTransaction() - Starts a new kafka transaction.
- Future<Void> close() - Close the producer.
- Future<Void> close(long timeout) - Close the producer.
- Future<Void> commitTransaction() - Commits the ongoing transaction.
- static <K,V> KafkaProducer<K,V> create(Vertx vertx, Map<String,String> config) - Create a new KafkaProducer instance.
- static <K,V> KafkaProducer<K,V> create(Vertx vertx, Map<String,String> config, Class<K> keyType, Class<V> valueType) - Create a new KafkaProducer instance.
- static <K,V> KafkaProducer<K,V> create(Vertx vertx, org.apache.kafka.clients.producer.Producer<K,V> producer) - Create a new KafkaProducer instance from a native Producer.
- static <K,V> KafkaProducer<K,V> create(Vertx vertx, org.apache.kafka.clients.producer.Producer<K,V> producer, KafkaClientOptions options) - Create a new KafkaProducer instance from a native Producer.
- static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, KafkaClientOptions options) - Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.
- static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, KafkaClientOptions options, Class<K> keyType, Class<V> valueType) - Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.
- static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, Map<String,String> config) - Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.
- static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, Map<String,String> config, Class<K> keyType, Class<V> valueType) - Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.
- KafkaProducer<K,V> drainHandler(Handler<Void> handler) - Set a drain handler on the stream.
- Future<Void> end() - Ends the stream.
- Future<Void> end(KafkaProducerRecord<K,V> data) - Same as WriteStream.end() but writes some data to the stream before ending.
- boolean equals(Object o)
- KafkaProducer<K,V> exceptionHandler(Handler<Throwable> handler) - Set an exception handler on the write stream.
- Future<Void> flush() - Invoking this method makes all buffered records immediately available to write.
- KafkaProducer getDelegate()
- int hashCode()
- Future<Void> initTransactions() - Initializes the underlying kafka transactional producer.
- static <K,V> KafkaProducer<K,V> newInstance(KafkaProducer arg)
- static <K,V> KafkaProducer<K,V> newInstance(KafkaProducer arg, io.vertx.lang.rx.TypeArg<K> __typeArg_K, io.vertx.lang.rx.TypeArg<V> __typeArg_V)
- Future<List<PartitionInfo>> partitionsFor(String topic) - Get the partition metadata for the given topic.
- Completable rxAbortTransaction() - Aborts the ongoing transaction.
- Completable rxBeginTransaction() - Starts a new kafka transaction.
- Completable rxClose() - Close the producer.
- Completable rxClose(long timeout) - Close the producer.
- Completable rxCommitTransaction() - Commits the ongoing transaction.
- Completable rxEnd() - Ends the stream.
- Completable rxEnd(KafkaProducerRecord<K,V> data) - Same as WriteStream.end() but writes some data to the stream before ending.
- Completable rxFlush() - Invoking this method makes all buffered records immediately available to write.
- Completable rxInitTransactions() - Initializes the underlying kafka transactional producer.
- Single<List<PartitionInfo>> rxPartitionsFor(String topic) - Get the partition metadata for the given topic.
- Single<RecordMetadata> rxSend(KafkaProducerRecord<K,V> record) - Asynchronously write a record to a topic.
- Completable rxWrite(KafkaProducerRecord<K,V> data) - Write some data to the stream.
- Future<RecordMetadata> send(KafkaProducerRecord<K,V> record) - Asynchronously write a record to a topic.
- KafkaProducer<K,V> setWriteQueueMaxSize(int i) - Set the maximum size of the write queue to maxSize.
- WriteStreamObserver<KafkaProducerRecord<K,V>> toObserver()
- String toString()
- WriteStreamSubscriber<KafkaProducerRecord<K,V>> toSubscriber()
- Future<Void> write(KafkaProducerRecord<K,V> data) - Write some data to the stream.
- boolean writeQueueFull() - This will return true if there are more bytes in the write queue than the value set using setWriteQueueMaxSize(int).
-
-
-
Field Detail
-
__TYPE_ARG
public static final io.vertx.lang.rx.TypeArg<KafkaProducer> __TYPE_ARG
-
__typeArg_0
public final io.vertx.lang.rx.TypeArg<K> __typeArg_0
-
__typeArg_1
public final io.vertx.lang.rx.TypeArg<V> __typeArg_1
-
-
Constructor Detail
-
KafkaProducer
public KafkaProducer(KafkaProducer delegate)
-
-
Method Detail
-
getDelegate
public KafkaProducer getDelegate()
- Specified by: getDelegate in interface StreamBase
- Specified by: getDelegate in interface WriteStream<KafkaProducerRecord<K,V>>
-
toObserver
public WriteStreamObserver<KafkaProducerRecord<K,V>> toObserver()
- Specified by: toObserver in interface WriteStream<KafkaProducerRecord<K,V>>
-
toSubscriber
public WriteStreamSubscriber<KafkaProducerRecord<K,V>> toSubscriber()
- Specified by: toSubscriber in interface WriteStream<KafkaProducerRecord<K,V>>
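For illustration, a Flowable of records can be piped into the producer through this adapter; back-pressure is driven by the producer's write queue. A sketch, assuming io.reactivex.Flowable (RxJava 2) is on the classpath and the topic and range are made up:

  void pipeRecords(KafkaProducer<String, String> producer) {
    // Adapt the producer to a Reactive Streams subscriber
    WriteStreamSubscriber<KafkaProducerRecord<String, String>> subscriber = producer.toSubscriber();

    Flowable
      .range(1, 100)
      .map(i -> KafkaProducerRecord.<String, String>create("demo-topic", "key-" + i, "value-" + i))
      .subscribe(subscriber);
  }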
-
write
public Future<Void> write(KafkaProducerRecord<K,V> data)
Write some data to the stream.
The data is usually put on an internal write queue, and the write actually happens asynchronously. To avoid running out of memory by putting too much on the write queue, check the WriteStream.writeQueueFull() method before writing. This is done automatically if using a Pipe.
When the data is moved from the queue to the actual medium, the returned Future will be completed with the write result, e.g. the future is succeeded when a server HTTP response buffer is written to the socket and failed if the remote client has closed the socket while the data was still pending for write.
- Specified by: write in interface WriteStream<KafkaProducerRecord<K,V>>
- Parameters: data - the data to write
- Returns: a future completed with the write result
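A back-pressure-aware write loop might look like the following sketch (records and resumeWriting() are hypothetical placeholders for the caller's own source and resume logic):

  for (KafkaProducerRecord<String, String> record : records) {
    if (producer.writeQueueFull()) {
      // Queue is full: stop writing and resume once the queue drains (see drainHandler below)
      producer.drainHandler(v -> resumeWriting());
      break;
    }
    producer.write(record)
      .onFailure(err -> System.err.println("Write failed: " + err.getMessage()));
  }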
-
rxWrite
public Completable rxWrite(KafkaProducerRecord<K,V> data)
Write some data to the stream.
The data is usually put on an internal write queue, and the write actually happens asynchronously. To avoid running out of memory by putting too much on the write queue, check the WriteStream.writeQueueFull() method before writing. This is done automatically if using a Pipe.
When the data is moved from the queue to the actual medium, the returned Future will be completed with the write result, e.g. the future is succeeded when a server HTTP response buffer is written to the socket and failed if the remote client has closed the socket while the data was still pending for write.
- Specified by: rxWrite in interface WriteStream<KafkaProducerRecord<K,V>>
- Parameters: data - the data to write
- Returns: a future completed with the write result
-
end
public Future<Void> end()
Ends the stream. Once the stream has ended, it cannot be used any more.
- Specified by: end in interface WriteStream<KafkaProducerRecord<K,V>>
- Returns: a future completed with the result
-
rxEnd
public Completable rxEnd()
Ends the stream. Once the stream has ended, it cannot be used any more.
- Specified by: rxEnd in interface WriteStream<KafkaProducerRecord<K,V>>
- Returns: a future completed with the result
-
end
public Future<Void> end(KafkaProducerRecord<K,V> data)
Same as WriteStream.end() but writes some data to the stream before ending.
- Specified by: end in interface WriteStream<KafkaProducerRecord<K,V>>
- Parameters: data - the data to write
- Returns: a future completed with the result
-
rxEnd
public Completable rxEnd(KafkaProducerRecord<K,V> data)
Same as WriteStream.end() but writes some data to the stream before ending.
- Specified by: rxEnd in interface WriteStream<KafkaProducerRecord<K,V>>
- Parameters: data - the data to write
- Returns: a future completed with the result
-
writeQueueFull
public boolean writeQueueFull()
This will return true if there are more bytes in the write queue than the value set using setWriteQueueMaxSize(int).
- Specified by: writeQueueFull in interface WriteStream<KafkaProducerRecord<K,V>>
- Returns: true if write queue is full
-
createShared
public static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, Map<String,String> config)
Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.
When close has been called for each shared producer the resources will be released. Calling end closes all shared producers.
- Parameters:
vertx - Vert.x instance to use
name - the producer name to identify it
config - Kafka producer configuration
- Returns: an instance of the KafkaProducer
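For example, two verticles can reuse one underlying stream by asking for the same name (a sketch; the name "the-producer" and the config map are illustrative):

  // Anywhere in the application: producers created with the same name share one stream
  KafkaProducer<String, String> producer =
    KafkaProducer.createShared(vertx, "the-producer", config);

  // ... use the producer ...

  // Each holder closes its own reference; the shared resources are released
  // only once close has been called for every shared producer
  producer.close();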
-
createShared
public static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, KafkaClientOptions options)
Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.
When close has been called for each shared producer the resources will be released. Calling end closes all shared producers.
- Parameters:
vertx - Vert.x instance to use
name - the producer name to identify it
options - Kafka producer options
- Returns: an instance of the KafkaProducer
-
createShared
public static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, Map<String,String> config, Class<K> keyType, Class<V> valueType)
Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.
When close has been called for each shared producer the resources will be released. Calling end closes all shared producers.
- Parameters:
vertx - Vert.x instance to use
name - the producer name to identify it
config - Kafka producer configuration
keyType - class type for the key serialization
valueType - class type for the value serialization
- Returns: an instance of the KafkaProducer
-
createShared
public static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, KafkaClientOptions options, Class<K> keyType, Class<V> valueType)
Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.
When close has been called for each shared producer the resources will be released. Calling end closes all shared producers.
- Parameters:
vertx - Vert.x instance to use
name - the producer name to identify it
options - Kafka producer options
keyType - class type for the key serialization
valueType - class type for the value serialization
- Returns: an instance of the KafkaProducer
-
create
public static <K,V> KafkaProducer<K,V> create(Vertx vertx, Map<String,String> config)
Create a new KafkaProducer instance.
- Parameters:
vertx - Vert.x instance to use
config - Kafka producer configuration
- Returns: an instance of the KafkaProducer
-
create
public static <K,V> KafkaProducer<K,V> create(Vertx vertx, Map<String,String> config, Class<K> keyType, Class<V> valueType)
Create a new KafkaProducer instance.
- Parameters:
vertx - Vert.x instance to use
config - Kafka producer configuration
keyType - class type for the key serialization
valueType - class type for the value serialization
- Returns: an instance of the KafkaProducer
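A sketch of the typed variant, where the key and value classes are used for the serialization as described by the keyType/valueType parameters above (broker address and settings are illustrative):

  Map<String, String> config = new HashMap<>();
  config.put("bootstrap.servers", "localhost:9092");
  config.put("acks", "1");

  // String.class / String.class select the key and value serialization
  KafkaProducer<String, String> producer =
    KafkaProducer.create(vertx, config, String.class, String.class);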
-
initTransactions
public Future<Void> initTransactions()
Initializes the underlying kafka transactional producer. See initTransactions().
- Returns: a future notified with the result
-
rxInitTransactions
public Completable rxInitTransactions()
Initializes the underlying kafka transactional producer. See initTransactions().
- Returns: a future notified with the result
-
beginTransaction
public Future<Void> beginTransaction()
Starts a new kafka transaction. See beginTransaction().
- Returns: a future notified with the result
-
rxBeginTransaction
public Completable rxBeginTransaction()
Starts a new kafka transaction. See beginTransaction().
- Returns: a future notified with the result
-
commitTransaction
public Future<Void> commitTransaction()
Commits the ongoing transaction. See commitTransaction().
- Returns: a future notified with the result
-
rxCommitTransaction
public Completable rxCommitTransaction()
Commits the ongoing transaction. See commitTransaction().
- Returns: a future notified with the result
-
abortTransaction
public Future<Void> abortTransaction()
Aborts the ongoing transaction. See abortTransaction().
- Returns: a future notified with the result
-
rxAbortTransaction
public Completable rxAbortTransaction()
Aborts the ongoing transaction. See abortTransaction().
- Returns: a future notified with the result
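Putting the transactional methods together with the Rx variants, a sketch of the full flow (it assumes the producer was created with a transactional.id in its configuration; the topic and record are illustrative):

  KafkaProducerRecord<String, String> record =
    KafkaProducerRecord.create("demo-topic", "key-1", "value-1");

  producer.rxInitTransactions()
    .andThen(producer.rxBeginTransaction())
    .andThen(producer.rxSend(record))
    .flatMapCompletable(metadata -> producer.rxCommitTransaction())
    // On any failure, abort the transaction and propagate the original error
    .onErrorResumeNext(err -> producer.rxAbortTransaction().andThen(io.reactivex.Completable.error(err)))
    .subscribe(
      () -> System.out.println("Transaction committed"),
      err -> System.err.println("Transaction aborted: " + err.getMessage())
    );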
-
exceptionHandler
public KafkaProducer<K,V> exceptionHandler(Handler<Throwable> handler)
Description copied from interface: WriteStream
Set an exception handler on the write stream.
- Specified by: exceptionHandler in interface StreamBase
- Specified by: exceptionHandler in interface WriteStream<KafkaProducerRecord<K,V>>
- Parameters: handler - the exception handler
- Returns: a reference to this, so the API can be used fluently
-
setWriteQueueMaxSize
public KafkaProducer<K,V> setWriteQueueMaxSize(int i)
Description copied from interface: WriteStream
Set the maximum size of the write queue to maxSize. You will still be able to write to the stream even if there are more than maxSize items in the write queue. This is used as an indicator by classes such as Pipe to provide flow control. The value is defined by the implementation of the stream, e.g. in bytes for a NetSocket, etc.
- Specified by: setWriteQueueMaxSize in interface WriteStream<KafkaProducerRecord<K,V>>
- Parameters: i - the max size of the write stream
- Returns: a reference to this, so the API can be used fluently
-
drainHandler
public KafkaProducer<K,V> drainHandler(Handler<Void> handler)
Description copied from interface: WriteStream
Set a drain handler on the stream. If the write queue is full, then the handler will be called when the write queue is ready to accept buffers again. See Pipe for an example of this being used.
The stream implementation defines when the drain handler is called; for example it could be when the queue size has been reduced to maxSize / 2.
- Specified by: drainHandler in interface WriteStream<KafkaProducerRecord<K,V>>
- Parameters: handler - the handler
- Returns: a reference to this, so the API can be used fluently
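Together with writeQueueFull(), this supports a simple pause/resume pattern (a sketch; pauseSource() and resumeSource() stand for whatever feeds records to the producer):

  if (producer.writeQueueFull()) {
    pauseSource();
    // Called once the write queue can accept records again
    producer.drainHandler(v -> resumeSource());
  }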
-
send
public Future<RecordMetadata> send(KafkaProducerRecord<K,V> record)
Asynchronously write a record to a topic.
- Parameters: record - record to write
- Returns: a Future completed with the record metadata
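For example (a sketch; the topic and values are illustrative):

  producer.send(KafkaProducerRecord.create("demo-topic", "key-1", "value-1"))
    .onSuccess(metadata -> System.out.println("Written to " + metadata.getTopic()
      + ", partition " + metadata.getPartition()
      + ", offset " + metadata.getOffset()))
    .onFailure(err -> System.err.println("Send failed: " + err.getMessage()));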
-
rxSend
public Single<RecordMetadata> rxSend(KafkaProducerRecord<K,V> record)
Asynchronously write a record to a topic.
- Parameters: record - record to write
- Returns: a Future completed with the record metadata
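The Rx variant exposes the same operation as a Single (sketch; topic and values are illustrative):

  producer.rxSend(KafkaProducerRecord.create("demo-topic", "key-1", "value-1"))
    .subscribe(
      metadata -> System.out.println("Written at offset " + metadata.getOffset()),
      err -> System.err.println("Send failed: " + err.getMessage())
    );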
-
partitionsFor
public Future<List<PartitionInfo>> partitionsFor(String topic)
Get the partition metadata for the given topic.
- Parameters: topic - the topic for which to get partition metadata
- Returns: a future notified with the result
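For example (a sketch; the topic name is illustrative):

  producer.partitionsFor("demo-topic")
    .onSuccess(partitions -> partitions.forEach(info ->
      System.out.println(info.getTopic() + " / partition " + info.getPartition())))
    .onFailure(err -> System.err.println("Metadata lookup failed: " + err.getMessage()));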
-
rxPartitionsFor
public Single<List<PartitionInfo>> rxPartitionsFor(String topic)
Get the partition metadata for the given topic.
- Parameters: topic - the topic for which to get partition metadata
- Returns: a future notified with the result
-
flush
public Future<Void> flush()
Invoking this method makes all buffered records immediately available to write.
- Returns: a future notified with the result
-
rxFlush
public Completable rxFlush()
Invoking this method makes all buffered records immediately available to write.
- Returns: a future notified with the result
-
close
public Future<Void> close()
Close the producer.
- Returns: a Future completed with the operation result
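On shutdown, pending records can be flushed before closing (a sketch):

  producer.flush()
    .compose(v -> producer.close())
    .onComplete(ar -> System.out.println("Producer closed: " + ar.succeeded()));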
-
rxClose
public Completable rxClose()
Close the producer.
- Returns: a Future completed with the operation result
-
close
public Future<Void> close(long timeout)
Close the producer.
- Parameters: timeout -
- Returns: a future notified with the result
-
rxClose
public Completable rxClose(long timeout)
Close the producer.
- Parameters: timeout -
- Returns: a future notified with the result
-
create
public static <K,V> KafkaProducer<K,V> create(Vertx vertx, org.apache.kafka.clients.producer.Producer<K,V> producer)
Create a new KafkaProducer instance from a native Producer.
- Parameters:
vertx - Vert.x instance to use
producer - the Kafka producer to wrap
- Returns: an instance of the KafkaProducer
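A sketch of wrapping an already configured native producer (the Properties values are illustrative):

  java.util.Properties props = new java.util.Properties();
  props.put("bootstrap.servers", "localhost:9092");
  props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
  props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

  org.apache.kafka.clients.producer.KafkaProducer<String, String> nativeProducer =
    new org.apache.kafka.clients.producer.KafkaProducer<>(props);

  // Wrap the native producer so it can be used with the Vert.x / RxJava APIs
  KafkaProducer<String, String> producer = KafkaProducer.create(vertx, nativeProducer);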
-
create
public static <K,V> KafkaProducer<K,V> create(Vertx vertx, org.apache.kafka.clients.producer.Producer<K,V> producer, KafkaClientOptions options)
Create a new KafkaProducer instance from a native Producer.
- Parameters:
vertx - Vert.x instance to use
producer - the Kafka producer to wrap
options - options used only for tracing settings
- Returns: an instance of the KafkaProducer
-
newInstance
public static <K,V> KafkaProducer<K,V> newInstance(KafkaProducer arg)
-
newInstance
public static <K,V> KafkaProducer<K,V> newInstance(KafkaProducer arg, io.vertx.lang.rx.TypeArg<K> __typeArg_K, io.vertx.lang.rx.TypeArg<V> __typeArg_V)
-
-