public class KafkaConsumer<K,V> extends Object implements ReadStream<KafkaConsumerRecord<K,V>>
You receive Kafka records by providing a handler(io.vertx.core.Handler<io.vertx.reactivex.kafka.client.consumer.KafkaConsumerRecord<K, V>>). As messages arrive, the handler is called with the records.
The pause() and resume() methods provide global control over reading records from the consumer.
The pause(Set) and resume(Set) methods provide finer-grained control over reading records for specific topics/partitions; these are Kafka-specific operations.
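A minimal usage sketch (the broker address, group id, and topic name are placeholders):

```java
import io.vertx.reactivex.core.Vertx;
import io.vertx.reactivex.kafka.client.consumer.KafkaConsumer;

import java.util.HashMap;
import java.util.Map;

public class ConsumerExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Plain Kafka consumer configuration; all values below are placeholders.
    Map<String, String> config = new HashMap<>();
    config.put("bootstrap.servers", "localhost:9092");
    config.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    config.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    config.put("group.id", "my_group");
    config.put("auto.offset.reset", "earliest");
    config.put("enable.auto.commit", "true");

    KafkaConsumer<String, String> consumer = KafkaConsumer.create(vertx, config);

    // Record handler: called for each record as it arrives.
    consumer.handler(record ->
        System.out.println("key=" + record.key() + " value=" + record.value() +
            " partition=" + record.partition() + " offset=" + record.offset()));

    // Subscribe to a topic; the returned Completable completes once the subscription is set up.
    consumer.rxSubscribe("my_topic").subscribe(
        () -> System.out.println("subscribed"),
        err -> System.err.println("subscription failed: " + err.getMessage()));
  }
}
```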
NOTE: This class has been automatically generated from the original non RX-ified interface using Vert.x codegen.
Modifier and Type | Field and Description |
---|---|
static io.vertx.lang.rx.TypeArg<KafkaConsumer> |
__TYPE_ARG |
io.vertx.lang.rx.TypeArg<K> |
__typeArg_0 |
io.vertx.lang.rx.TypeArg<V> |
__typeArg_1 |
Constructor and Description |
---|
KafkaConsumer(KafkaConsumer delegate) |
KafkaConsumer(Object delegate,
io.vertx.lang.rx.TypeArg<K> typeArg_0,
io.vertx.lang.rx.TypeArg<V> typeArg_1) |
Modifier and Type | Method and Description |
---|---|
KafkaConsumer<K,V> |
assign(Set<TopicPartition> topicPartitions)
Manually assign a list of partitions to this consumer.
|
KafkaConsumer<K,V> |
assign(Set<TopicPartition> topicPartitions,
Handler<AsyncResult<Void>> completionHandler)
Manually assign a list of partitions to this consumer.
|
KafkaConsumer<K,V> |
assign(TopicPartition topicPartition)
Manually assign a partition to this consumer.
|
KafkaConsumer<K,V> |
assign(TopicPartition topicPartition,
Handler<AsyncResult<Void>> completionHandler)
Manually assign a partition to this consumer.
|
KafkaConsumer<K,V> |
assignment()
Get the set of partitions currently assigned to this consumer.
|
KafkaConsumer<K,V> |
assignment(Handler<AsyncResult<Set<TopicPartition>>> handler)
Get the set of partitions currently assigned to this consumer.
|
KafkaConsumer<K,V> |
batchHandler(Handler<KafkaConsumerRecords<K,V>> handler)
Set the handler to be used when batches of messages are fetched
from the Kafka server.
|
void |
beginningOffsets(TopicPartition topicPartition)
Get the first offset for the given partition.
|
void |
beginningOffsets(TopicPartition topicPartition,
Handler<AsyncResult<Long>> handler)
Get the first offset for the given partition.
|
void |
close()
Close the consumer
|
void |
close(Handler<AsyncResult<Void>> completionHandler)
Close the consumer
|
void |
commit()
Commit current offsets for all the subscribed topics and partitions.
|
void |
commit(Handler<AsyncResult<Void>> completionHandler)
Commit current offsets for all the subscribed topics and partitions.
|
void |
committed(TopicPartition topicPartition)
Get the last committed offset for the given partition (whether the commit happened by this process or another).
|
void |
committed(TopicPartition topicPartition,
Handler<AsyncResult<OffsetAndMetadata>> handler)
Get the last committed offset for the given partition (whether the commit happened by this process or another).
|
static <K,V> KafkaConsumer<K,V> |
create(Vertx vertx,
org.apache.kafka.clients.consumer.Consumer<K,V> consumer)
Create a new KafkaConsumer instance from a native Kafka Consumer.
|
static <K,V> KafkaConsumer<K,V> |
create(Vertx vertx,
org.apache.kafka.clients.consumer.Consumer<K,V> consumer,
KafkaClientOptions options)
Create a new KafkaConsumer instance from a native Kafka Consumer.
|
static <K,V> KafkaConsumer<K,V> |
create(Vertx vertx,
KafkaClientOptions options)
Create a new KafkaConsumer instance
|
static <K,V> KafkaConsumer<K,V> |
create(Vertx vertx,
KafkaClientOptions options,
Class<K> keyType,
Class<V> valueType)
Create a new KafkaConsumer instance
|
static <K,V> KafkaConsumer<K,V> |
create(Vertx vertx,
Map<String,String> config)
Create a new KafkaConsumer instance
|
static <K,V> KafkaConsumer<K,V> |
create(Vertx vertx,
Map<String,String> config,
Class<K> keyType,
Class<V> valueType)
Create a new KafkaConsumer instance
|
long |
demand()
Returns the current demand.
|
KafkaConsumer<K,V> |
endHandler(Handler<Void> endHandler)
Set an end handler.
|
void |
endOffsets(TopicPartition topicPartition)
Get the last offset for the given partition.
|
void |
endOffsets(TopicPartition topicPartition,
Handler<AsyncResult<Long>> handler)
Get the last offset for the given partition.
|
boolean |
equals(Object o) |
KafkaConsumer<K,V> |
exceptionHandler(Handler<Throwable> handler)
Set an exception handler on the read stream.
|
KafkaConsumer<K,V> |
fetch(long amount)
Fetch the specified
amount of elements. |
KafkaConsumer |
getDelegate() |
KafkaConsumer<K,V> |
handler(Handler<KafkaConsumerRecord<K,V>> handler)
Set a data handler.
|
int |
hashCode() |
static <K,V> KafkaConsumer<K,V> |
newInstance(KafkaConsumer arg) |
static <K,V> KafkaConsumer<K,V> |
newInstance(KafkaConsumer arg,
io.vertx.lang.rx.TypeArg<K> __typeArg_K,
io.vertx.lang.rx.TypeArg<V> __typeArg_V) |
void |
offsetsForTimes(TopicPartition topicPartition,
Long timestamp)
Look up the offset for the given partition by timestamp.
|
void |
offsetsForTimes(TopicPartition topicPartition,
Long timestamp,
Handler<AsyncResult<OffsetAndTimestamp>> handler)
Look up the offset for the given partition by timestamp.
|
KafkaConsumer<K,V> |
partitionsAssignedHandler(Handler<Set<TopicPartition>> handler)
Set the handler called when topic partitions are assigned to the consumer
|
KafkaConsumer<K,V> |
partitionsFor(String topic)
Get metadata about the partitions for a given topic.
|
KafkaConsumer<K,V> |
partitionsFor(String topic,
Handler<AsyncResult<List<PartitionInfo>>> handler)
Get metadata about the partitions for a given topic.
|
KafkaConsumer<K,V> |
partitionsRevokedHandler(Handler<Set<TopicPartition>> handler)
Set the handler called when topic partitions are revoked from the consumer
|
KafkaConsumer<K,V> |
pause()
Pause the ReadStream: it sets the buffer in fetch mode and clears the actual demand. |
KafkaConsumer<K,V> |
pause(Set<TopicPartition> topicPartitions)
Suspend fetching from the requested partitions.
|
KafkaConsumer<K,V> |
pause(Set<TopicPartition> topicPartitions,
Handler<AsyncResult<Void>> completionHandler)
Suspend fetching from the requested partitions.
|
KafkaConsumer<K,V> |
pause(TopicPartition topicPartition)
Suspend fetching from the requested partition.
|
KafkaConsumer<K,V> |
pause(TopicPartition topicPartition,
Handler<AsyncResult<Void>> completionHandler)
Suspend fetching from the requested partition.
|
void |
paused()
Get the set of partitions that were previously paused by a call to pause(Set).
|
void |
paused(Handler<AsyncResult<Set<TopicPartition>>> handler)
Get the set of partitions that were previously paused by a call to pause(Set).
|
Pipe<KafkaConsumerRecord<K,V>> |
pipe()
Pause this stream and return a Pipe to transfer the elements of this stream to a destination WriteStream.
|
void |
pipeTo(WriteStream<KafkaConsumerRecord<K,V>> dst)
Pipe this
ReadStream to the WriteStream . |
void |
pipeTo(WriteStream<KafkaConsumerRecord<K,V>> dst,
Handler<AsyncResult<Void>> handler)
Pipe this
ReadStream to the WriteStream . |
void |
poll(java.time.Duration timeout)
Executes a poll for getting messages from Kafka.
|
void |
poll(java.time.Duration timeout,
Handler<AsyncResult<KafkaConsumerRecords<K,V>>> handler)
Executes a poll for getting messages from Kafka.
|
KafkaConsumer<K,V> |
pollTimeout(java.time.Duration timeout)
Sets the poll timeout for the underlying native Kafka Consumer.
|
void |
position(TopicPartition partition)
Get the offset of the next record that will be fetched (if a record with that offset exists).
|
void |
position(TopicPartition partition,
Handler<AsyncResult<Long>> handler)
Get the offset of the next record that will be fetched (if a record with that offset exists).
|
KafkaConsumer<K,V> |
resume()
Resume reading and set the buffer in flowing mode. |
KafkaConsumer<K,V> |
resume(Set<TopicPartition> topicPartitions)
Resume specified partitions which have been paused with pause.
|
KafkaConsumer<K,V> |
resume(Set<TopicPartition> topicPartitions,
Handler<AsyncResult<Void>> completionHandler)
Resume specified partitions which have been paused with pause.
|
KafkaConsumer<K,V> |
resume(TopicPartition topicPartition)
Resume the specified partition which has been paused with pause.
|
KafkaConsumer<K,V> |
resume(TopicPartition topicPartition,
Handler<AsyncResult<Void>> completionHandler)
Resume the specified partition which has been paused with pause.
|
Completable |
rxAssign(Set<TopicPartition> topicPartitions)
Manually assign a list of partitions to this consumer.
|
Completable |
rxAssign(TopicPartition topicPartition)
Manually assign a partition to this consumer.
|
Single<Set<TopicPartition>> |
rxAssignment()
Get the set of partitions currently assigned to this consumer.
|
Single<Long> |
rxBeginningOffsets(TopicPartition topicPartition)
Get the first offset for the given partition.
|
Completable |
rxClose()
Close the consumer
|
Completable |
rxCommit()
Commit current offsets for all the subscribed topics and partitions.
|
Single<OffsetAndMetadata> |
rxCommitted(TopicPartition topicPartition)
Get the last committed offset for the given partition (whether the commit happened by this process or another).
|
Single<Long> |
rxEndOffsets(TopicPartition topicPartition)
Get the last offset for the given partition.
|
Single<OffsetAndTimestamp> |
rxOffsetsForTimes(TopicPartition topicPartition,
Long timestamp)
Look up the offset for the given partition by timestamp.
|
Single<List<PartitionInfo>> |
rxPartitionsFor(String topic)
Get metadata about the partitions for a given topic.
|
Completable |
rxPause(Set<TopicPartition> topicPartitions)
Suspend fetching from the requested partitions.
|
Completable |
rxPause(TopicPartition topicPartition)
Suspend fetching from the requested partition.
|
Single<Set<TopicPartition>> |
rxPaused()
Get the set of partitions that were previously paused by a call to pause(Set).
|
Completable |
rxPipeTo(WriteStream<KafkaConsumerRecord<K,V>> dst)
Pipe this
ReadStream to the WriteStream . |
Single<KafkaConsumerRecords<K,V>> |
rxPoll(java.time.Duration timeout)
Executes a poll for getting messages from Kafka.
|
Single<Long> |
rxPosition(TopicPartition partition)
Get the offset of the next record that will be fetched (if a record with that offset exists).
|
Completable |
rxResume(Set<TopicPartition> topicPartitions)
Resume specified partitions which have been paused with pause.
|
Completable |
rxResume(TopicPartition topicPartition)
Resume the specified partition which has been paused with pause.
|
Completable |
rxSeek(TopicPartition topicPartition,
long offset)
Overrides the fetch offsets that the consumer will use on the next poll.
|
Completable |
rxSeekToBeginning(Set<TopicPartition> topicPartitions)
Seek to the first offset for each of the given partitions.
|
Completable |
rxSeekToBeginning(TopicPartition topicPartition)
Seek to the first offset for the given partition.
|
Completable |
rxSeekToEnd(Set<TopicPartition> topicPartitions)
Seek to the last offset for each of the given partitions.
|
Completable |
rxSeekToEnd(TopicPartition topicPartition)
Seek to the last offset for the given partition.
|
Completable |
rxSubscribe(Set<String> topics)
Subscribe to the given list of topics to get dynamically assigned partitions.
|
Completable |
rxSubscribe(String topic)
Subscribe to the given topic to get dynamically assigned partitions.
|
Single<Set<String>> |
rxSubscription()
Get the current subscription.
|
Completable |
rxUnsubscribe()
Unsubscribe from topics currently subscribed with subscribe.
|
KafkaConsumer<K,V> |
seek(TopicPartition topicPartition,
long offset)
Overrides the fetch offsets that the consumer will use on the next poll.
|
KafkaConsumer<K,V> |
seek(TopicPartition topicPartition,
long offset,
Handler<AsyncResult<Void>> completionHandler)
Overrides the fetch offsets that the consumer will use on the next poll.
|
KafkaConsumer<K,V> |
seekToBeginning(Set<TopicPartition> topicPartitions)
Seek to the first offset for each of the given partitions.
|
KafkaConsumer<K,V> |
seekToBeginning(Set<TopicPartition> topicPartitions,
Handler<AsyncResult<Void>> completionHandler)
Seek to the first offset for each of the given partitions.
|
KafkaConsumer<K,V> |
seekToBeginning(TopicPartition topicPartition)
Seek to the first offset for the given partition.
|
KafkaConsumer<K,V> |
seekToBeginning(TopicPartition topicPartition,
Handler<AsyncResult<Void>> completionHandler)
Seek to the first offset for the given partition.
|
KafkaConsumer<K,V> |
seekToEnd(Set<TopicPartition> topicPartitions)
Seek to the last offset for each of the given partitions.
|
KafkaConsumer<K,V> |
seekToEnd(Set<TopicPartition> topicPartitions,
Handler<AsyncResult<Void>> completionHandler)
Seek to the last offset for each of the given partitions.
|
KafkaConsumer<K,V> |
seekToEnd(TopicPartition topicPartition)
Seek to the last offset for the given partition.
|
KafkaConsumer<K,V> |
seekToEnd(TopicPartition topicPartition,
Handler<AsyncResult<Void>> completionHandler)
Seek to the last offset for the given partition.
|
KafkaConsumer<K,V> |
subscribe(Set<String> topics)
Subscribe to the given list of topics to get dynamically assigned partitions.
|
KafkaConsumer<K,V> |
subscribe(Set<String> topics,
Handler<AsyncResult<Void>> completionHandler)
Subscribe to the given list of topics to get dynamically assigned partitions.
|
KafkaConsumer<K,V> |
subscribe(String topic)
Subscribe to the given topic to get dynamically assigned partitions.
|
KafkaConsumer<K,V> |
subscribe(String topic,
Handler<AsyncResult<Void>> completionHandler)
Subscribe to the given topic to get dynamically assigned partitions.
|
KafkaConsumer<K,V> |
subscription()
Get the current subscription.
|
KafkaConsumer<K,V> |
subscription(Handler<AsyncResult<Set<String>>> handler)
Get the current subscription.
|
Flowable<KafkaConsumerRecord<K,V>> |
toFlowable() |
Observable<KafkaConsumerRecord<K,V>> |
toObservable() |
String |
toString() |
KafkaConsumer<K,V> |
unsubscribe()
Unsubscribe from topics currently subscribed with subscribe.
|
KafkaConsumer<K,V> |
unsubscribe(Handler<AsyncResult<Void>> completionHandler)
Unsubscribe from topics currently subscribed with subscribe.
|
Methods inherited from class java.lang.Object: clone, finalize, getClass, notify, notifyAll, wait, wait, wait
Methods inherited from interface ReadStream: newInstance, newInstance
Methods inherited from interface StreamBase: newInstance
public static final io.vertx.lang.rx.TypeArg<KafkaConsumer> __TYPE_ARG
public final io.vertx.lang.rx.TypeArg<K> __typeArg_0
public final io.vertx.lang.rx.TypeArg<V> __typeArg_1
public KafkaConsumer(KafkaConsumer delegate)
public KafkaConsumer getDelegate()
getDelegate
in interface ReadStream<KafkaConsumerRecord<K,V>>
getDelegate
in interface StreamBase
public Observable<KafkaConsumerRecord<K,V>> toObservable()
toObservable
in interface ReadStream<KafkaConsumerRecord<K,V>>
public Flowable<KafkaConsumerRecord<K,V>> toFlowable()
toFlowable
in interface ReadStream<KafkaConsumerRecord<K,V>>
public Pipe<KafkaConsumerRecord<K,V>> pipe()
Pause this stream and return a Pipe to transfer the elements of this stream to a destination WriteStream.
pipe
in interface ReadStream<KafkaConsumerRecord<K,V>>
public void pipeTo(WriteStream<KafkaConsumerRecord<K,V>> dst, Handler<AsyncResult<Void>> handler)
Pipe this ReadStream to the WriteStream.
Elements emitted by this stream will be written to the write stream until this stream ends or fails.
Once this stream has ended or failed, the write stream will be ended and the handler will be called with the result.
pipeTo
in interface ReadStream<KafkaConsumerRecord<K,V>>
dst
- the destination write stream
handler
-
public void pipeTo(WriteStream<KafkaConsumerRecord<K,V>> dst)
Pipe this ReadStream to the WriteStream.
Elements emitted by this stream will be written to the write stream until this stream ends or fails.
Once this stream has ended or failed, the write stream will be ended and the handler will be called with the result.
pipeTo
in interface ReadStream<KafkaConsumerRecord<K,V>>
dst
- the destination write stream
public Completable rxPipeTo(WriteStream<KafkaConsumerRecord<K,V>> dst)
Pipe this ReadStream to the WriteStream.
Elements emitted by this stream will be written to the write stream until this stream ends or fails.
Once this stream has ended or failed, the write stream will be ended and the handler will be called with the result.
rxPipeTo
in interface ReadStream<KafkaConsumerRecord<K,V>>
dst
- the destination write stream
public static <K,V> KafkaConsumer<K,V> create(Vertx vertx, Map<String,String> config)
vertx
- Vert.x instance to use
config
- Kafka consumer configuration
public static <K,V> KafkaConsumer<K,V> create(Vertx vertx, Map<String,String> config, Class<K> keyType, Class<V> valueType)
vertx
- Vert.x instance to use
config
- Kafka consumer configuration
keyType
- class type for the key deserialization
valueType
- class type for the value deserialization
public static <K,V> KafkaConsumer<K,V> create(Vertx vertx, KafkaClientOptions options)
vertx
- Vert.x instance to use
options
- Kafka consumer options
public static <K,V> KafkaConsumer<K,V> create(Vertx vertx, KafkaClientOptions options, Class<K> keyType, Class<V> valueType)
vertx
- Vert.x instance to use
options
- Kafka consumer options
keyType
- class type for the key deserialization
valueType
- class type for the value deserialization
public KafkaConsumer<K,V> exceptionHandler(Handler<Throwable> handler)
Set an exception handler on the read stream.
exceptionHandler
in interface ReadStream<KafkaConsumerRecord<K,V>>
exceptionHandler
in interface StreamBase
handler
- the exception handler
public KafkaConsumer<K,V> handler(Handler<KafkaConsumerRecord<K,V>> handler)
Set a data handler. As data is read, the handler will be called with the data.
handler
in interface ReadStream<KafkaConsumerRecord<K,V>>
public KafkaConsumer<K,V> pause()
Pause the ReadStream: it sets the buffer in fetch mode and clears the actual demand.
While it's paused, no data will be sent to the data handler.
pause
in interface ReadStream<KafkaConsumerRecord<K,V>>
public KafkaConsumer<K,V> resume()
Resume reading and set the buffer in flowing mode.
If the ReadStream has been paused, reading will recommence on it.
resume
in interface ReadStream<KafkaConsumerRecord<K,V>>
public KafkaConsumer<K,V> fetch(long amount)
Fetch the specified amount of elements. If the ReadStream has been paused, reading will recommence with the specified amount of items, otherwise the specified amount will be added to the current stream demand.
fetch
in interface ReadStream<KafkaConsumerRecord<K,V>>
public KafkaConsumer<K,V> endHandler(Handler<Void> endHandler)
Set an end handler. Once the stream has ended, and there is no more data to be read, this handler will be called.
endHandler
in interface ReadStream<KafkaConsumerRecord<K,V>>
public long demand()
Returns the current demand. If the stream is in flowing mode the demand is Long.MAX_VALUE; in fetch mode it is the number of elements still to be delivered, or 0 if paused.
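For example, the flow-control methods above can be combined to pull records in explicit batches (a sketch; `consumer` and `vertx` are assumed from the earlier example, and the batch size and pacing are arbitrary):

```java
// Switch to fetch mode: no records are delivered until fetch() is called.
consumer.pause();
consumer.handler(record -> System.out.println("got " + record.value()));

// Ask for at most 10 records now, then 10 more every second.
consumer.fetch(10);
vertx.setPeriodic(1000, timerId -> consumer.fetch(10));
```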
public KafkaConsumer<K,V> subscribe(String topic, Handler<AsyncResult<Void>> completionHandler)
Due to internal buffering of messages, when changing the subscribed topic the old topic may remain in effect (as observed by the record handler) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new topic.
topic
- topic to subscribe to
completionHandler
- handler called on operation completed
public KafkaConsumer<K,V> subscribe(String topic)
Due to internal buffering of messages, when changing the subscribed topic the old topic may remain in effect (as observed by the record handler) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new topic.
topic
- topic to subscribe to
public Completable rxSubscribe(String topic)
Due to internal buffering of messages, when changing the subscribed topic the old topic may remain in effect (as observed by the record handler) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new topic.
topic
- topic to subscribe to
public KafkaConsumer<K,V> subscribe(Set<String> topics, Handler<AsyncResult<Void>> completionHandler)
Due to internal buffering of messages, when changing the subscribed topics the old set of topics may remain in effect (as observed by the record handler) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new set of topics.
topics
- topics to subscribe to
completionHandler
- handler called on operation completed
public KafkaConsumer<K,V> subscribe(Set<String> topics)
Due to internal buffering of messages, when changing the subscribed topics the old set of topics may remain in effect (as observed by the record handler) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new set of topics.
topics
- topics to subscribe to
public Completable rxSubscribe(Set<String> topics)
Due to internal buffering of messages, when changing the subscribed topics the old set of topics may remain in effect (as observed by the record handler) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new set of topics.
topics
- topics to subscribe to
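A sketch of subscribing to several topics at once (topic names are placeholders; `consumer` as created in the earlier example):

```java
Set<String> topics = new HashSet<>();
topics.add("topic1");
topics.add("topic2");
topics.add("topic3");

// Callback style
consumer.subscribe(topics, ar -> {
  if (ar.succeeded()) {
    System.out.println("subscribed to " + topics);
  } else {
    System.err.println("could not subscribe: " + ar.cause().getMessage());
  }
});

// Or the RxJava variant
consumer.rxSubscribe(topics).subscribe(
    () -> System.out.println("subscribed"),
    err -> System.err.println("could not subscribe: " + err.getMessage()));
```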
public KafkaConsumer<K,V> assign(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)
Due to internal buffering of messages, when reassigning the old partition may remain in effect (as observed by the record handler) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new partition.
topicPartition
- the partition to assign
completionHandler
- handler called on operation completed
public KafkaConsumer<K,V> assign(TopicPartition topicPartition)
Due to internal buffering of messages, when reassigning the old partition may remain in effect (as observed by the record handler) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new partition.
topicPartition
- the partition to assign
public Completable rxAssign(TopicPartition topicPartition)
Due to internal buffering of messages, when reassigning the old partition may remain in effect (as observed by the record handler) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new partition.
topicPartition
- the partition to assign
public KafkaConsumer<K,V> assign(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)
Due to internal buffering of messages, when reassigning the old set of partitions may remain in effect (as observed by the record handler) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new set of partitions.
topicPartitions
- the partitions to assign
completionHandler
- handler called on operation completed
public KafkaConsumer<K,V> assign(Set<TopicPartition> topicPartitions)
Due to internal buffering of messages, when reassigning the old set of partitions may remain in effect (as observed by the record handler) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new set of partitions.
topicPartitions
- the partitions to assign
public Completable rxAssign(Set<TopicPartition> topicPartitions)
Due to internal buffering of messages, when reassigning the old set of partitions may remain in effect (as observed by the record handler) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new set of partitions.
topicPartitions
- the partitions to assign
public KafkaConsumer<K,V> assignment(Handler<AsyncResult<Set<TopicPartition>>> handler)
handler
- handler called on operation completed
public KafkaConsumer<K,V> assignment()
public Single<Set<TopicPartition>> rxAssignment()
public KafkaConsumer<K,V> unsubscribe(Handler<AsyncResult<Void>> completionHandler)
completionHandler
- handler called on operation completed
public KafkaConsumer<K,V> unsubscribe()
public Completable rxUnsubscribe()
public KafkaConsumer<K,V> subscription(Handler<AsyncResult<Set<String>>> handler)
handler
- handler called on operation completed
public KafkaConsumer<K,V> subscription()
public Single<Set<String>> rxSubscription()
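A sketch of manual partition assignment, using the TopicPartition data object from io.vertx.kafka.client.common (topic and partition numbers are placeholders; `consumer` as created earlier):

```java
Set<TopicPartition> partitions = new HashSet<>();
partitions.add(new TopicPartition().setTopic("my_topic").setPartition(0));
partitions.add(new TopicPartition().setTopic("my_topic").setPartition(1));

consumer.rxAssign(partitions)
    // Once assigned, ask the consumer which partitions it currently owns.
    .andThen(consumer.rxAssignment())
    .subscribe(
        assigned -> System.out.println("assigned: " + assigned),
        err -> System.err.println("assign failed: " + err.getMessage()));
```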
public KafkaConsumer<K,V> pause(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)
Due to internal buffering of messages, the record handler will continue to observe messages from the given topicPartition until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will not see messages from the given topicPartition.
topicPartition
- topic partition from which to suspend fetching
completionHandler
- handler called on operation completed
public KafkaConsumer<K,V> pause(TopicPartition topicPartition)
Due to internal buffering of messages, the record handler will continue to observe messages from the given topicPartition until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will not see messages from the given topicPartition.
topicPartition
- topic partition from which to suspend fetching
public Completable rxPause(TopicPartition topicPartition)
Due to internal buffering of messages, the record handler will continue to observe messages from the given topicPartition until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will not see messages from the given topicPartition.
topicPartition
- topic partition from which to suspend fetching
public KafkaConsumer<K,V> pause(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)
Due to internal buffering of messages, the record handler will continue to observe messages from the given topicPartitions until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will not see messages from the given topicPartitions.
topicPartitions
- topic partitions from which to suspend fetching
completionHandler
- handler called on operation completed
public KafkaConsumer<K,V> pause(Set<TopicPartition> topicPartitions)
Due to internal buffering of messages, the record handler will continue to observe messages from the given topicPartitions until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will not see messages from the given topicPartitions.
topicPartitions
- topic partitions from which to suspend fetching
public Completable rxPause(Set<TopicPartition> topicPartitions)
Due to internal buffering of messages, the record handler will continue to observe messages from the given topicPartitions until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will not see messages from the given topicPartitions.
topicPartitions
- topic partitions from which to suspend fetching
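A sketch of suspending and later resuming fetching from a single partition (the five-second delay is arbitrary; `consumer` and `vertx` as created earlier):

```java
TopicPartition tp = new TopicPartition().setTopic("my_topic").setPartition(0);

// Stop fetching from this partition; records from other partitions keep flowing.
consumer.rxPause(tp).subscribe(
    () -> System.out.println("paused " + tp.getTopic() + "/" + tp.getPartition()),
    err -> System.err.println("pause failed: " + err.getMessage()));

// Later, resume fetching from it.
vertx.setTimer(5000, id -> consumer.rxResume(tp).subscribe());
```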
public void paused(Handler<AsyncResult<Set<TopicPartition>>> handler)
handler
- handler called on operation completed
public void paused()
public Single<Set<TopicPartition>> rxPaused()
public KafkaConsumer<K,V> resume(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)
topicPartition
- topic partition from which to resume fetching
completionHandler
- handler called on operation completed
public KafkaConsumer<K,V> resume(TopicPartition topicPartition)
topicPartition
- topic partition from which to resume fetching
public Completable rxResume(TopicPartition topicPartition)
topicPartition
- topic partition from which to resume fetching
public KafkaConsumer<K,V> resume(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)
topicPartitions
- topic partitions from which to resume fetching
completionHandler
- handler called on operation completed
public KafkaConsumer<K,V> resume(Set<TopicPartition> topicPartitions)
topicPartitions
- topic partitions from which to resume fetching
public Completable rxResume(Set<TopicPartition> topicPartitions)
topicPartitions
- topic partitions from which to resume fetching
public KafkaConsumer<K,V> partitionsRevokedHandler(Handler<Set<TopicPartition>> handler)
handler
- handler called on revoked topic partitions
public KafkaConsumer<K,V> partitionsAssignedHandler(Handler<Set<TopicPartition>> handler)
handler
- handler called on assigned topic partitions
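A sketch of observing partition rebalances while using subscribe (`consumer` as created earlier):

```java
consumer.partitionsAssignedHandler(parts -> {
  System.out.println("partitions assigned:");
  parts.forEach(tp -> System.out.println("  " + tp.getTopic() + "/" + tp.getPartition()));
});

consumer.partitionsRevokedHandler(parts -> {
  System.out.println("partitions revoked:");
  parts.forEach(tp -> System.out.println("  " + tp.getTopic() + "/" + tp.getPartition()));
});

consumer.rxSubscribe("my_topic").subscribe();
```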
public KafkaConsumer<K,V> seek(TopicPartition topicPartition, long offset, Handler<AsyncResult<Void>> completionHandler)
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.
topicPartition
- topic partition for which to seek
offset
- offset to seek to inside the topic partition
completionHandler
- handler called on operation completed
public KafkaConsumer<K,V> seek(TopicPartition topicPartition, long offset)
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.
topicPartition
- topic partition for which to seek
offset
- offset to seek to inside the topic partition
public Completable rxSeek(TopicPartition topicPartition, long offset)
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.
topicPartition
- topic partition for which to seek
offset
- offset to seek to inside the topic partition
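A sketch of repositioning the consumer on a specific partition (the offset value is arbitrary; `consumer` as created earlier):

```java
TopicPartition tp = new TopicPartition().setTopic("my_topic").setPartition(0);

// Re-read from offset 10 on the next poll; already-buffered records may still be delivered first.
consumer.rxSeek(tp, 10).subscribe(
    () -> System.out.println("seek done"),
    err -> System.err.println("seek failed: " + err.getMessage()));
```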
public KafkaConsumer<K,V> seekToBeginning(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.
topicPartition
- topic partition for which to seek
completionHandler
- handler called on operation completed
public KafkaConsumer<K,V> seekToBeginning(TopicPartition topicPartition)
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.
topicPartition
- topic partition for which to seek
public Completable rxSeekToBeginning(TopicPartition topicPartition)
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.
topicPartition
- topic partition for which to seek
public KafkaConsumer<K,V> seekToBeginning(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.
topicPartitions
- topic partitions for which to seek
completionHandler
- handler called on operation completed
public KafkaConsumer<K,V> seekToBeginning(Set<TopicPartition> topicPartitions)
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.
topicPartitions
- topic partitions for which to seek
public Completable rxSeekToBeginning(Set<TopicPartition> topicPartitions)
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.
topicPartitions
- topic partitions for which to seek
public KafkaConsumer<K,V> seekToEnd(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.
topicPartition
- topic partition for which to seek
completionHandler
- handler called on operation completed
public KafkaConsumer<K,V> seekToEnd(TopicPartition topicPartition)
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.
topicPartition
- topic partition for which to seek
public Completable rxSeekToEnd(TopicPartition topicPartition)
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.
topicPartition
- topic partition for which to seek
public KafkaConsumer<K,V> seekToEnd(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.
topicPartitions
- topic partitions for which to seek
completionHandler
- handler called on operation completed
public KafkaConsumer<K,V> seekToEnd(Set<TopicPartition> topicPartitions)
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.
topicPartitions
- topic partitions for which to seek
public Completable rxSeekToEnd(Set<TopicPartition> topicPartitions)
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the batchHandler(Handler) will only see messages consistent with the new offset.
topicPartitions
- topic partitions for which to seek
public void commit(Handler<AsyncResult<Void>> completionHandler)
completionHandler
- handler called on operation completed
public void commit()
public Completable rxCommit()
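When automatic commit is disabled (for example enable.auto.commit=false in the configuration), offsets can be committed explicitly, for instance after processing each record (a sketch; `process` is a hypothetical method):

```java
consumer.handler(record -> {
  process(record); // hypothetical per-record processing
  consumer.rxCommit().subscribe(
      () -> System.out.println("offsets committed"),
      err -> System.err.println("commit failed: " + err.getMessage()));
});
```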
public void committed(TopicPartition topicPartition, Handler<AsyncResult<OffsetAndMetadata>> handler)
topicPartition
- topic partition for getting last committed offset
handler
- handler called on operation completed
public void committed(TopicPartition topicPartition)
topicPartition
- topic partition for getting last committed offset
public Single<OffsetAndMetadata> rxCommitted(TopicPartition topicPartition)
topicPartition
- topic partition for getting last committed offset
public KafkaConsumer<K,V> partitionsFor(String topic, Handler<AsyncResult<List<PartitionInfo>>> handler)
topic
- the topic for which to get partition info
handler
- handler called on operation completed
public KafkaConsumer<K,V> partitionsFor(String topic)
topic
- the topic for which to get partition info
public Single<List<PartitionInfo>> rxPartitionsFor(String topic)
topic
- the topic for which to get partition info
public KafkaConsumer<K,V> batchHandler(Handler<KafkaConsumerRecords<K,V>> handler)
Set the handler to be used when batches of messages are fetched from the Kafka server. It is generally better to process records individually using the record handler.
handler
- handler called when batches of messages are fetched
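A sketch of combining the batch handler with the per-record handler (`consumer` as created earlier):

```java
// Called once for each batch fetched from Kafka; individual records still go to the record handler.
consumer.batchHandler(records ->
    System.out.println("fetched a batch of " + records.size() + " records"));

consumer.handler(record -> System.out.println("record: " + record.value()));
```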
public void close(Handler<AsyncResult<Void>> completionHandler)
completionHandler
- handler called on operation completed
public void close()
public Completable rxClose()
public void position(TopicPartition partition, Handler<AsyncResult<Long>> handler)
partition
- The partition to get the position for
handler
- handler called on operation completed
public void position(TopicPartition partition)
partition
- The partition to get the position for
public Single<Long> rxPosition(TopicPartition partition)
partition
- The partition to get the position for
public void offsetsForTimes(TopicPartition topicPartition, Long timestamp, Handler<AsyncResult<OffsetAndTimestamp>> handler)
topicPartition
- TopicPartition to query.
timestamp
- Timestamp to be used in the query.
handler
- handler called on operation completed
public void offsetsForTimes(TopicPartition topicPartition, Long timestamp)
topicPartition
- TopicPartition to query.
timestamp
- Timestamp to be used in the query.
public Single<OffsetAndTimestamp> rxOffsetsForTimes(TopicPartition topicPartition, Long timestamp)
topicPartition
- TopicPartition to query.
timestamp
- Timestamp to be used in the query.
public void beginningOffsets(TopicPartition topicPartition, Handler<AsyncResult<Long>> handler)
topicPartition
- the partition to get the earliest offset.
handler
- handler called on operation completed. Returns the earliest available offset for the given partition
public void beginningOffsets(TopicPartition topicPartition)
topicPartition
- the partition to get the earliest offset.
public Single<Long> rxBeginningOffsets(TopicPartition topicPartition)
topicPartition
- the partition to get the earliest offset.
public void endOffsets(TopicPartition topicPartition, Handler<AsyncResult<Long>> handler)
topicPartition
- the partition to get the end offset.
handler
- handler called on operation completed. The end offset for the given partition.
public void endOffsets(TopicPartition topicPartition)
topicPartition
- the partition to get the end offset.
public Single<Long> rxEndOffsets(TopicPartition topicPartition)
topicPartition
- the partition to get the end offset.
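A sketch of querying offsets for a partition (the topic, partition, and one-minute look-back are placeholders; `consumer` as created earlier):

```java
TopicPartition tp = new TopicPartition().setTopic("my_topic").setPartition(0);

consumer.rxBeginningOffsets(tp).subscribe(first -> System.out.println("beginning offset: " + first));
consumer.rxEndOffsets(tp).subscribe(end -> System.out.println("end offset: " + end));
consumer.rxPosition(tp).subscribe(pos -> System.out.println("next offset to fetch: " + pos));

// Offset of the first record whose timestamp is at or after one minute ago.
consumer.rxOffsetsForTimes(tp, System.currentTimeMillis() - 60_000)
    .subscribe(ot -> System.out.println("offset: " + ot.getOffset() + " timestamp: " + ot.getTimestamp()));
```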
public static <K,V> KafkaConsumer<K,V> create(Vertx vertx, org.apache.kafka.clients.consumer.Consumer<K,V> consumer)
vertx
- Vert.x instance to use
consumer
- the Kafka consumer to wrap
public static <K,V> KafkaConsumer<K,V> create(Vertx vertx, org.apache.kafka.clients.consumer.Consumer<K,V> consumer, KafkaClientOptions options)
vertx
- Vert.x instance to use
consumer
- the Kafka consumer to wrap
options
- options used only for tracing settings
public KafkaConsumer<K,V> pollTimeout(java.time.Duration timeout)
timeout
- the time spent waiting in poll if data is not available in the buffer. If 0, returns immediately with any records that are currently available in the native Kafka consumer's buffer, else returns empty. Must not be negative.
public void poll(java.time.Duration timeout, Handler<AsyncResult<KafkaConsumerRecords<K,V>>> handler)
timeout
- the maximum time to block (must not be greater than Long.MAX_VALUE milliseconds)
handler
- handler called after the poll with the batch of records (can be empty)
public void poll(java.time.Duration timeout)
timeout
- the maximum time to block (must not be greater than Long.MAX_VALUE milliseconds)
public Single<KafkaConsumerRecords<K,V>> rxPoll(java.time.Duration timeout)
timeout
- the maximum time to block (must not be greater than Long.MAX_VALUE milliseconds)
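A sketch of explicit polling, typically used instead of the push-style record handler once the consumer has subscribed or been assigned partitions (timeouts are arbitrary; `consumer` as created earlier):

```java
consumer.pollTimeout(java.time.Duration.ofMillis(100));

consumer.rxPoll(java.time.Duration.ofSeconds(1)).subscribe(
    records -> {
      for (int i = 0; i < records.size(); i++) {
        System.out.println("value: " + records.recordAt(i).value());
      }
    },
    err -> System.err.println("poll failed: " + err.getMessage()));
```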
public static <K,V> KafkaConsumer<K,V> newInstance(KafkaConsumer arg)
public static <K,V> KafkaConsumer<K,V> newInstance(KafkaConsumer arg, io.vertx.lang.rx.TypeArg<K> __typeArg_K, io.vertx.lang.rx.TypeArg<V> __typeArg_V)