Interface KafkaConsumer<K,V>
-
- All Superinterfaces:
ReadStream<KafkaConsumerRecord<K,V>>, StreamBase
public interface KafkaConsumer<K,V> extends ReadStream<KafkaConsumerRecord<K,V>>
Vert.x Kafka consumer.
You receive Kafka records by providing a handler(Handler). As messages arrive, the handler is called with the records.
pause() and resume() provide global control over reading records from the consumer.
pause(Set) and resume(Set) provide finer-grained control over reading records from specific topic partitions; these are Kafka-specific operations.
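The flow described above can be sketched as follows. This is a minimal sketch: the broker address, group id, and topic name are placeholder assumptions, not values from this documentation.

```java
import io.vertx.core.Vertx;
import io.vertx.kafka.client.consumer.KafkaConsumer;

import java.util.HashMap;
import java.util.Map;

public class ConsumeSketch {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Placeholder configuration: adjust broker address and group id as needed
    Map<String, String> config = new HashMap<>();
    config.put("bootstrap.servers", "localhost:9092");
    config.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    config.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    config.put("group.id", "my-group");
    config.put("auto.offset.reset", "earliest");

    KafkaConsumer<String, String> consumer = KafkaConsumer.create(vertx, config);

    // Called once per record as messages arrive
    consumer.handler(record ->
      System.out.println("partition=" + record.partition() +
        " offset=" + record.offset() + " value=" + record.value()));

    consumer.subscribe("my-topic");
  }
}
```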
-
-
Method Summary
Modifier and Type / Method / Description

Future<Void> assign(TopicPartition topicPartition)
    Manually assign a partition to this consumer.
Future<Void> assign(Set<TopicPartition> topicPartitions)
    Manually assign a list of partitions to this consumer.
Future<Set<TopicPartition>> assignment()
    Get the set of partitions currently assigned to this consumer.
KafkaReadStream<K,V> asStream()
    Get the underlying KafkaReadStream instance.
KafkaConsumer<K,V> batchHandler(Handler<KafkaConsumerRecords<K,V>> handler)
    Set the handler to be used when batches of messages are fetched from the Kafka server.
Future<Long> beginningOffsets(TopicPartition topicPartition)
    Get the first offset for the given partition.
Future<Map<TopicPartition,Long>> beginningOffsets(Set<TopicPartition> topicPartitions)
    Get the first offset for the given partitions.
Future<Void> close()
    Close the consumer.
Future<Void> commit()
    Commit current offsets for all the subscribed topics and partitions.
Future<Map<TopicPartition,OffsetAndMetadata>> commit(Map<TopicPartition,OffsetAndMetadata> offsets)
    Commit the specified offsets for the specified list of topics and partitions to Kafka.
Future<OffsetAndMetadata> committed(TopicPartition topicPartition)
    Get the last committed offset for the given partition (whether the commit happened by this process or another).
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, KafkaClientOptions options)
    Create a new KafkaConsumer instance.
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, KafkaClientOptions options, Class<K> keyType, Class<V> valueType)
    Create a new KafkaConsumer instance.
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, KafkaClientOptions options, org.apache.kafka.common.serialization.Deserializer<K> keyDeserializer, org.apache.kafka.common.serialization.Deserializer<V> valueDeserializer)
    Create a new KafkaConsumer instance.
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, Map<String,String> config)
    Create a new KafkaConsumer instance.
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, Map<String,String> config, Class<K> keyType, Class<V> valueType)
    Create a new KafkaConsumer instance.
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, Map<String,String> config, org.apache.kafka.common.serialization.Deserializer<K> keyDeserializer, org.apache.kafka.common.serialization.Deserializer<V> valueDeserializer)
    Create a new KafkaConsumer instance.
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, Properties config)
    Create a new KafkaConsumer instance.
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, Properties config, Class<K> keyType, Class<V> valueType)
    Create a new KafkaConsumer instance.
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, Properties config, org.apache.kafka.common.serialization.Deserializer<K> keyDeserializer, org.apache.kafka.common.serialization.Deserializer<V> valueDeserializer)
    Create a new KafkaConsumer instance.
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, org.apache.kafka.clients.consumer.Consumer<K,V> consumer)
    Create a new KafkaConsumer instance from a native Consumer.
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, org.apache.kafka.clients.consumer.Consumer<K,V> consumer, KafkaClientOptions options)
    Create a new KafkaConsumer instance from a native Consumer.
long demand()
    Returns the current demand.
KafkaConsumer<K,V> endHandler(Handler<Void> endHandler)
    Set an end handler.
Future<Long> endOffsets(TopicPartition topicPartition)
    Get the last offset for the given partition.
Future<Map<TopicPartition,Long>> endOffsets(Set<TopicPartition> topicPartitions)
    Get the last offset for the given partitions.
KafkaConsumer<K,V> exceptionHandler(Handler<Throwable> handler)
    Set an exception handler on the read stream.
KafkaConsumer<K,V> fetch(long amount)
    Fetch the specified amount of elements.
KafkaConsumer<K,V> handler(Handler<KafkaConsumerRecord<K,V>> handler)
    Set a data handler.
Future<Map<String,List<PartitionInfo>>> listTopics()
    Get metadata about partitions for all topics that the user is authorized to view.
Future<OffsetAndTimestamp> offsetsForTimes(TopicPartition topicPartition, Long timestamp)
    Look up the offset for the given partition by timestamp.
Future<Map<TopicPartition,OffsetAndTimestamp>> offsetsForTimes(Map<TopicPartition,Long> topicPartitionTimestamps)
    Look up the offsets for the given partitions by timestamp.
KafkaConsumer<K,V> partitionsAssignedHandler(Handler<Set<TopicPartition>> handler)
    Set the handler called when topic partitions are assigned to the consumer.
Future<List<PartitionInfo>> partitionsFor(String topic)
    Get metadata about the partitions for a given topic.
KafkaConsumer<K,V> partitionsRevokedHandler(Handler<Set<TopicPartition>> handler)
    Set the handler called when topic partitions are revoked from the consumer.
KafkaConsumer<K,V> pause()
    Pause the ReadStream; it sets the buffer in fetch mode and clears the actual demand.
Future<Void> pause(TopicPartition topicPartition)
    Suspend fetching from the requested partition.
Future<Void> pause(Set<TopicPartition> topicPartitions)
    Suspend fetching from the requested partitions.
Future<Set<TopicPartition>> paused()
    Get the set of partitions that were previously paused by a call to pause(Set).
Future<KafkaConsumerRecords<K,V>> poll(java.time.Duration timeout)
    Executes a poll for getting messages from Kafka.
KafkaConsumer<K,V> pollTimeout(java.time.Duration timeout)
    Sets the poll timeout for the underlying native Kafka Consumer.
Future<Long> position(TopicPartition partition)
    Get the offset of the next record that will be fetched (if a record with that offset exists).
KafkaConsumer<K,V> resume()
    Resume reading, and sets the buffer in flowing mode.
Future<Void> resume(TopicPartition topicPartition)
    Resume the specified partition which has been paused with pause.
Future<Void> resume(Set<TopicPartition> topicPartitions)
    Resume the specified partitions which have been paused with pause.
Future<Void> seek(TopicPartition topicPartition, long offset)
    Overrides the fetch offsets that the consumer will use on the next poll.
Future<Void> seek(TopicPartition topicPartition, OffsetAndMetadata offsetAndMetadata)
    Overrides the fetch offsets that the consumer will use on the next poll.
Future<Void> seekToBeginning(TopicPartition topicPartition)
    Seek to the first offset for the given partition.
Future<Void> seekToBeginning(Set<TopicPartition> topicPartitions)
    Seek to the first offset for each of the given partitions.
Future<Void> seekToEnd(TopicPartition topicPartition)
    Seek to the last offset for the given partition.
Future<Void> seekToEnd(Set<TopicPartition> topicPartitions)
    Seek to the last offset for each of the given partitions.
Future<Void> subscribe(String topic)
    Subscribe to the given topic to get dynamically assigned partitions.
Future<Void> subscribe(Pattern pattern)
    Subscribe to all topics matching the specified pattern to get dynamically assigned partitions.
Future<Void> subscribe(Set<String> topics)
    Subscribe to the given list of topics to get dynamically assigned partitions.
Future<Set<String>> subscription()
    Get the current subscription.
Future<Void> unsubscribe()
    Unsubscribe from topics currently subscribed with subscribe.
org.apache.kafka.clients.consumer.Consumer<K,V> unwrap()
-
Methods inherited from interface io.vertx.core.streams.ReadStream
collect, pipe, pipeTo
-
-
-
-
Method Detail
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, org.apache.kafka.clients.consumer.Consumer<K,V> consumer)
Create a new KafkaConsumer instance from a native Consumer.
- Parameters:
vertx - Vert.x instance to use
consumer - the Kafka consumer to wrap
- Returns:
- an instance of the KafkaConsumer
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, org.apache.kafka.clients.consumer.Consumer<K,V> consumer, KafkaClientOptions options)
Create a new KafkaConsumer instance from a native Consumer.
- Parameters:
vertx - Vert.x instance to use
consumer - the Kafka consumer to wrap
options - options used only for tracing settings
- Returns:
- an instance of the KafkaConsumer
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, Map<String,String> config)
Create a new KafkaConsumer instance.
- Parameters:
vertx - Vert.x instance to use
config - Kafka consumer configuration
- Returns:
- an instance of the KafkaConsumer
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, Map<String,String> config, Class<K> keyType, Class<V> valueType)
Create a new KafkaConsumer instance.
- Parameters:
vertx - Vert.x instance to use
config - Kafka consumer configuration
keyType - class type for the key deserialization
valueType - class type for the value deserialization
- Returns:
- an instance of the KafkaConsumer
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, Map<String,String> config, org.apache.kafka.common.serialization.Deserializer<K> keyDeserializer, org.apache.kafka.common.serialization.Deserializer<V> valueDeserializer)
Create a new KafkaConsumer instance.
- Parameters:
vertx - Vert.x instance to use
config - Kafka consumer configuration
keyDeserializer - key deserializer
valueDeserializer - value deserializer
- Returns:
- an instance of the KafkaConsumer
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, KafkaClientOptions options)
Create a new KafkaConsumer instance.
- Parameters:
vertx - Vert.x instance to use
options - Kafka consumer options
- Returns:
- an instance of the KafkaConsumer
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, KafkaClientOptions options, Class<K> keyType, Class<V> valueType)
Create a new KafkaConsumer instance.
- Parameters:
vertx - Vert.x instance to use
options - Kafka consumer options
keyType - class type for the key deserialization
valueType - class type for the value deserialization
- Returns:
- an instance of the KafkaConsumer
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, KafkaClientOptions options, org.apache.kafka.common.serialization.Deserializer<K> keyDeserializer, org.apache.kafka.common.serialization.Deserializer<V> valueDeserializer)
Create a new KafkaConsumer instance.
- Parameters:
vertx - Vert.x instance to use
options - Kafka consumer options
keyDeserializer - key deserializer
valueDeserializer - value deserializer
- Returns:
- an instance of the KafkaConsumer
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, Properties config)
Create a new KafkaConsumer instance.
- Parameters:
vertx - Vert.x instance to use
config - Kafka consumer configuration
- Returns:
- an instance of the KafkaConsumer
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, Properties config, Class<K> keyType, Class<V> valueType)
Create a new KafkaConsumer instance.
- Parameters:
vertx - Vert.x instance to use
config - Kafka consumer configuration
keyType - class type for the key deserialization
valueType - class type for the value deserialization
- Returns:
- an instance of the KafkaConsumer
-
create
static <K,V> KafkaConsumer<K,V> create(Vertx vertx, Properties config, org.apache.kafka.common.serialization.Deserializer<K> keyDeserializer, org.apache.kafka.common.serialization.Deserializer<V> valueDeserializer)
Create a new KafkaConsumer instance.
- Parameters:
vertx - Vert.x instance to use
config - Kafka consumer configuration
keyDeserializer - key deserializer
valueDeserializer - value deserializer
- Returns:
- an instance of the KafkaConsumer
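The Deserializer-based create variant can be sketched as follows; passing deserializer instances means key.deserializer/value.deserializer need not appear in the config. The broker address and group id are placeholder assumptions.

```java
import io.vertx.core.Vertx;
import io.vertx.kafka.client.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.util.HashMap;
import java.util.Map;

public class CreateSketch {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Placeholder configuration; deserializers are supplied as instances below
    Map<String, String> config = new HashMap<>();
    config.put("bootstrap.servers", "localhost:9092");
    config.put("group.id", "my-group");

    KafkaConsumer<String, String> consumer =
      KafkaConsumer.create(vertx, config, new StringDeserializer(), new StringDeserializer());
  }
}
```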
-
exceptionHandler
KafkaConsumer<K,V> exceptionHandler(Handler<Throwable> handler)
Description copied from interface: ReadStream
Set an exception handler on the read stream.
- Specified by:
exceptionHandler in interface ReadStream<K>
- Specified by:
exceptionHandler in interface StreamBase
- Parameters:
handler - the exception handler
- Returns:
- a reference to this, so the API can be used fluently
-
handler
KafkaConsumer<K,V> handler(Handler<KafkaConsumerRecord<K,V>> handler)
Description copied from interface: ReadStream
Set a data handler. As data is read, the handler will be called with the data.
- Specified by:
handler in interface ReadStream<K>
- Returns:
- a reference to this, so the API can be used fluently
-
pause
KafkaConsumer<K,V> pause()
Description copied from interface: ReadStream
Pause the ReadStream; it sets the buffer in fetch mode and clears the actual demand. While it's paused, no data will be sent to the data handler.
- Specified by:
pause in interface ReadStream<K>
- Returns:
- a reference to this, so the API can be used fluently
-
resume
KafkaConsumer<K,V> resume()
Description copied from interface: ReadStream
Resume reading, and sets the buffer in flowing mode. If the ReadStream has been paused, reading will recommence on it.
- Specified by:
resume in interface ReadStream<K>
- Returns:
- a reference to this, so the API can be used fluently
-
fetch
KafkaConsumer<K,V> fetch(long amount)
Description copied from interface: ReadStream
Fetch the specified amount of elements. If the ReadStream has been paused, reading will recommence with the specified amount of items; otherwise the specified amount will be added to the current stream demand.
- Specified by:
fetch in interface ReadStream<K>
- Returns:
- a reference to this, so the API can be used fluently
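The pause/fetch pair gives pull-style flow control over the record stream. A minimal sketch, assuming an already-created KafkaConsumer<String, String>:

```java
import io.vertx.kafka.client.consumer.KafkaConsumer;

public class FlowControlSketch {
  // Pause the stream first, then pull records on demand with fetch(n)
  static void consumeWithBackPressure(KafkaConsumer<String, String> consumer) {
    consumer.pause();
    consumer.handler(record -> {
      System.out.println("handling " + record.value());
      consumer.fetch(1);  // request one more record once this one is handled
    });
    consumer.fetch(5);    // initial demand: at most five records are delivered
  }
}
```

Because the stream starts paused, no records reach the handler until the first fetch call sets a demand.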
-
endHandler
KafkaConsumer<K,V> endHandler(Handler<Void> endHandler)
Description copied from interface: ReadStream
Set an end handler. Once the stream has ended, and there is no more data to be read, this handler will be called.
- Specified by:
endHandler in interface ReadStream<K>
- Returns:
- a reference to this, so the API can be used fluently
-
demand
long demand()
Returns the current demand.
- If the stream is in flowing mode, will return Long.MAX_VALUE.
- If the stream is in fetch mode, will return the current number of elements still to be delivered, or 0 if paused.
- Returns:
- current demand
-
subscribe
Future<Void> subscribe(String topic)
Subscribe to the given topic to get dynamically assigned partitions.
Due to internal buffering of messages, when changing the subscribed topic the old topic may remain in effect (as observed by the handler(Handler) record handler) until some time after the returned future completes. In contrast, once the returned future completes the batchHandler(Handler) will only see messages consistent with the new topic.
- Parameters:
topic - topic to subscribe to
- Returns:
- a Future completed with the operation result
-
subscribe
Future<Void> subscribe(Set<String> topics)
Subscribe to the given list of topics to get dynamically assigned partitions.
Due to internal buffering of messages, when changing the subscribed topics the old set of topics may remain in effect (as observed by the handler(Handler) record handler) until some time after the returned future completes. In contrast, once the returned future completes the batchHandler(Handler) will only see messages consistent with the new set of topics.
- Parameters:
topics - topics to subscribe to
- Returns:
- a Future completed with the operation result
-
subscribe
Future<Void> subscribe(Pattern pattern)
Subscribe to all topics matching the specified pattern to get dynamically assigned partitions.
Due to internal buffering of messages, when changing the subscribed topics the old set of topics may remain in effect (as observed by the handler(Handler) record handler) until some time after the returned future completes. In contrast, once the returned future completes the batchHandler(Handler) will only see messages consistent with the new set of topics.
- Parameters:
pattern - Pattern to subscribe to
- Returns:
- a Future completed with the operation result
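Pattern subscription can be sketched as follows; the pattern itself is a placeholder assumption:

```java
import io.vertx.kafka.client.consumer.KafkaConsumer;

import java.util.regex.Pattern;

public class SubscribeSketch {
  // Subscribe to every topic matching a pattern; the returned Future
  // reports whether the subscription request succeeded
  static void subscribeAll(KafkaConsumer<String, String> consumer) {
    consumer.subscribe(Pattern.compile("metrics\\..*"))  // placeholder pattern
      .onSuccess(v -> System.out.println("subscribed"))
      .onFailure(err -> System.err.println("subscription failed: " + err.getMessage()));
  }
}
```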
-
assign
Future<Void> assign(TopicPartition topicPartition)
Manually assign a partition to this consumer.
Due to internal buffering of messages, when reassigning the old partition may remain in effect (as observed by the handler(Handler) record handler) until some time after the returned future completes. In contrast, once the returned future completes the batchHandler(Handler) will only see messages consistent with the new partition.
- Parameters:
topicPartition - the partition to be assigned
- Returns:
- a Future completed with the operation result
-
assign
Future<Void> assign(Set<TopicPartition> topicPartitions)
Manually assign a list of partitions to this consumer.
Due to internal buffering of messages, when reassigning the old set of partitions may remain in effect (as observed by the handler(Handler) record handler) until some time after the returned future completes. In contrast, once the returned future completes the batchHandler(Handler) will only see messages consistent with the new set of partitions.
- Parameters:
topicPartitions - the partitions to be assigned
- Returns:
- a Future completed with the operation result
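Manual assignment bypasses consumer-group rebalancing. A sketch, with topic name and partition numbers as placeholder assumptions:

```java
import io.vertx.kafka.client.common.TopicPartition;
import io.vertx.kafka.client.consumer.KafkaConsumer;

import java.util.Set;

public class AssignSketch {
  // Assign two partitions directly, then read back the current assignment
  static void assignPartitions(KafkaConsumer<String, String> consumer) {
    Set<TopicPartition> partitions = Set.of(
      new TopicPartition("my-topic", 0),   // placeholder topic/partitions
      new TopicPartition("my-topic", 1));
    consumer.assign(partitions)
      .compose(v -> consumer.assignment())
      .onSuccess(assigned -> System.out.println("assigned: " + assigned));
  }
}
```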
-
assignment
Future<Set<TopicPartition>> assignment()
Get the set of partitions currently assigned to this consumer.
- Returns:
- a future notified on operation completed
-
listTopics
Future<Map<String,List<PartitionInfo>>> listTopics()
Get metadata about partitions for all topics that the user is authorized to view.
- Returns:
- a future notified on operation completed
-
unsubscribe
Future<Void> unsubscribe()
Unsubscribe from topics currently subscribed with subscribe.
- Returns:
- a Future completed with the operation result
-
subscription
Future<Set<String>> subscription()
Get the current subscription.
- Returns:
- a future notified on operation completed
-
pause
Future<Void> pause(TopicPartition topicPartition)
Suspend fetching from the requested partition.
Due to internal buffering of messages, the record handler will continue to observe messages from the given topicPartition until some time after the returned future completes. In contrast, once the returned future completes the batchHandler(Handler) will not see messages from the given topicPartition.
- Parameters:
topicPartition - topic partition from which to suspend fetching
- Returns:
- a Future completed with the operation result
-
pause
Future<Void> pause(Set<TopicPartition> topicPartitions)
Suspend fetching from the requested partitions.
Due to internal buffering of messages, the record handler will continue to observe messages from the given topicPartitions until some time after the returned future completes. In contrast, once the returned future completes the batchHandler(Handler) will not see messages from the given topicPartitions.
- Parameters:
topicPartitions - topic partitions from which to suspend fetching
- Returns:
- a Future completed with the operation result
-
paused
Future<Set<TopicPartition>> paused()
Get the set of partitions that were previously paused by a call to pause(Set).
- Returns:
- a future notified on operation completed
-
resume
Future<Void> resume(TopicPartition topicPartition)
Resume the specified partition which has been paused with pause.
- Parameters:
topicPartition - topic partition from which to resume fetching
- Returns:
- a Future completed with the operation result
-
resume
Future<Void> resume(Set<TopicPartition> topicPartitions)
Resume the specified partitions which have been paused with pause.
- Parameters:
topicPartitions - topic partitions from which to resume fetching
- Returns:
- a Future completed with the operation result
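Partition-level pause and resume can be sketched as follows, with the topic and partition as placeholder assumptions:

```java
import io.vertx.kafka.client.common.TopicPartition;
import io.vertx.kafka.client.consumer.KafkaConsumer;

public class PartitionPauseSketch {
  // Suspend one partition while the others keep flowing, then resume it
  static void pauseOnePartition(KafkaConsumer<String, String> consumer) {
    TopicPartition tp = new TopicPartition("my-topic", 0);  // placeholder
    consumer.pause(tp)
      .onSuccess(v -> System.out.println("partition 0 paused"));
    // ... later, e.g. once downstream pressure eases:
    consumer.resume(tp)
      .onSuccess(v -> System.out.println("partition 0 resumed"));
  }
}
```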
-
partitionsRevokedHandler
KafkaConsumer<K,V> partitionsRevokedHandler(Handler<Set<TopicPartition>> handler)
Set the handler called when topic partitions are revoked from the consumer.
- Parameters:
handler - handler called on revoked topic partitions
- Returns:
- current KafkaConsumer instance
-
partitionsAssignedHandler
KafkaConsumer<K,V> partitionsAssignedHandler(Handler<Set<TopicPartition>> handler)
Set the handler called when topic partitions are assigned to the consumer.
- Parameters:
handler - handler called on assigned topic partitions
- Returns:
- current KafkaConsumer instance
-
seek
Future<Void> seek(TopicPartition topicPartition, long offset)
Overrides the fetch offsets that the consumer will use on the next poll.
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the returned future completes. In contrast, once the returned future completes the batchHandler(Handler) will only see messages consistent with the new offset.
- Parameters:
topicPartition - topic partition for which to seek
offset - offset to seek to inside the topic partition
- Returns:
- a Future completed with the operation result
-
seek
Future<Void> seek(TopicPartition topicPartition, OffsetAndMetadata offsetAndMetadata)
Overrides the fetch offsets that the consumer will use on the next poll.
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the returned future completes. In contrast, once the returned future completes the batchHandler(Handler) will only see messages consistent with the new offset.
- Parameters:
topicPartition - topic partition for which to seek
offsetAndMetadata - offset to seek to inside the topic partition
- Returns:
- a Future completed with the operation result
-
seekToBeginning
Future<Void> seekToBeginning(TopicPartition topicPartition)
Seek to the first offset for the given partition.
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the returned future completes. In contrast, once the returned future completes the batchHandler(Handler) will only see messages consistent with the new offset.
- Parameters:
topicPartition - topic partition for which to seek
- Returns:
- a Future completed with the operation result
-
seekToBeginning
Future<Void> seekToBeginning(Set<TopicPartition> topicPartitions)
Seek to the first offset for each of the given partitions.
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the returned future completes. In contrast, once the returned future completes the batchHandler(Handler) will only see messages consistent with the new offset.
- Parameters:
topicPartitions - topic partitions for which to seek
- Returns:
- a Future completed with the operation result
-
seekToEnd
Future<Void> seekToEnd(TopicPartition topicPartition)
Seek to the last offset for the given partition.
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the returned future completes. In contrast, once the returned future completes the batchHandler(Handler) will only see messages consistent with the new offset.
- Parameters:
topicPartition - topic partition for which to seek
- Returns:
- a Future completed with the operation result
-
seekToEnd
Future<Void> seekToEnd(Set<TopicPartition> topicPartitions)
Seek to the last offset for each of the given partitions.
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the returned future completes. In contrast, once the returned future completes the batchHandler(Handler) will only see messages consistent with the new offset.
- Parameters:
topicPartitions - topic partitions for which to seek
- Returns:
- a Future completed with the operation result
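Seeking can be sketched as follows; the topic, partition, and target offset are placeholder assumptions:

```java
import io.vertx.kafka.client.common.TopicPartition;
import io.vertx.kafka.client.consumer.KafkaConsumer;

public class SeekSketch {
  // Reposition a partition to an absolute offset; the next fetch starts there
  static void reposition(KafkaConsumer<String, String> consumer) {
    TopicPartition tp = new TopicPartition("my-topic", 0);  // placeholder
    consumer.seek(tp, 10L)                                  // placeholder offset
      .onSuccess(v -> System.out.println("next poll starts at offset 10"));
    // seekToBeginning(tp) / seekToEnd(tp) take the same partition argument
  }
}
```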
-
commit
Future<Void> commit()
Commit current offsets for all the subscribed topics and partitions.
-
commit
Future<Map<TopicPartition,OffsetAndMetadata>> commit(Map<TopicPartition,OffsetAndMetadata> offsets)
Commit the specified offsets for the specified list of topics and partitions to Kafka.
- Parameters:
offsets - offsets list to commit
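Committing after processing gives at-least-once delivery. A minimal sketch using the no-argument commit():

```java
import io.vertx.kafka.client.consumer.KafkaConsumer;

public class CommitSketch {
  // Process each record, then commit the current offsets
  static void processAndCommit(KafkaConsumer<String, String> consumer) {
    consumer.handler(record -> {
      System.out.println("processing " + record.value());
      consumer.commit()
        .onFailure(err -> System.err.println("commit failed: " + err.getMessage()));
    });
  }
}
```

Committing once per record is expensive in practice; real applications typically commit periodically or per batch.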
-
committed
Future<OffsetAndMetadata> committed(TopicPartition topicPartition)
Get the last committed offset for the given partition (whether the commit happened by this process or another).
- Parameters:
topicPartition - topic partition for which to get the last committed offset
- Returns:
- a future notified on operation completed
-
partitionsFor
Future<List<PartitionInfo>> partitionsFor(String topic)
Get metadata about the partitions for a given topic.
- Parameters:
topic - the topic for which to get partitions info
- Returns:
- a future notified on operation completed
-
batchHandler
KafkaConsumer<K,V> batchHandler(Handler<KafkaConsumerRecords<K,V>> handler)
Set the handler to be used when batches of messages are fetched from the Kafka server. Batch handlers need to take care not to block the event loop when dealing with large batches. It is better to process records individually using the record handler.
- Parameters:
handler - handler called when batches of messages are fetched
- Returns:
- current KafkaConsumer instance
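A batch handler sketch; it observes whole fetched batches while the record handler still processes individual records:

```java
import io.vertx.kafka.client.consumer.KafkaConsumer;

public class BatchSketch {
  // Log batch sizes; keep the per-record work in the record handler
  static void watchBatches(KafkaConsumer<String, String> consumer) {
    consumer.batchHandler(records -> {
      System.out.println("fetched a batch of " + records.size() + " records");
      for (int i = 0; i < records.size(); i++) {
        System.out.println("  offset " + records.recordAt(i).offset());
      }
    });
    consumer.handler(record -> {
      // per-record processing still happens here
    });
  }
}
```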
-
position
Future<Long> position(TopicPartition partition)
Get the offset of the next record that will be fetched (if a record with that offset exists).
- Parameters:
partition - the partition to get the position for
- Returns:
- a future notified on operation completed
-
offsetsForTimes
Future<Map<TopicPartition,OffsetAndTimestamp>> offsetsForTimes(Map<TopicPartition,Long> topicPartitionTimestamps)
Look up the offsets for the given partitions by timestamp. Note: the result might be empty if no offset can be found for the given timestamp, e.g. when the timestamp refers to the future.
- Parameters:
topicPartitionTimestamps - a map with pairs of (TopicPartition, timestamp)
- Returns:
- a future notified on operation completed
-
offsetsForTimes
Future<OffsetAndTimestamp> offsetsForTimes(TopicPartition topicPartition, Long timestamp)
Look up the offset for the given partition by timestamp. Note: the result might be null if no offset can be found for the given timestamp, e.g. when the timestamp refers to the future.
- Parameters:
topicPartition - TopicPartition to query
timestamp - timestamp to be used in the query
- Returns:
- a future notified on operation completed
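A timestamp lookup sketch, with the topic/partition as a placeholder assumption; note the null check for timestamps with no matching offset:

```java
import io.vertx.kafka.client.common.TopicPartition;
import io.vertx.kafka.client.consumer.KafkaConsumer;

public class OffsetsForTimesSketch {
  // Find the first offset written at or after a point in time (one hour ago)
  static void offsetOneHourAgo(KafkaConsumer<String, String> consumer) {
    TopicPartition tp = new TopicPartition("my-topic", 0);  // placeholder
    long oneHourAgo = System.currentTimeMillis() - 3_600_000L;
    consumer.offsetsForTimes(tp, oneHourAgo).onSuccess(oat -> {
      if (oat != null) {
        System.out.println("offset " + oat.getOffset() +
          " has timestamp " + oat.getTimestamp());
      } else {
        System.out.println("no offset found for that timestamp");
      }
    });
  }
}
```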
-
beginningOffsets
Future<Map<TopicPartition,Long>> beginningOffsets(Set<TopicPartition> topicPartitions)
Get the first offset for the given partitions.
- Parameters:
topicPartitions - the partitions to get the earliest offsets for
- Returns:
- a future notified on operation completed
-
beginningOffsets
Future<Long> beginningOffsets(TopicPartition topicPartition)
Get the first offset for the given partition.
- Parameters:
topicPartition - the partition to get the earliest offset for
- Returns:
- a future notified on operation completed
-
endOffsets
Future<Map<TopicPartition,Long>> endOffsets(Set<TopicPartition> topicPartitions)
Get the last offset for the given partitions. The last offset of a partition is the offset of the upcoming message, i.e. the offset of the last available message + 1.
- Parameters:
topicPartitions - the partitions to get the end offsets for
- Returns:
- a future notified on operation completed
-
endOffsets
Future<Long> endOffsets(TopicPartition topicPartition)
Get the last offset for the given partition. The last offset of a partition is the offset of the upcoming message, i.e. the offset of the last available message + 1.
- Parameters:
topicPartition - the partition to get the end offset for
- Returns:
- a future notified on operation completed
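Combining endOffsets with position gives a rough per-partition lag figure. A sketch, with the topic/partition as a placeholder assumption:

```java
import io.vertx.kafka.client.common.TopicPartition;
import io.vertx.kafka.client.consumer.KafkaConsumer;

public class LagSketch {
  // Lag = end offset (last available + 1) minus the next offset to be fetched
  static void printLag(KafkaConsumer<String, String> consumer) {
    TopicPartition tp = new TopicPartition("my-topic", 0);  // placeholder
    consumer.endOffsets(tp).compose(end ->
      consumer.position(tp).onSuccess(pos ->
        System.out.println("lag for " + tp.getTopic() + "-" +
          tp.getPartition() + ": " + (end - pos))));
  }
}
```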
-
asStream
KafkaReadStream<K,V> asStream()
- Returns:
- the underlying KafkaReadStream instance
-
pollTimeout
KafkaConsumer<K,V> pollTimeout(java.time.Duration timeout)
Sets the poll timeout for the underlying native Kafka Consumer. Defaults to 1000 ms. Setting the timeout to a lower value results in a more 'responsive' client, because it will block for a shorter period if no data is available in the assigned partition, and therefore allows subsequent actions to be executed with a shorter delay. At the same time, the client will poll more frequently and thus will potentially create a higher load on the Kafka broker.
- Parameters:
timeout - the time spent waiting in poll if data is not available in the buffer. If 0, returns immediately with any records currently available in the native Kafka consumer's buffer, else returns empty. Must not be negative.
-
poll
Future<KafkaConsumerRecords<K,V>> poll(java.time.Duration timeout)
Executes a poll for getting messages from Kafka.
- Parameters:
timeout - the maximum time to block (must not be greater than Long.MAX_VALUE milliseconds)
- Returns:
- a future notified on operation completed
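Explicit polling can be sketched as follows, driven by a periodic Vert.x timer instead of a record handler; the polling interval and timeout are placeholder assumptions:

```java
import io.vertx.core.Vertx;
import io.vertx.kafka.client.consumer.KafkaConsumer;

import java.time.Duration;

public class PollSketch {
  // Poll Kafka every 500 ms, blocking each poll for at most 100 ms
  static void pollLoop(Vertx vertx, KafkaConsumer<String, String> consumer) {
    vertx.setPeriodic(500, timerId ->
      consumer.poll(Duration.ofMillis(100)).onSuccess(records -> {
        for (int i = 0; i < records.size(); i++) {
          System.out.println("polled " + records.recordAt(i).value());
        }
      }));
  }
}
```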
-
-