
class KafkaProducer[K, V] extends WriteStream[KafkaProducerRecord[K, V]]

Vert.x Kafka producer.

It provides global control over writing a record.
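A minimal usage sketch, assuming a broker reachable at localhost:9092 and a topic named my-topic (both hypothetical); the create overload taking a Map of Kafka properties and the RecordMetadata getter names are assumptions based on the wider Vert.x Kafka client API, not confirmed by this page:

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import io.vertx.scala.core.Vertx
import io.vertx.scala.kafka.client.producer.{KafkaProducer, KafkaProducerRecord}

object ProducerExample {
  // Plain Kafka client properties; broker address and serializers are assumptions.
  val config: Map[String, String] = Map(
    "bootstrap.servers" -> "localhost:9092",
    "key.serializer" -> "org.apache.kafka.common.serialization.StringSerializer",
    "value.serializer" -> "org.apache.kafka.common.serialization.StringSerializer",
    "acks" -> "1"
  )

  def main(args: Array[String]): Unit = {
    val vertx = Vertx.vertx()
    val producer = KafkaProducer.create[String, String](vertx, config)

    // Write one record and print where it landed.
    val record = KafkaProducerRecord.create[String, String]("my-topic", "my-key", "my-value")
    producer.sendFuture(record).foreach { meta =>
      println(s"written to partition ${meta.getPartition} at offset ${meta.getOffset}")
    }
  }
}
```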

Linear Supertypes

WriteStream[KafkaProducerRecord[K, V]], StreamBase, AnyRef, Any

Instance Constructors

  1. new KafkaProducer(_asJava: AnyRef)(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[K], arg1: scala.reflect.api.JavaUniverse.TypeTag[V])

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def asJava: AnyRef
    Definition Classes
    KafkaProducer → WriteStream → StreamBase
  6. def clone(): AnyRef
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @native() @throws( ... )
  7. def close(timeout: Long, completionHandler: Handler[AsyncResult[Unit]]): Unit

    Close the producer.

    timeout

    timeout to wait for closing

    completionHandler

    handler called on operation completed

  8. def close(completionHandler: Handler[AsyncResult[Unit]]): Unit

    Close the producer.

    completionHandler

    handler called on operation completed

  9. def close(): Unit

    Close the producer

  10. def closeFuture(timeout: Long): Future[Unit]

    Like close but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  11. def closeFuture(): Future[Unit]

    Like close but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  12. def drainHandler(handler: Handler[Unit]): KafkaProducer[K, V]

    Set a drain handler on the stream. If the write queue is full, then the handler will be called when the write queue is ready to accept buffers again. See io.vertx.scala.core.streams.Pump for an example of this being used.

    The stream implementation defines when the drain handler is called; for example, it could be when the queue size has been reduced to maxSize / 2.

    handler

    the handler

    returns

    a reference to this, so the API can be used fluently

    Definition Classes
    KafkaProducer → WriteStream
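The drain-handler contract above can be combined with writeQueueFull for simple back-pressure; a sketch assuming String keys and values and a hypothetical my-topic:

```scala
import io.vertx.scala.kafka.client.producer.{KafkaProducer, KafkaProducerRecord}

object FlowControl {
  val Topic = "my-topic" // hypothetical topic name

  // Write values until the queue fills, then resume from the drain handler.
  def writeAll(producer: KafkaProducer[String, String], values: List[String]): Unit =
    values match {
      case Nil => ()
      case v :: rest =>
        producer.write(KafkaProducerRecord.create[String, String](Topic, v))
        if (producer.writeQueueFull()) {
          // Invoked once the queue drains below the threshold set via setWriteQueueMaxSize.
          producer.drainHandler(_ => writeAll(producer, rest))
        } else {
          writeAll(producer, rest)
        }
    }
}
```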
  13. def end(arg0: Handler[AsyncResult[Unit]]): Unit

    Same as io.vertx.scala.core.streams.WriteStream#end but with a handler called when the operation completes.

    Definition Classes
    KafkaProducer → WriteStream
  14. def end(): Unit

    Ends the stream.

    Once the stream has ended, it cannot be used any more.

    Definition Classes
    KafkaProducer → WriteStream
  15. def end(data: KafkaProducerRecord[K, V], handler: Handler[AsyncResult[Unit]]): Unit

    Same as end but with a handler called when the operation completes.

    Definition Classes
    KafkaProducer → WriteStream
  16. def end(data: KafkaProducerRecord[K, V]): Unit

    Same as io.vertx.scala.core.streams.WriteStream#end but writes some data to the stream before ending.

    data

    the data to write

    Definition Classes
    KafkaProducer → WriteStream
  17. def endFuture(data: KafkaProducerRecord[K, V]): Future[Unit]

    Like end but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

    Definition Classes
    KafkaProducer → WriteStream
  18. def endFuture(): Future[Unit]

    Like end but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

    Definition Classes
    KafkaProducer → WriteStream
  19. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  20. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  21. def exceptionHandler(handler: Handler[Throwable]): KafkaProducer[K, V]

    Set an exception handler on the write stream.

    handler

    the exception handler

    returns

    a reference to this, so the API can be used fluently

    Definition Classes
    KafkaProducer → WriteStream → StreamBase
  22. def finalize(): Unit
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  23. def flush(completionHandler: Handler[Unit]): KafkaProducer[K, V]

    Invoking this method makes all buffered records immediately available to write.

    completionHandler

    handler called on operation completed

    returns

    current KafkaProducer instance

  24. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  25. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  26. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  27. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  28. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  29. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  30. def partitionsFor(topic: String, handler: Handler[AsyncResult[Buffer[PartitionInfo]]]): KafkaProducer[K, V]

    Get the partition metadata for the given topic.

    topic

    the topic for which to get partition info

    handler

    handler called on operation completed

    returns

    current KafkaProducer instance

  31. def partitionsForFuture(topic: String): Future[Buffer[PartitionInfo]]

    Like partitionsFor but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  32. def send(record: KafkaProducerRecord[K, V], handler: Handler[AsyncResult[RecordMetadata]]): KafkaProducer[K, V]

    Asynchronously write a record to a topic.

    record

    record to write

    handler

    handler called on operation completed

    returns

    current KafkaProducer instance

  33. def send(record: KafkaProducerRecord[K, V]): KafkaProducer[K, V]

    Asynchronously write a record to a topic.

    record

    record to write

    returns

    current KafkaProducer instance

  34. def sendFuture(record: KafkaProducerRecord[K, V]): Future[RecordMetadata]

    Like send but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.
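The Future-returning variants compose with the standard scala.concurrent toolkit; a sketch that sends a batch and waits for all the resulting metadata (topic name and RecordMetadata getters are assumptions):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import io.vertx.scala.kafka.client.producer.{KafkaProducer, KafkaProducerRecord}

object BatchSend {
  val Topic = "my-topic" // hypothetical topic name

  // One Future per record; Future.sequence fails fast if any send fails.
  def sendBatch(producer: KafkaProducer[String, String], values: Seq[String]): Unit = {
    val sends = values.map { v =>
      producer.sendFuture(KafkaProducerRecord.create[String, String](Topic, v))
    }
    // Blocking here is for the demo only; inside a verticle, chain instead of awaiting.
    val metas = Await.result(Future.sequence(sends), 30.seconds)
    metas.foreach(m => println(s"partition ${m.getPartition}, offset ${m.getOffset}"))
  }
}
```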

  35. def setWriteQueueMaxSize(i: Int): KafkaProducer[K, V]

    Set the maximum size of the write queue to maxSize. You will still be able to write to the stream even if there are more than maxSize items in the write queue. This is used as an indicator by classes such as Pump to provide flow control.

    The value is defined by the implementation of the stream, e.g. in bytes for a io.vertx.scala.core.net.NetSocket, the number of io.vertx.scala.core.eventbus.Message for a io.vertx.scala.core.eventbus.MessageProducer, etc.

    maxSize

    the max size of the write stream

    returns

    a reference to this, so the API can be used fluently

    Definition Classes
    KafkaProducer → WriteStream
  36. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  37. def toString(): String
    Definition Classes
    AnyRef → Any
  38. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  39. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  40. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @throws( ... )
  41. def write(data: KafkaProducerRecord[K, V], handler: Handler[AsyncResult[Unit]]): KafkaProducer[K, V]

    Same as write but with a handler called when the operation completes.

    Definition Classes
    KafkaProducer → WriteStream
  42. def write(kafkaProducerRecord: KafkaProducerRecord[K, V]): KafkaProducer[K, V]

    Write some data to the stream. The data is put on an internal write queue, and the write actually happens asynchronously. To avoid running out of memory by putting too much on the write queue, check the io.vertx.scala.core.streams.WriteStream#writeQueueFull method before writing. This is done automatically if using a io.vertx.scala.core.streams.Pump.

    data

    the data to write

    returns

    a reference to this, so the API can be used fluently

    Definition Classes
    KafkaProducer → WriteStream
  43. def writeFuture(data: KafkaProducerRecord[K, V]): Future[Unit]

    Like write but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

    Definition Classes
    KafkaProducer → WriteStream
  44. def writeQueueFull(): Boolean

    This will return true if there are more bytes in the write queue than the value set using io.vertx.scala.core.streams.WriteStream#setWriteQueueMaxSize.

    returns

    true if write queue is full

    Definition Classes
    KafkaProducer → WriteStream
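Putting flush and close together, a hedged shutdown sketch (the 10 000 ms timeout is an arbitrary choice, not a recommended value):

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import io.vertx.scala.kafka.client.producer.KafkaProducer

object Shutdown {
  val CloseTimeoutMs = 10000L // arbitrary bound on how long close may take

  // Push out any buffered records first, then close with a bounded timeout.
  def shutdown(producer: KafkaProducer[String, String]): Unit =
    producer.flush(_ =>
      producer.closeFuture(CloseTimeoutMs).onComplete { _ =>
        println("producer closed")
      }
    )
}
```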

Inherited from WriteStream[KafkaProducerRecord[K, V]]

Inherited from StreamBase

Inherited from AnyRef

Inherited from Any
