Kafka / KAFKA-5456

Producer fails with NPE if compressed V0 or V1 record is larger than batch size


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.11.0.0
    • Fix Version/s: 0.11.0.0
    • Component/s: producer
    • Labels: None

    Description

      If a record exceeds the producer's configured batch size on send(), the producer fails with an NPE.

      From the mailing list:

      java.lang.NullPointerException
          at org.apache.kafka.common.utils.Utils.notNull(Utils.java:243)
          at org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:219)
          at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:650)
          at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:604)
          at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:97)
          at org.apache.kafka.streams.state.internals.StoreChangeLogger.logChange(StoreChangeLogger.java:59)
          ...
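
      A minimal reproduction sketch, assuming a reachable broker at localhost:9092 and an existing topic named "test-topic" (both placeholders); hitting the compressed V0/V1 code path also requires the broker to be on the older message format. Sending a value larger than batch.size with compression enabled then fails with the NPE above instead of a descriptive error:

      import java.util.Properties;
      import org.apache.kafka.clients.producer.KafkaProducer;
      import org.apache.kafka.clients.producer.ProducerConfig;
      import org.apache.kafka.clients.producer.ProducerRecord;
      import org.apache.kafka.common.serialization.ByteArraySerializer;

      public class Kafka5456Repro {
          public static void main(String[] args) {
              Properties props = new Properties();
              props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
              props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
              props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
              props.put(ProducerConfig.BATCH_SIZE_CONFIG, 1024);         // deliberately small batch size
              props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip"); // compression is needed to hit this path

              try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
                  // Value is much larger than batch.size; with the old message format and
                  // before the fix, send() throws the NullPointerException shown above.
                  byte[] value = new byte[16 * 1024];
                  producer.send(new ProducerRecord<>("test-topic", value));
                  producer.flush();
              }
          }
      }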

      The NPE is not helpful. We should throw a proper exception type with a meaningful message that points the user to the configuration they need to change to fix the problem. We also need to add a test for this.
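
      For example, the error could name the offending configuration directly. A minimal sketch of that idea only; the helper class, method name, and placement are illustrative assumptions and not the actual fix, while RecordTooLargeException and ProducerConfig.BATCH_SIZE_CONFIG are existing Kafka APIs:

      import org.apache.kafka.clients.producer.ProducerConfig;
      import org.apache.kafka.common.errors.RecordTooLargeException;

      // Hypothetical helper for illustration only; not the actual KAFKA-5456 patch.
      final class RecordSizeGuard {
          // Fail with a descriptive, user-actionable error instead of an NPE when a
          // compressed record cannot fit into a batch of the configured size.
          static void ensureFitsInBatch(int estimatedSizeInBytes, int configuredBatchSize) {
              if (estimatedSizeInBytes > configuredBatchSize) {
                  throw new RecordTooLargeException("The compressed record is " + estimatedSizeInBytes
                          + " bytes when serialized, which is larger than the configured "
                          + ProducerConfig.BATCH_SIZE_CONFIG + " of " + configuredBatchSize
                          + " bytes. Consider increasing " + ProducerConfig.BATCH_SIZE_CONFIG + ".");
              }
          }
      }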

      Attachments

        Activity


          People

            Assignee: hachikuji (Jason Gustafson)
            Reporter: mjsax (Matthias J. Sax)
            Votes: 0
            Watchers: 6

            Dates

              Created:
              Updated:
              Resolved:
