Description
This article provides additional details for cases where a source connector fails with the following error:
ERROR [$ConnectorName|task-*] WorkerSourceTask{id=$ConnectorName-$TaskNumber} failed to send record to $TopicName: (org.apache.kafka.connect.runtime.AbstractWorkerSourceTask:418)
org.apache.kafka.common.errors.RecordTooLargeException: The request included a message larger than the max message size the server will accept.
accompanied by one of the following error messages:
- ERROR MESSAGE: "The message is larger than the size the topic is configured for. Please increase the value of the max.message.bytes configuration of the topic to allow for larger messages."
- ERROR MESSAGE: "The message size is larger than the value of the max.request.size configuration."
Applies To
Source Connectors
Cause
Source connectors in Confluent Cloud
Error Message: "The message is larger than the size the topic is configured for. Please increase the value of the max.message.bytes configuration of the topic to allow for larger messages."
This error occurs when the size of the message being sent to the topic exceeds the configured maximum message size (max.message.bytes) for that topic.
Error Message: "The message size is larger than the value of the max.request.size configuration."
This error occurs when the size of the producer request exceeds the max.request.size configuration, which limits the maximum size of a request in bytes.
Source connectors on-prem
The message exceeds the broker-level message.max.bytes setting.
Resolution
Error Message: "The message is larger than the size the topic is configured for. Please increase the value of the max.message.bytes configuration of the topic to allow for larger messages."
Solution: Increase the max.message.bytes configuration for the topic. This can be done through the Confluent Cloud UI or with the Confluent CLI. The default value is 2 MB; it can be increased up to 8 MB on Basic and Standard clusters, and up to 20 MB on Dedicated clusters. Please refer to the Configuration Reference for Topics in Confluent Cloud.
Please note that max.message.bytes can only be changed at the topic level, and the topic value overrides the cluster default for max.message.bytes.
- It can be modified via the CLI:
confluent kafka topic update <topic_name> --cluster <lkc_id> --config max.message.bytes=<value>
- It can also be edited under Edit Settings in the Confluent Cloud UI
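As a sketch of the CLI route, assuming the Confluent CLI is installed and authenticated; the topic name `orders`, cluster ID `lkc-abc123`, and the 8 MB value are placeholders for illustration:

```shell
# Raise the topic-level limit to 8 MB (8388608 bytes).
# Topic name and cluster ID below are hypothetical examples.
confluent kafka topic update orders --cluster lkc-abc123 \
  --config max.message.bytes=8388608

# Verify the effective topic configuration after the change:
confluent kafka topic describe orders --cluster lkc-abc123
```

These are cluster configuration commands, so they require an existing Confluent Cloud cluster to run against.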
Error Message: "The message size is larger than the value of the max.request.size configuration."
Solution: Increase the max.request.size configuration. This setting limits the size of the record batches the producer will send in a single request, to avoid sending huge requests; however, it can be increased as needed.
It can be increased up to 100 MB as required.
Please reach out to Confluent Support for assistance, as they can update and increase the value of max.request.size internally.
- If you want to run a source connector that makes requests larger than 8 MB, you must run the connector on a Dedicated cluster; please refer here for more details.
- NOTE: Please refer to the Cluster Limit Comparisons for a detailed understanding.
Source connectors on-prem
This property can be updated dynamically; this is further explained in How to handle large messages with Kafka.
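As a sketch for self-managed deployments: message.max.bytes can be raised dynamically with the kafka-configs tool, and a self-managed Connect worker can raise max.request.size for a single connector through a producer override. The host names, connector name, and sizes below are hypothetical, and the producer override requires connector.client.config.override.policy=All on the Connect worker:

```shell
# Dynamically raise the cluster-wide default for message.max.bytes
# to 10 MB (10485760 bytes); no broker restart is required.
# Bootstrap address is a placeholder.
kafka-configs --bootstrap-server localhost:9092 \
  --entity-type brokers --entity-default \
  --alter --add-config message.max.bytes=10485760

# Raise max.request.size for one connector's producer via the Connect
# REST API. PUT replaces the whole connector config, so include the full
# configuration; connector name and class here are placeholders.
curl -s -X PUT -H "Content-Type: application/json" \
  http://localhost:8083/connectors/my-source-connector/config \
  -d '{
        "connector.class": "...",
        "producer.override.max.request.size": "10485760"
      }'
```

The per-connector override is usually preferable to raising max.request.size in the worker's shared producer settings, since it scopes the larger requests to the one connector that needs them.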