When implementing chunked upload, please make sure to follow these guidelines:
- You must include the `X-Unique-Upload-Id` header, and make sure it is identical (and unique) for all parts.
- Every part must contain a `Content-Range` header describing the location of the chunk in the overall file (format: 'bytes #start-#end/#total', e.g. 'bytes 0-5999999/22744222'). If the total size is unknown, set it to -1, except for the last chunk.
- Each chunk must be larger than 5MB, except for the last one.
Here's how we implemented this in one of our Ruby libraries:
https://github.com/cloudinary/cloudinary_gem/blob/a738f8c57ef452ea75c924f806fb05e9682749c5/lib/cloudinary/uploader.rb#L96
Attached is a code example that uploads a file using chunked upload. Before running the code, please enter your cloud_name, upload_preset and public_id in the index.js file.
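For reference, a minimal browser-side sketch that follows the guidelines above might look like this. The cloud name, upload preset and unique-ID scheme are placeholders; the attached index.js remains the working example.

```js
// Minimal sketch of a chunked upload from the browser (unsigned preset assumed).
const CLOUD_NAME = '<cloud_name>';        // placeholder
const UPLOAD_PRESET = '<upload_preset>';  // placeholder
const CHUNK_SIZE = 6 * 1000 * 1000;       // each part must be larger than 5MB (except the last)

async function uploadLargeFile(file) {
  const uniqueUploadId = `uqid-${Date.now()}`; // identical for every part
  const url = `https://api.cloudinary.com/v1_1/${CLOUD_NAME}/auto/upload`;

  // Parts are sent sequentially, so the first chunk always arrives first.
  for (let start = 0; start < file.size; start += CHUNK_SIZE) {
    const end = Math.min(start + CHUNK_SIZE, file.size);
    const chunk = file.slice(start, end);

    const formData = new FormData();
    formData.append('file', chunk);
    formData.append('upload_preset', UPLOAD_PRESET);

    const response = await fetch(url, {
      method: 'POST',
      headers: {
        'X-Unique-Upload-Id': uniqueUploadId,
        // slice() end is exclusive, so the inclusive range ends at end - 1
        'Content-Range': `bytes ${start}-${end - 1}/${file.size}`,
      },
      body: formData,
    });
    if (!response.ok) {
      throw new Error(`Chunk ${start}-${end - 1} failed with status ${response.status}`);
    }
    console.log('Part uploaded:', await response.json());
  }
}
```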
Comments
How can I generate the `X-Unique-Upload-Id`?
What's the `Content-Range` format?
If I understand correctly, it should look like this:
`Content-Range: 0-99999/2000000`
`Content-Range: 100000-199999/2000000`
`Content-Range: 200000-299999/2000000`
Right?
And how can I handle errors? It looks like exceptions can happen.
For `X-Unique-Upload-Id` you can use any random/unique string generator, for example a Unix timestamp.
`Content-Range` must contain a preceding "bytes" prefix.
Error messages (bad responses) may arise in case of upload failure. Handling them depends on your use case and the HTTP client you're using.
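For example, a timestamp-based ID and a small helper for the header (both hypothetical; any unique string will do):

```js
// Any unique string works; a timestamp plus a random suffix is one simple option.
const uniqueUploadId = `uqid-${Date.now()}-${Math.random().toString(36).slice(2)}`;

// Note the mandatory "bytes " prefix and the inclusive end offset.
const contentRange = (start, endExclusive, total) =>
  `bytes ${start}-${endExclusive - 1}/${total}`;

contentRange(0, 100000, 2000000);      // "bytes 0-99999/2000000"
contentRange(100000, 200000, 2000000); // "bytes 100000-199999/2000000"
```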
Han, I've just noticed that the content-range header directions were missing the necessary "bytes" prefix. This is now fixed in the above explanation.
Hi Everyone,
I think there is an additional restriction to the ones mentioned above by Nadav:
"First chunk must be first, meaning it must arrive at the server before any other chunks."
Then you can bulk upload the remaining chunks, except for the last one, as mentioned above.
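A rough sketch of that ordering, where uploadChunk is a hypothetical helper that POSTs one part with the X-Unique-Upload-Id and Content-Range headers:

```js
// Sketch only: uploadChunk(chunk) is assumed to send one part with the required headers.
async function uploadInOrder(chunks) {
  const [first, ...rest] = chunks;
  const last = rest.pop();
  await uploadChunk(first);                 // the first chunk must arrive first
  await Promise.all(rest.map(uploadChunk)); // middle chunks can be sent in parallel
  if (last) await uploadChunk(last);        // the last chunk goes last
}
```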
Kind regards,
Mohamed
We are using react-native-fetch-blob:
RNFetchBlob.fetch(
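For context, a chunked part sent with RNFetchBlob.fetch takes roughly this shape; the field names, helper arguments and endpoint variant below are placeholders rather than exact production code:

```js
import RNFetchBlob from 'react-native-fetch-blob';

// Hypothetical helper: sends a single part as multipart form data.
async function uploadPart(cloudName, uploadPreset, uniqueUploadId, chunkBase64, start, end, totalSize) {
  const res = await RNFetchBlob.fetch(
    'POST',
    `https://api.cloudinary.com/v1_1/${cloudName}/auto/upload`,
    {
      'X-Unique-Upload-Id': uniqueUploadId,
      'Content-Range': `bytes ${start}-${end}/${totalSize}`,
      'Content-Type': 'multipart/form-data',
    },
    [
      // when a filename is given, RNFetchBlob expects the data as a base64 string
      { name: 'file', filename: 'chunk.bin', data: chunkBase64 },
      { name: 'upload_preset', data: uploadPreset },
    ]
  );
  return res.json();
}
```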
Hi Geoff,
If you open a ticket at support@cloudinary.com with more information on your request, we can take a look at our logs. Another option would be to examine the network tab while the upload is being performed. If the upload is chunked you'll be able to see multiple requests going through.
I have trouble implementing this from the browser in JavaScript. I got this error:
{ message: "Chunk size doesn't match upload size: 6000000 - 8000037" }
I create my chunks using `initialFileAsBlob.slice(0, 6000000)` to create a chunk of 6MB, then I read the chunk with `chunkToSend = readAsDataUrl(chunk)` and send it (with the signature) and the following headers:
'X-Unique-Upload-Id': 'uniqueid.mp4',
'Content-Range': 'bytes 0-6000000/17500142'
It might be because the range that you specify is off by one: specifying 0-6000000 means 6000001 bytes.
If that doesn't help, can you please open a support ticket at support@cloudinary.com for further investigation.
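As an illustration, deriving the range from the Blob slice itself keeps the header and the uploaded bytes consistent (uploadUrl, uniqueUploadId and any signature/preset fields are placeholders here):

```js
// Append the Blob slice directly to the form data and compute an inclusive end offset.
const start = 0;
const chunk = initialFileAsBlob.slice(start, start + 6000000); // slice() end is exclusive
const end = start + chunk.size - 1;                            // inclusive end, e.g. 5999999

const formData = new FormData();
formData.append('file', chunk); // add your signature/preset fields here as well

await fetch(uploadUrl, {
  method: 'POST',
  headers: {
    'X-Unique-Upload-Id': uniqueUploadId,
    'Content-Range': `bytes ${start}-${end}/${initialFileAsBlob.size}`,
  },
  body: formData,
});
```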
Thank you for the guide!
Does Cloudinary also save them in chunks? E.g., do they get served in chunks from the CDN?
Hi Roni,
The file will be split into chunks while uploading to your Media Library account, and once the last chunk is uploaded, all the chunks will be combined back into a single file. Hence, you will only see one file in your account, and a single file will also be delivered via the CDN when accessed.
Hope this helps.
Hi,
Is there any restriction on the maximum time delay between 2 chunk uploads? I am unable to get the final done:true response even though I'm sending the correct headers (size only in the last request). What conditions need to be met to signal the last part?
Thanks in advance!
Hi Zsolt,
We recommend that all chunks are sent as quickly as possible after the first, but it should be possible for the last chunk to arrive up to 24 hours after the first, because we delete 'orphan' chunks periodically, removing any that are more than 24 hours old.
If you're seeing some other issue with a chunked upload, we generally recommend using one of our SDKs to send the requests, but if you're having any problems with your own implementation, you can contact us directly using the "Submit a request" link here on our support site, and we'll assist directly via a ticket.
Regards,
Stephen