
Guidelines for implementing chunked upload to Cloudinary

Comments

12 comments

  • Han BaHwan

    How can I issue `X-Unique-Upload-Id`?
    What's Content-Range format?
    If I understand correctly, it would look like this:
    `Content-Range: 0-99999/2000000`
    `Content-Range: 100000-199999/2000000`
    `Content-Range: 200000-299999/2000000`
    Right?

    And how can I handle errors? It looks like exceptions can happen.

  • Nadav Ofir

    For `X-Unique-Upload-Id` you can use any random/unique string generator, for example a Unix timestamp.
    `Content-Range` must include a preceding "bytes", e.g. `Content-Range: bytes 0-99999/2000000`.
    Error responses may arise if an upload fails. Handling them depends on your use case and the HTTP client you're using.
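
    For illustration, a minimal browser-side sketch of how those pieces could fit together (the cloud name, the unsigned preset name, and the 6 MB chunk size are placeholder assumptions, not values from this thread):

    // Minimal sketch of a chunked upload from the browser with fetch().
    async function uploadInChunks(file) {
      const url = 'https://api.cloudinary.com/v1_1/demo/auto/upload'; // placeholder cloud name
      const chunkSize = 6 * 1000 * 1000;            // every chunk except the last must be at least 5 MB
      const uniqueUploadId = `uqid-${Date.now()}`;  // any unique string, e.g. a timestamp

      for (let start = 0; start < file.size; start += chunkSize) {
        const end = Math.min(start + chunkSize, file.size); // exclusive slice end
        const form = new FormData();
        form.append('file', file.slice(start, end));
        form.append('upload_preset', 'my_preset');          // placeholder unsigned preset

        const response = await fetch(url, {
          method: 'POST',
          headers: {
            'X-Unique-Upload-Id': uniqueUploadId,
            // Note the "bytes" prefix; the end index in the header is inclusive.
            'Content-Range': `bytes ${start}-${end - 1}/${file.size}`,
          },
          body: form,
        });
        // A bad response (e.g. a range mismatch) can be handled here.
        if (!response.ok) {
          throw new Error(`Chunk ${start}-${end - 1} failed: ${response.status}`);
        }
      }
    }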

  • Nadav Ofir

    Han, I've just noticed that the Content-Range header instructions were missing the necessary "bytes" prefix. This is now fixed in the explanation above.

  • Mohamed Habashy

    Hi Everyone,

    I think there is an additional restriction to what Nadav mentioned above.

    "The first chunk must be first, meaning it must arrive at the server before any other chunk."

    Then you can bulk-upload the remaining chunks, except the last, as mentioned above.
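
    For illustration, a rough sketch of that ordering, assuming a hypothetical uploadChunk(file, start, end, uniqueUploadId) helper that sends one Content-Range request as in the earlier example:

    // First chunk first, middle chunks in parallel, last chunk last.
    async function uploadOrdered(file, chunkSize, uniqueUploadId) {
      const ranges = [];
      for (let start = 0; start < file.size; start += chunkSize) {
        ranges.push([start, Math.min(start + chunkSize, file.size)]);
      }

      // 1. The first chunk must reach the server before any other chunk.
      await uploadChunk(file, ranges[0][0], ranges[0][1], uniqueUploadId);

      // 2. The chunks in between can then be sent concurrently ("bulk").
      await Promise.all(
        ranges.slice(1, -1).map(([s, e]) => uploadChunk(file, s, e, uniqueUploadId))
      );

      // 3. The last chunk goes out once all the others have completed.
      if (ranges.length > 1) {
        const [s, e] = ranges[ranges.length - 1];
        await uploadChunk(file, s, e, uniqueUploadId);
      }
    }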

    Kind regards,
    Mohamed

  • Geoff Plitt

    We are using react-native-fetch-blob:

    RNFetchBlob.fetch(
      'POST',
      upload_url,
      {
        'Content-Type': 'multipart/form-data',
        'Transfer-Encoding': 'Chunked' // <-- CHUNKED header
      },
      [
        {
          name: 'file',
          filename: 'upload.mp4',
          type: 'video/mp4',
          data: RNFetchBlob.wrap(uri),
        },
        { name: 'timestamp', data: timestamp },
        { name: 'signature', data: signature },
        { name: 'eager', data: 'sp_full_hd_wifi/m3u8' },
        { name: 'eager_async', data: 'true' },
        { name: 'api_key', data: cloudinary_config.api_key }
      ]
    )
     
     
    We got a successful response from Cloudinary. How do we know whether the upload was actually chunked or just a normal upload? (I'm using the header marked above, as per the documentation, but I don't know whether it is actually chunking correctly for Cloudinary.)
     
  • Raya Straus

    Hi Geoff,

    If you open a ticket at support@cloudinary.com with more information on your request, we can take a look at our logs. Another option would be to examine the network tab while the upload is being performed. If the upload is chunked you'll be able to see multiple requests going through. 

  • leon

    I have trouble implementing this from the browser in JavaScript. I got the error:

    { "message": "Chunk size doesn't match upload size: 6000000 - 8000037" }

    I create my chunks using

    initialFileAsBlob.slice(0, 6000000)

    to create a chunk of 6 MB, then I read the chunk with

    chunkToSend = readAsDataUrl(chunk)

    and send it (with a signature) and the following headers:

    'X-Unique-Upload-Id': 'uniqueid.mp4',
    'Content-Range': 'bytes 0-6000000/17500142'

  • Shirly Manor

    It might be because the range you specify is off by one: specifying 0-6000000 means 6000001 bytes, since the end index is inclusive.
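
    A small sketch of how the header could be derived from the slice, using the sizes from the error above (the raw Blob, rather than a base64 data URL, is probably what should go into the form body, since base64 encoding inflates the byte count):

    // blob.slice(start, end) excludes the byte at "end", while the end index
    // in Content-Range is inclusive, so the header ends one byte earlier.
    const total = 17500142;                            // file size from the error above
    const chunkSize = 6000000;
    const start = 0;
    const end = Math.min(start + chunkSize, total);    // exclusive
    const chunk = initialFileAsBlob.slice(start, end); // 6,000,000 bytes
    const headers = {
      'X-Unique-Upload-Id': 'uniqueid.mp4',
      'Content-Range': `bytes ${start}-${end - 1}/${total}`, // "bytes 0-5999999/17500142"
    };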

    If that doesn't help, can you please open a support ticket at support@cloudinary.com for further investigation.

     

  • Roni Yosofov

    Thank you for the guide!
    Does Cloudinary also save them in chunks? E.g. do they get served in chunks from the CDN?

  • Eric Pasos

    Hi Roni,

    The file is split into chunks while uploading to your Media Library account, and once the last chunk is uploaded, all the chunks are put back together into a single file. Hence, you will only see one file in your account, and a single file will also be delivered via the CDN when accessed.

    Hope this helps.

  • Zsolt Siklosi

    Hi,

    Is there any restriction on the maximum time delay between 2 chunk uploads? I am unable to get the final done:true response even though I'm sending the correct headers (size only in the last request). What conditions need to be met to signal the last part?

    Thanks in advance!

  • Stephen Doyle

    Hi Zsolt,

    We recommend that all chunks are sent as quickly as possible after the first, but it should be possible for the last chunk to arrive up to 24 hours after the first, because we delete 'orphan' chunks periodically, removing any that are more than 24 hours old.

    If you're seeing some other issue with a chunked upload, we generally recommend using one of our SDKs to send the requests, but if you're having any problems with your own implementation, you can contact us directly using the "Submit a request" link here on our support site, and we'll assist directly via a ticket.

    Regards,

    Stephen

