There are a variety of ways to migrate your existing images to Cloudinary, including the Media Library UI, the REST API, Cloudinary's SDKs, and other features.
Using Cloudinary's REST API
Cloudinary's upload API supports a single file upload per API call, but can be called multiple times in parallel. As such, your application can use multiple threads to upload many files in a short time.
We recommend that such multi-threaded operations start by using approximately 10 simultaneous requests, and that you slowly increase the rate if necessary. If you reach account concurrency limits and receive an HTTP 420 error, you should reduce the number of simultaneous requests, and retry the failed request(s).
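The pattern above can be sketched in Ruby as follows. The upload call is a stub standing in for `Cloudinary::Uploader.upload` from the cloudinary gem, and the retry count and backoff are illustrative, not prescribed values:

```ruby
# Hypothetical stand-in for Cloudinary::Uploader.upload; a real call can
# raise on an HTTP 420 concurrency-limit error.
def upload_one(path)
  { "public_id" => File.basename(path, ".*") }
end

CONCURRENCY = 10   # start around 10 simultaneous requests, as recommended
MAX_RETRIES = 3    # illustrative retry budget

def bulk_upload(paths)
  queue = Queue.new
  paths.each { |p| queue << p }
  results = Queue.new

  threads = Array.new(CONCURRENCY) do
    Thread.new do
      loop do
        path = begin
          queue.pop(true)   # non-blocking pop; raises ThreadError when empty
        rescue ThreadError
          break             # no work left for this thread
        end
        attempts = 0
        begin
          results << upload_one(path)
        rescue StandardError
          attempts += 1
          if attempts <= MAX_RETRIES
            sleep(2**attempts)  # back off, then retry the failed request
            retry
          end
        end
      end
    end
  end
  threads.each(&:join)
  Array.new(results.size) { results.pop }
end
```

In a real migration you would replace `upload_one` with the SDK call and lower `CONCURRENCY` if 420 errors appear.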
To speed up the upload process, you can pass a file's existing HTTP/HTTPS, S3, or Google Cloud Storage URL as the "file" parameter of the upload API method instead of sending the actual data. This makes the migration much faster, because Cloudinary retrieves the images directly from the specified location rather than your code downloading each file from the existing storage and then uploading it to our API: https://cloudinary.com/documentation/upload_images#file_source_options
To specify an S3 or Google Cloud Storage URL, please ensure that the bucket can be accessed by Cloudinary and that you've specified which Cloudinary accounts should be allowed to copy images from the bucket. More info can be found in the Upload from a private storage URL (Amazon S3 or Google Cloud) section of the upload API documentation.
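As an illustration, here is a small hypothetical helper that decides what to send as the "file" parameter: remote HTTP(S), S3, or Google Cloud Storage URLs are passed through unchanged so Cloudinary fetches them server-side, while anything else is treated as a local path:

```ruby
# Hypothetical helper: remote HTTP(S)/S3/GCS URLs are passed straight through
# as the "file" parameter so Cloudinary fetches them itself; anything else is
# treated as a local path and opened for a regular upload.
REMOTE_SCHEMES = %r{\A(https?|s3|gs)://}

def file_param_for(source)
  source.match?(REMOTE_SCHEMES) ? source : File.open(source, "rb")
end

# e.g. Cloudinary::Uploader.upload(file_param_for("s3://my-bucket/img.jpg"))
```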
SDK Methods (Bulk Upload with Contextual Metadata)
Cloudinary offers a variety of SDKs that can be used to bulk upload assets.
In the example below, the Ruby SDK is used to bulk upload assets and add contextual metadata to each one. The process is as follows:
- Ruby Environment setup
- Fork from the Cloudinary Ruby-SDK-Quickstart documentation
- Set up Cloudinary configuration in the config.rb file. Credentials can be found on the Cloudinary account dashboard.
- Use the listfilenames.rb file to list the file names and their descriptions from the test.csv file. A hash is used to map each file name to its description.
- Finally, use the bulkupload.rb file to implement multiple threads to bulk upload the assets with their respective contextual metadata.
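The listing and upload steps above can be sketched as follows. The CSV columns and the `:context` key are assumptions based on the quickstart's description, and the real SDK call is left commented out so the sketch stands alone:

```ruby
require "csv"

# Hypothetical test.csv layout: filename,description (the quickstart repo's
# actual columns may differ).
csv_data = <<~CSV
  filename,description
  beach.jpg,Sunset at the beach
  city.png,Downtown skyline
CSV

# Map each file name to its description, as listfilenames.rb does with a hash.
descriptions = CSV.parse(csv_data, headers: true)
                  .to_h { |row| [row["filename"], row["description"]] }

# bulkupload.rb would then upload each file in its own thread, attaching the
# description as contextual metadata via the :context option:
threads = descriptions.map do |filename, caption|
  Thread.new do
    # Cloudinary::Uploader.upload(filename, context: { caption: caption })
  end
end
threads.each(&:join)
```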
If you are using Ruby, Cloudinary's Ruby gem includes a Migrator tool that manages the migration process for you automatically.
Command line interface
The Cloudinary CLI provides an interface to call the API for common use cases. For bulk uploads there are several File Management commands, such as 'migrate' to upload a list of external media URLs, 'upload_dir' to upload a local folder to Cloudinary, or 'sync' to synchronize a local folder with your account.
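Illustrative invocations of the commands mentioned above; the folder, mapping, and file names are placeholders, and the exact arguments should be confirmed with each command's `--help` output:

```shell
# Upload every file in a local folder, preserving its structure:
cld upload_dir ./local_images

# Synchronize a local folder with a folder in your account:
cld sync --push ./local_images my_cloudinary_folder

# Upload a list of external media URLs (one per line in urls.txt);
# check `cld migrate --help` for the exact argument order:
cld migrate my_mapping urls.txt
```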
Using the Media Library UI to upload multiple files
The Media Library upload widget allows selecting multiple files at once, including dragging and dropping folders onto the Media Library window.
The Media Library uses the same upload API methods mentioned above, so it is not rate limited either.
Automatic migration of files from existing storage
Auto-upload remote resources - This feature allows you to link a folder in your Cloudinary account with a corresponding folder on your existing server, or in an existing Amazon S3 or Google Cloud Storage bucket. Images are then uploaded lazily (i.e., the first time each one is requested by an end user).
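The mapping idea can be sketched as follows; the folder name and base URL are illustrative, not real account settings:

```ruby
# Hypothetical auto-upload mapping: a Cloudinary folder mapped to a remote
# base URL. The first delivery request for an asset under the mapped folder
# makes Cloudinary fetch it from the origin; later requests are served from
# the stored copy.
MAPPINGS = { "remote_media" => "https://my-server.com/images/" }

# The origin URL Cloudinary would fetch for a given delivery request:
def origin_for(folder, filename)
  MAPPINGS.fetch(folder) + filename
end

# A delivery request for .../image/upload/remote_media/beach.jpg therefore
# lazily uploads https://my-server.com/images/beach.jpg on first access.
```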
Hi Orly!
I'm using Node.js. How can I upload all the images in a folder?
I'm on a project where the client should be able to dump all the images into a folder.
When uploading directly from the browser, you can set the 'multiple' attribute and allow drag & drop of multiple files. The server-side upload API, on the other hand, requires uploading one file at a time.
Feel free to open a support ticket and share more information regarding your specific use-case so we can advise further.
Is it possible to add Azure Blobs or Storage accounts the same way S3 works?
It's on our road map to support other sources for our auto upload feature.
In the meantime, it's possible to auto upload from any publicly available remote location (i.e., http/https).
This seems a bit like a hole in your otherwise outstanding and unique offering. Your clients might have directories with a LOT of images. I have found that the initial bulk uploading is exceptionally painful.
Ideally one could drag and drop a tgz or zip file into Cloudinary, which then extracts and stores the images. Doesn't seem too hard to do? Accept .tgz and zip files which get extracted?
Secondly, your API should ideally also accept tgz / zip files to extract and store.
Thank you for this suggestion, I will open a feature request on your behalf.
Here is a tutorial for uploading multiple images with Cloudinary and Node.
Thank you very much for sharing this :)