Can I purge the cache of all transformed images if I don't know their URLs?

Comments

7 comments

  • Eric Pasos

    Hi Nick,

    When an asset is deleted (destroyed), renamed, or overwritten in your Cloudinary storage, cached copies of it, including both the original and any derived (transformed) versions, can remain on the CDN servers for up to 30 days. To purge those cached copies, you can send an 'invalidation' request to the CDN, for example by passing the invalidate option to the destroy method of the Upload API (shown here with the Python SDK):

    import cloudinary.uploader

    cloudinary.uploader.destroy('sample', invalidate=True)

    Here public_id=sample refers to the original asset. The call above invalidates the CDN's cached copies of that asset and all of its transformed versions automatically, so there is no need to specify the URL of each derived asset. Invalidation usually takes between a few seconds and a few minutes to fully propagate through the CDN (see https://cloudinary.com/documentation/managing_assets#invalidating_cached_media_assets_on_the_cdn).

    Hope this helps, please let me know if you have any further questions.

    Best regards,

    Eric

  • Nick Medrano

    That is exactly what I've been looking for. Thanks!

  • Nick Medrano

    Sorry, Eric, one more question: can I still use the above logic if the images are hosted on Google Cloud Storage?

  • Stephen Doyle

    Hi Nick,

    The example Eric provided requires you to pass the public_id of the image that you want deleted from your Cloudinary account.

    If you uploaded files to Cloudinary from Google Cloud Storage, most likely you did so in one of two ways. If you used our Upload API, the upload response contains the public_id of each file you uploaded, and it's also part of that asset's delivery URL in Cloudinary. If you used auto-upload, the public_id can be determined from the folder name you used in the auto-upload mapping plus the remaining path and filename from your Google Cloud Storage bucket, or it can be retrieved from our Admin API or Search API.
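    For the auto-upload case, the mapping makes the public_id predictable. A minimal sketch, assuming a hypothetical mapping from a Cloudinary folder named 'remote-media' to a Google Cloud Storage bucket prefix (the folder name and paths are illustrative, not from your account):

    ```python
    import os

    def public_id_from_auto_upload(mapped_folder, remote_path):
        # Join the mapped Cloudinary folder with the file's path inside the
        # bucket, dropping the extension, since image public_ids in
        # Cloudinary do not include the file extension.
        path_without_ext, _ = os.path.splitext(remote_path)
        return f"{mapped_folder}/{path_without_ext}"

    # e.g. a file at products/shoes/sneaker.jpg in the mapped bucket:
    print(public_id_from_auto_upload('remote-media', 'products/shoes/sneaker.jpg'))
    # remote-media/products/shoes/sneaker
    ```

    The resulting public_id is what you would pass to the destroy call with invalidate, as in Eric's example.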

    You could also be using our fetch remote image URL feature, in which case the public_id is the full remote URL that you asked us to fetch (i.e. the part after /image/fetch/<transformations>/<version>/ in the URL).
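    To illustrate, here is a small sketch that recovers that public_id from a fetch delivery URL by taking everything from the first http onwards after the /image/fetch/ segment (the example URL and transformation are made up):

    ```python
    def fetch_public_id(delivery_url):
        # Split off everything after /image/fetch/, then take the remote URL,
        # which starts at the first 'http' after any transformation or
        # version segments.
        _, _, rest = delivery_url.partition('/image/fetch/')
        start = rest.find('http')
        if start == -1:
            raise ValueError('not a fetch delivery URL')
        return rest[start:]

    url = 'https://res.cloudinary.com/demo/image/fetch/w_300,c_fill/https://example.com/pics/cat.jpg'
    print(fetch_public_id(url))
    # https://example.com/pics/cat.jpg
    ```

    That remote URL would then be the public_id in the invalidation request; note that fetched assets have the fetch delivery type, so you would likely also need to pass type='fetch' to the destroy call (check the invalidation documentation linked above for details).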
    Regards,

    Stephen

  • Nick Medrano

    @Stephen OK, the latter fetch remote image URL option is probably what I'm going for, because I don't plan to migrate the files from Google Storage to Cloudinary. I just wanted Cloudinary to handle the transformations and caching. Looks like the public_id will be the full remote URL, as you stated, so I'll give this a try when I'm able. If I have issues I'll make a new post for that. Thanks, guys!

  • Nick Medrano

    Hi @Stephen, 

    When looking at the remote fetch option, I'm first just trying to get it working. I copied the demo URL and replaced the 'demo' cloud name with my own cloud name. Unfortunately, it does not work with my cloud name. What am I doing wrong?

    Update: Nevermind, it works. I had to urlencode the URL.

  • Stephen Doyle

    Hi Nick,

    Thanks for the update. You shouldn't typically need to URL-encode the URL (unless it contains query string parameters that the remote server needs in order to return the correct image), but I'm glad it's working for you now.
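    For reference, when the remote URL does contain query parameters, encoding it with the standard library before embedding it in the fetch URL looks something like this (the cloud name and remote URL are placeholders):

    ```python
    from urllib.parse import quote

    remote_url = 'https://example.com/image.jpg?size=large&v=2'
    # Encode every reserved character, including '/', '?' and '&', so the
    # query string survives inside the Cloudinary delivery URL.
    encoded = quote(remote_url, safe='')
    fetch_url = f'https://res.cloudinary.com/mycloud/image/fetch/{encoded}'
    print(fetch_url)
    ```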
    If there's anything else we can help with, please let me know.

    Thanks,

    Stephen

