HTTP download keeps retrieving the old file, but why, and how?

We just updated a file on our (‘new’) server.

In an app we have
DeleteUrlCacheEntryW(PChar(url));
OK := UrlDownloadToFile(nil, PChar(url), PChar('x.y'), 0, nil) = 0;
and the URL points to a text file and begins with ‘http’.
For some reason the old version of the file is being downloaded.
If you put the URL into a browser you get the new file.
If you change the app to use ‘https’ you get the new file.
If you then change the app back to use ‘http’, you continue to get the new file. (on ‘this computer’)

Please explain what is going on. Extra points for an explanation of how this can be fixed without changing the app (due to a confounding related issue; after all, it worked on our ‘old’ server).

You could try the calls in a new program and check the return value of
DeleteUrlCacheEntryW(PChar(url));

Return value

Returns TRUE if successful, or FALSE otherwise. To get extended error information, call GetLastError. Possible error values include the following.

ERROR_ACCESS_DENIED: The file is locked or in use. The entry is marked and deleted when the file is unlocked.
ERROR_FILE_NOT_FOUND: The file is not in the cache.
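A minimal sketch of that check in Delphi (assuming the WinInet and UrlMon units; the procedure name and Writeln reporting are just for illustration):

```pascal
uses
  Windows, WinInet, UrlMon, SysUtils;

procedure FetchFresh(const Url, DestFile: string);
var
  Err: DWORD;
begin
  if not DeleteUrlCacheEntryW(PChar(Url)) then
  begin
    Err := GetLastError;
    // ERROR_FILE_NOT_FOUND just means there was no cached copy;
    // ERROR_ACCESS_DENIED means the entry is locked and will only be
    // deleted later, so the next download may still hit the cache.
    Writeln(Format('DeleteUrlCacheEntryW failed: %d (%s)',
      [Err, SysErrorMessage(Err)]));
  end;
  if URLDownloadToFile(nil, PChar(Url), PChar(DestFile), 0, nil) <> S_OK then
    Writeln('URLDownloadToFile failed');
end;
```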

I suspect the caching is occurring in Edge, or maybe even in old Internet Explorer code on the client. Microsoft invented their own rules for caching. Changing the app to use ‘https’ gives a new URL, so a new cache copy; I'm not sure why it also updates the ‘http’ version. Are you sure you did not slightly change the format of the ‘http’ URL?

When you say
“If you then change the app back to use ‘http’, you continue to get the new file. (on ‘this computer’)”
Is the problem fixed (on “this” computer) or do you just have a new cached copy which does not change when the text file changes again?

In the past I think I have had trouble with the actual HTTP server not checking the date on a .txt file and continually delivering an old copy. There may be server settings to adjust this, but I ditched IIS.

OK, so I think I’ve worked it out now, as can be seen from this sequence

ListCache: file not there (FindFirstUrlCacheEntry, FindNextUrlCacheEntry)

Delete+Download (http): Delete returns FALSE, but SysErrorMessage says “The operation completed successfully”

ListCache: entry says “https://x/x/x/x/x” (because the server forced/forces https!?)

Delete (http): returns TRUE

ListCache: https entry still there!

Delete (https): returns TRUE

ListCache: not there

So it’s a bug in the way the Windows cache ‘matches’ files: it seems happy to return a cached file while ignoring the http/https prefix, but it only deletes the entry for the specific scheme, http or https.

Wonder if it will ever get fixed?
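Given that behaviour, a defensive workaround is to delete the cache entry under both scheme variants of the URL before downloading. This is only a sketch (the helper name is made up), assuming Delphi with the WinInet unit:

```pascal
uses
  Windows, WinInet, SysUtils;

// Delete any cached copy under both the http and https forms of the URL,
// since the cache appears to match entries regardless of scheme but
// only deletes the entry for the exact scheme given.
procedure DeleteBothSchemes(const Url: string);
var
  Other: string;
begin
  DeleteUrlCacheEntryW(PChar(Url));
  if Pos('https://', LowerCase(Url)) = 1 then
    Other := 'http://' + Copy(Url, Length('https://') + 1, MaxInt)
  else if Pos('http://', LowerCase(Url)) = 1 then
    Other := 'https://' + Copy(Url, Length('http://') + 1, MaxInt)
  else
    Exit;
  DeleteUrlCacheEntryW(PChar(Other));
end;
```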

You’re not meant to use plain http:// any more. A quick Google search turned up the link below, which suggests appending a random number to the end of the URL, like ?234324. This forces a fresh download every time it is called.
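A sketch of that trick in Delphi (the function name and query-string format are illustrative; note it defeats caching entirely, as discussed below):

```pascal
uses
  SysUtils, UrlMon;

// Append a throwaway query string so the cache never sees the same
// URL twice; every call goes back to the server.
function DownloadBypassingCache(const Url, DestFile: string): Boolean;
var
  BustedUrl: string;
begin
  Randomize;
  BustedUrl := Url + '?' + IntToStr(Random(1000000));
  Result := URLDownloadToFile(nil, PChar(BustedUrl), PChar(DestFile), 0, nil) = S_OK;
end;
```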

Well, it’s historical; that’s why it’s http. Going forward we will change the code to https.

Love the idea of adding a random number; it seems the workaround for one bug is to exploit another. Clever.

But still, if you have the time to add the random number, why not just change it to https?

Actually that link is for a different problem; it’s scary to think that anyone would use it, as it circumvents the use of the cache full stop.

Web developers use the random-number idea on websites sometimes. They set up the server to give long expiry times, and when they change some of the page resources (images, JS, etc.), they generate a new number that they put in the HTML. In this way you get the benefits of caching and can still force refreshes as soon as they are needed.

So you could just store the number you used… and then change it if you need the latest version, allowing you to still benefit from the cache.
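A sketch of that variant, where a stored version number (here just an integer; the function name is made up) is only bumped when you want the latest copy, so all other requests still hit the cache:

```pascal
uses
  SysUtils;

// The URL only changes when CacheVersion is bumped, so the cached copy
// stays valid until you deliberately invalidate it.
function VersionedUrl(const Url: string; CacheVersion: Integer): string;
begin
  Result := Url + '?v=' + IntToStr(CacheVersion);
end;
```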


The query-string idea is really common for cache busting; quite often it will be a hash of the file, so if the file ever changes on the server the hash changes too. We do this with JS and CSS files in Continua.
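A sketch of the content-hash approach, assuming a Delphi version that ships the System.Hash unit (the function name is illustrative):

```pascal
uses
  System.Hash, System.SysUtils;

// Content-addressed cache busting: the query string is a hash of the
// file's bytes, so the URL changes exactly when the file changes and
// the cache stays valid the rest of the time.
function HashedUrl(const Url, LocalFile: string): string;
begin
  Result := Url + '?' + THashMD5.GetHashStringFromFile(LocalFile);
end;
```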