I just noticed something about the cache method used in Olympus, and I'm not sure whether it may be a problem on heavily loaded boards.
As far as I can see, this is how it works:
Code:

1: IF get from cache THEN
2:     return cached data
3: ENDIF
4: perform query
5: put data into cache
6: return data
1) visitor A reaches line 1, cache is not found or expired.
2) visitor A performs the query.
3) visitor B reaches line 1, but cache has not been updated by visitor A yet.
4) visitor B performs the same query.
5) visitors C, D, and so on may follow the same road as visitor B.
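The race above is easy to reproduce. A minimal Python sketch (just an illustration; the board itself is PHP, and `expensive_query` is a hypothetical stand-in for the real SQL query):

```python
import threading
import time

cache = {}                  # hypothetical shared cache
query_count = 0             # how many times the "query" actually ran
count_lock = threading.Lock()

def expensive_query():
    global query_count
    with count_lock:
        query_count += 1
    time.sleep(0.1)         # simulate a slow database query
    return "result"

def get_data(key):
    # 1-3: IF get from cache THEN return cached data
    if key in cache:
        return cache[key]
    # 4: perform query (several visitors can reach this point together)
    data = expensive_query()
    # 5: put data into cache
    cache[key] = data
    # 6: return data
    return data

# Visitors A, B and C all arrive before the cache is populated:
visitors = [threading.Thread(target=get_data, args=("forum_index",))
            for _ in range(3)]
for t in visitors:
    t.start()
for t in visitors:
    t.join()

print(query_count)  # 3: every visitor ran the same query
```

All three threads pass the cache check before any of them finishes the query, so the work is done three times instead of once.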
I'm wondering if there's anything that could be done to prevent more than one visitor from triggering the same query (more than one visitor creating the cache file at the same time). Maybe some kind of locking? Has anyone had problems with this caching algorithm?
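One possible shape for that locking, sketched in Python (again just an illustration, not the actual Olympus code; in PHP the lock would more likely be `flock()` on the cache file): take a lock around the miss path, and re-check the cache after acquiring it, so only the first waiter performs the query.

```python
import threading

cache = {}
cache_lock = threading.Lock()
query_count = 0             # how many times the "query" actually ran

def expensive_query():
    global query_count
    query_count += 1        # only ever called while holding cache_lock
    return "result"

def get_data(key):
    # Fast path: no lock needed once the cache is warm.
    if key in cache:
        return cache[key]
    with cache_lock:
        # Re-check after acquiring the lock: another visitor may have
        # filled the cache while we were waiting (double-checked locking).
        if key in cache:
            return cache[key]
        data = expensive_query()
        cache[key] = data
        return data

threads = [threading.Thread(target=get_data, args=("forum_index",))
           for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(query_count)  # 1: only the first visitor runs the query
```

The trade-off is that visitors B, C, ... now block waiting for A instead of querying in parallel, which is usually what you want for an expensive query but adds latency for them.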
Anyway, caching still provides a nice performance boost, according to the monitoring reports. I have just implemented this method on a phpBB2-based board that has a constant rate of 400 to 600 users online (the typical number shown on the forum index, over the past 5 minutes). Still, I'm wondering whether it would be worth adding something to detect how often more than one process ends up doing the same job.
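Detecting duplicated work without changing the algorithm could be as simple as checking, just before writing the cache entry, whether someone else already wrote it. A Python sketch of that idea (hypothetical names; the per-visitor `query_time` argument just staggers the threads so the race is reproducible):

```python
import threading
import time

cache = {}
duplicated = 0              # how many queries turned out to be redundant

def get_data(key, query_time):
    global duplicated
    if key in cache:
        return cache[key]
    time.sleep(query_time)  # simulate the query; duration varies per visitor
    data = "result"
    # Before writing, check whether another visitor already cached this key:
    if key in cache:
        duplicated += 1     # someone finished first, so this query was wasted
    cache[key] = data
    return data

# Five visitors start before the cache is warm; their queries take 0.05-0.25s.
threads = [threading.Thread(target=get_data, args=("forum_index", 0.05 * (i + 1)))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(duplicated)  # 4: four of the five queries were redundant
```

Logging that counter over a day would show how often the race actually bites on a 400-600 user board, before deciding whether locking is worth the extra complexity.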
Or have I missed something, so that this can't really happen?