Stale Statistics instead of current ones

Silver Adventurer


Hi Folks,

I have an error in a logfile; if anyone has faced this issue, please share a solution. Thanks.



using stale statistics instead of current ones because stats collector is not responding



At that time, the database was running bulk insert operations.



EDB Team Member

Re: Stale Statistics instead of current ones

Hi Din2pg,


PostgreSQL tracks 'runtime statistics', which are maintained by a separate process known as the stats collector. When a backend process or transaction requests some of these stats, it asks the collector for a recent snapshot of the statistics file.


Since bulk inserts are going on in the database, the log message you see means that the collector did not handle such requests fast enough, and the backend decided to read an older snapshot instead. This can also easily happen if the I/O system is overloaded.


Another reason for this message to appear in the logs is that the process is unable to get the I/O required to write the statistics to stats_temp_directory.

Here are some solutions you can try:

  • Verify the permissions of the directory containing the statistics files, pg_stat_tmp (under the data directory).
  • Consider moving the statistics file to a RAM disk (e.g. a tmpfs mount on Linux) using stats_temp_directory in postgresql.conf. The space needed depends on the number of objects (databases, tables, indexes).
  • As you mentioned you are performing bulk loads, consider tuning a few parameters at the session level while initiating the bulk inserts. Refer to the following link to improve bulk insert performance:  (this link is for version 10; adjust according to your database version)
  • If none of the above solutions help, the last option you can consider is to increase RAM and tune parameters according to the new RAM size.
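As a rough sketch of the RAM disk option above (the mount point /var/run/pg_stat_tmp and the 64MB size are assumptions; adjust them for your system and the number of objects in your cluster):

```shell
# Create a RAM-backed directory for the statistics temp files.
# The directory must be writable by the postgres OS user.
sudo mkdir -p /var/run/pg_stat_tmp
sudo mount -t tmpfs -o size=64M,uid=postgres,gid=postgres tmpfs /var/run/pg_stat_tmp

# Then point PostgreSQL at it in postgresql.conf and reload:
#   stats_temp_directory = '/var/run/pg_stat_tmp'
# pg_ctl reload -D "$PGDATA"
```

To make the mount survive a reboot, add a matching tmpfs entry to /etc/fstab.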

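For the session-level tuning mentioned in the bulk-load bullet, a minimal sketch (the table name, file path, and values are illustrative assumptions; check which parameters apply to your version and hardware):

```shell
psql <<'SQL'
-- Speed up a bulk load in this session only; these settings
-- revert when the session ends.
SET synchronous_commit = off;       -- do not wait for WAL flush on each commit
SET maintenance_work_mem = '512MB'; -- faster index rebuilds afterwards
COPY my_table FROM '/path/to/data.csv' WITH (FORMAT csv);  -- COPY is much faster than row-by-row INSERT
SQL
```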
NOTE: This is not a severe error; your queries will execute without any issues, but they may use less-than-optimal execution plans because they have not seen an update to table statistics in a while.

© 2019 EnterpriseDB Corporation. All rights reserved.   |   Privacy Policy   |  Terms of Use   |   Trademarks