Memory usage is high and OOM occurs

SOLVED
Level 3 Traveller

Memory usage is high and OOM occurs

When I use EDB 9.5, memory usage is high and an OOM occurs. The EDB process uses over 90% of memory.

The relevant postgresql.conf settings are below:

shared_buffers = 2048MB             # min 128kB

#work_mem = 4MB                     # min 64kB

max_connections = 1000

 

When I ran `show all`, work_mem was 56688kB:

shared_buffers                      | 2GB

work_mem                            | 56688kB   

max_connections                     | 1000

 

I want to know: is the OOM related to work_mem? Do I need to set work_mem explicitly, and what is a suitable size for it?

 

 
5 REPLIES
EDB Team Member

Re: Memory usage is high and OOM occurs


@mark wrote:

When I use EDB 9.5, memory usage is high and an OOM occurs. The EDB process uses over 90% of memory.

The relevant postgresql.conf settings are below:

shared_buffers = 2048MB             # min 128kB

#work_mem = 4MB                     # min 64kB

max_connections = 1000

 

When I ran `show all`, work_mem was 56688kB:

shared_buffers                      | 2GB

work_mem                            | 56688kB

max_connections                     | 1000

 

I want to know: is the OOM related to work_mem? Do I need to set work_mem explicitly, and what is a suitable size for it?

 


It could.  Are you really establishing 1000 connections?  The server will allocate at least 1x work_mem for each established connection (and possibly more, depending on whether the SQL you execute uses ORDER BY clauses or causes merge/hash joins).

 

How much memory is allocated for this server?

 

The suitability of the current work_mem setting depends a lot on how much memory is allocated to your system and how many active connections you expect.  If the value is too small, sorting operations will spill to disk, and your performance on those operations will be adversely affected.  You can turn on logging for temp file creation by setting log_temp_files (see https://www.postgresql.org/docs/9.5/static/runtime-config-logging.html for details).
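To see why the combination of settings in the question is risky, it helps to multiply the numbers out. A back-of-the-envelope estimate, conservatively assuming just one work_mem allocation per connection (a single query with several sort/hash nodes can allocate more):

```sql
-- Worst case if all 1000 allowed connections each allocate one
-- work_mem-sized buffer (56688 kB) at the same time:
SELECT 1000 * 56688 / 1024.0 / 1024.0 AS worst_case_gib;
-- roughly 54 GiB of potential work_mem allocations, on top of the
-- 2 GB of shared_buffers -- far beyond what this server can absorb.
```

The point is not that this much is actually allocated at once, but that the configured ceiling vastly exceeds physical memory, which is exactly the situation where the OOM killer steps in.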

Adventurer

Re: Memory usage is high and OOM occurs

Thanks for your reply.

 

The server has 16GB of memory.

 

select count(*) from pg_stat_activity;

The count is 127.

 

What confuses me is that the DELETE statements use a lot of memory.

The DELETE statements are below:

DELETE FROM test WHERE testid IN (SELECT r_id FROM test1 WHERE p_id_id IN (SELECT re_id FROM ap_en WHERE link = $1));

DELETE FROM test WHERE test1_id = $1 AND id = $2;
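One common way to cut the memory footprint of nested IN-subquery deletes like the first one is to rewrite them with EXISTS, which lets the planner use semi-joins instead of materializing and hashing the whole subquery result. A sketch using the table and column names from the statement above (untested against this schema; NULL handling of IN vs. EXISTS should be checked for your data):

```sql
-- Rewrite of the first DELETE with a correlated EXISTS.
DELETE FROM test t
WHERE EXISTS (
    SELECT 1
    FROM test1 t1
    JOIN ap_en a ON a.re_id = t1.p_id_id   -- replaces the inner IN
    WHERE a.link = $1
      AND t1.r_id = t.testid               -- correlates to the outer row
);
```

Whether this actually helps depends on the plan chosen; compare both forms with EXPLAIN (ANALYZE, BUFFERS).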

 

Adventurer

Re: Memory usage is high and OOM occurs

The DELETE statements use subquery conditions, which can drive memory usage up.

I ran EXPLAIN (ANALYZE, VERBOSE, BUFFERS) against test data.

The memory usage exceeds the work_mem size. What should I do to deal with this problem?

 

work_mem
----------
 4MB
(1 row)

 

The EXPLAIN output is below:

LOG:  duration: 404462.266 ms  statement: EXPLAIN (ANALYZE, VERBOSE, BUFFERS) delete from test01 where id in (select id from test01 where id in (select id from test01 ) );
                                                                                  QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------
 Delete on public.test01  (cost=601224.31..903307.34 rows=147538 width=18) (actual time=403664.050..403664.050 rows=0 loops=1)
   Buffers: shared hit=253420 read=1631805 dirtied=961860 written=757385, temp read=12896 written=12864
   ->  Hash Join  (cost=601224.31..903307.34 rows=147538 width=18) (actual time=42525.840..58400.239 rows=1000000 loops=1)
         Output: test01.ctid, test01_1.ctid, test01_2.ctid
         Hash Cond: (test01.id = test01_1.id)
         Buffers: shared hit=32649 read=852576 dirtied=130708 written=163750, temp read=12896 written=12864
         ->  Seq Scan on public.test01  (cost=0.00..298025.75 rows=295075 width=10) (actual time=5791.800..15806.948 rows=1000000 loops=1)
               Output: test01.ctid, test01.id
               Buffers: shared hit=31 read=295044 dirtied=1 written=37108
         ->  Hash  (cost=601221.81..601221.81 rows=200 width=20) (actual time=36733.606..36733.606 rows=1000000 loops=1)
               Output: test01_1.ctid, test01_1.id, test01_2.ctid, test01_2.id
               Buckets: 65536 (originally 1024)  Batches: 16 (originally 1)  Memory Usage: 3932kB
               Buffers: shared hit=32618 read=557532 dirtied=130707 written=126642, temp read=3906 written=8931
               ->  HashAggregate  (cost=601219.81..601221.81 rows=200 width=20) (actual time=36002.291..36486.422 rows=1000000 loops=1)
                     Output: test01_1.ctid, test01_1.id, test01_2.ctid, test01_2.id
                     Group Key: test01_1.id
                     Buffers: shared hit=32618 read=557532 dirtied=130707 written=126642, temp read=3906 written=3904
                     ->  Hash Join  (cost=298767.94..600850.97 rows=147538 width=20) (actual time=22935.529..34965.227 rows=1000000 loops=1)
                           Output: test01_1.ctid, test01_1.id, test01_2.ctid, test01_2.id
                           Hash Cond: (test01_1.id = test01_2.id)
                           Buffers: shared hit=32618 read=557532 dirtied=130707 written=126642, temp read=3906 written=3904
                           ->  Seq Scan on public.test01 test01_1  (cost=0.00..298025.75 rows=295075 width=10) (actual time=0.043..10543.514 rows=1000000 loops=1)
                                 Output: test01_1.ctid, test01_1.id
                                 Buffers: shared hit=16308 read=278767
                           ->  Hash  (cost=298765.44..298765.44 rows=200 width=10) (actual time=22654.060..22654.060 rows=1000000 loops=1)
                                 Output: test01_2.ctid, test01_2.id
                                 Buckets: 131072 (originally 1024)  Batches: 2 (originally 1)  Memory Usage: 22527kB
                                 Buffers: shared hit=16310 read=278765 dirtied=130707 written=126642, temp written=1951
                                 ->  HashAggregate  (cost=298763.44..298765.44 rows=200 width=10) (actual time=21982.196..22406.438 rows=1000000 loops=1)
                                       Output: test01_2.ctid, test01_2.id
                                       Group Key: test01_2.id
                                       Buffers: shared hit=16310 read=278765 dirtied=130707 written=126642
                                       ->  Seq Scan on public.test01 test01_2  (cost=0.00..298025.75 rows=295075 width=10) (actual time=0.007..20551.162 rows=1000000 loops=1)
                                             Output: test01_2.ctid, test01_2.id
                                             Buffers: shared hit=16310 read=278765 dirtied=130707 written=126642
 Planning time: 9.100 ms
 Execution time: 404113.859 ms
(37 rows)
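The "Batches: 16 (originally 1)" and the temp read/written counters in the plan above show the hashes spilling past work_mem into temporary files. Those spills can be made visible in the server log via the log_temp_files setting mentioned earlier in the thread; a sketch (the threshold is in kB, and 0 logs every temp file regardless of size):

```sql
-- Log every temporary file the server creates; takes effect after a
-- config reload, no restart needed (ALTER SYSTEM requires 9.4+).
ALTER SYSTEM SET log_temp_files = 0;
SELECT pg_reload_conf();
```

With that enabled, each spilling query leaves a log line with the temp file size, which makes it easy to see which statements need more work_mem.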

 

 

EDB Team Member

Re: Memory usage is high and OOM occurs

Hi Mark,

 

You can also set work_mem at the session level if that is the concern here, since the queries in your environment may use extensive joins and sorting that require more work_mem.

 

SET work_mem = '20MB';
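If raising work_mem for a whole session is broader than needed, it can also be scoped to a single transaction with SET LOCAL; a sketch reusing the '20MB' example value and the `test` table from this thread:

```sql
-- SET LOCAL reverts automatically at COMMIT/ROLLBACK, so only this
-- one DELETE runs with the larger work_mem:
BEGIN;
SET LOCAL work_mem = '20MB';
DELETE FROM test WHERE test1_id = $1 AND id = $2;
COMMIT;
```

This keeps the global setting small (protecting the server with max_connections = 1000) while still giving the few memory-hungry statements room to sort and hash in memory.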

 
Adventurer

Re: Memory usage is high and OOM occurs

Hi ankit818, thanks for your reply. I will try to adjust the work_mem size.