Hello everyone. I have a database containing a table with the following structure:
CREATE TABLE videos ( id int8 NULL, data bytea NULL );
Binary files are stored in it; modifying its structure is not in my hands. When writing or reading one specific row I get the following error (the table has 5 records):
select * from videos where id = 2;
SQL Error [XX000]: ERROR: invalid memory alloc request size 1180388273
I can read all the other rows without problems.
If I query the size of the binaries I get:
select id, length(data) as tam from videos;
As you can see, id 2 has a size of 590 MB (it is the only one exceeding 500 MB); I understand that a bytea can store up to 1 GB.
The size in the error message (1180388273) is practically twice the size of the binary.
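That factor of two is consistent with the default bytea_output = 'hex', where the text form of a bytea is "\x" plus two hex digits per byte. A quick illustration (my own check, not data from the table):

-- 4 bytes of bytea render as 10 text characters ("\x" + 2 hex digits per byte),
-- so a 590 MB datum needs roughly a 1.18 GB allocation to render as text,
-- which exceeds PostgreSQL's 1 GB per-allocation limit
select octet_length('\xdeadbeef'::bytea)   as binary_bytes, -- 4
       length(('\xdeadbeef'::bytea)::text) as text_chars;   -- 10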
If I query the TOAST table that stores the data for that videos table I get:
select chunk_id, count(chunk_seq) from pg_toast.pg_toast_245165 group by 1;
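To double-check that the datum itself is stored whole, the chunk sizes can be summed as well (a sketch against the same pg_toast_245165 table; reading TOAST tables directly normally requires superuser, and the total is the stored size, which could be below 590 MB if compression kicked in):

select chunk_id,
       count(chunk_seq)              as chunks,
       sum(octet_length(chunk_data)) as total_bytes -- stored size of the datum
from pg_toast.pg_toast_245165
group by 1;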
The only way to write and read this row is with PostgreSQL's internal large-object functions: lo_export (to write the bytea to the filesystem) and lo_import (to load it from the filesystem).
Do you have any idea how to solve this, or a lead to follow?
Is this a bug?
I cannot even run pg_dump; I get the same error:
pg_dump: Dumping the contents of table "videos" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR: invalid memory alloc request size 1180388273
pg_dump: The command was: COPY public.videos (id, data) TO stdout;
I have changed disk, server, and OS (to CentOS 7) and the same thing happens, even back on PostgreSQL 9.6.
I have tried the following versions:
EnterpriseDB 10.4.9 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18), 64-bit
PostgreSQL 10.5 (Ubuntu 10.5-1.pgdg16.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609, 64-bit
PostgreSQL 9.6.10 on x86_64-pc-linux-gnu (Ubuntu 9.6.10-1.pgdg16.04+1), compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609, 64-bit
And the platform is:
We have consulted our Engineering team regarding this issue, and the following are their inputs:
1. The root cause of the memory issue you are facing is that selecting the output of a bytea column in text mode (i.e. via SELECT or pg_dump) requires more memory than the size of the underlying datum.
2. The limit for such memory allocations in Postgres is 1 GB.
When you store a video file in a Postgres database, lo_import and lo_export work perfectly fine. However, whenever you try to retrieve the data via SELECT or pg_dump, it will give a memory error (except for small video files).
As you have already mentioned in previous comments, you are storing video files larger than 500 MB. Hence, you will always get the error "ERROR: invalid memory alloc request size" on SELECT and pg_dump.
Also, as you reiterated in the previous comment, lo_import and lo_export are working fine in your case. Hence, we have de-escalated this case from our internal Engineering team.
Hope this clears your doubts.
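Given that explanation, one possible workaround sketch (not part of the engineering reply, and assuming the row itself is intact) is to read the bytea in slices, so that no single text conversion approaches the 1 GB limit:

-- read the 590 MB value in four 150 MB slices; each slice's hex text form
-- stays around 300 MB, well under the 1 GB allocation limit
select substring(data from 1         for 157286400) from videos where id = 2;
select substring(data from 157286401 for 157286400) from videos where id = 2;
select substring(data from 314572801 for 157286400) from videos where id = 2;
select substring(data from 471859201 for 157286400) from videos where id = 2;

A client can concatenate the slices to rebuild the file; past the end of the value, substring returns an empty bytea, so the reader knows when to stop.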
This indicates corruption of the table public.videos.
If you have a recent backup, please perform point-in-time recovery (PITR) of the table. After completing the PITR, run a pg_dump and make sure it completes successfully, which would confirm there is no corruption in the data.
If you do not have a backup, then you will have to:
Do a binary search to figure out which row is corrupt:
select * from public.videos limit <N>;
where N = 1 to 5, as you have 5 records.
Now, for whichever query gives you the error, find out the ctid of the row:
select ctid from public.videos limit <N>;
It's the last ctid that's the one:
delete from public.videos where ctid = '(block,offset)';
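For illustration, the sequence might look like this (the ctid value (0,3) here is hypothetical):

-- listing ctids does not detoast the bytea column, so this works even
-- when selecting data fails
select ctid, id from public.videos order by ctid;
-- suppose the failing row turned out to be at ctid (0,3):
delete from public.videos where ctid = '(0,3)';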
Make sure your pg_dump executes successfully.
Hope this helps.
Hi @chaitalirs, thanks for your response,
but I don't think the data in table public.videos is corrupt, because I can read/write all the data in the table except this row.
I have noticed that every time I write a file larger than 500 MB I get that error back; all the other files smaller than 500 MB work well for me.
I can only access this row's data in the videos table using the functions lo_export (to write the bytea to the filesystem) and lo_import (to load it from the filesystem).
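For what it's worth, that ~500 MB threshold matches the arithmetic of a 1 GB allocation cap against the 2x hex text expansion (a rough check of my own):

select 1073741824 / 2                            as max_binary_bytes, -- 536870912
       pg_size_pretty((1073741824 / 2)::bigint)  as approx_limit;     -- 512 MB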
Can you reproduce this by loading a file of more than 500 MB
in a table with this structure:
CREATE TABLE videos ( id int8 NULL, data bytea NULL );

### python script to load the file
import psycopg2

conn_string = "host='localhost' dbname='db' user='postgres' password='password' port=5432"
conn = psycopg2.connect(conn_string)

# a2 is my video of 590 MB
my_video = open('/tmp/a2', 'rb').read()

cursor = conn.cursor()
### query = "INSERT INTO public.videos (id, data) VALUES (%s, %s);" % (2, psycopg2.Binary(my_video))
cursor.execute("INSERT INTO public.videos (id, data) VALUES (%s, %s);",
               (22, psycopg2.Binary(my_video)))
conn.commit()
### here I get the error
All the other files smaller than 500 MB insert fine.
I could only load file a2 with lo_import, using the following function:
CREATE OR REPLACE FUNCTION public.bytea_import(
    IN  p_path   text,
    OUT p_result bytea)
  RETURNS bytea AS
$BODY$
declare
  l_oid oid;
begin
  -- stage the file as a large object, copy it into a bytea, then clean up
  select lo_import(p_path) into l_oid;
  select lo_get(l_oid) into p_result;
  perform lo_unlink(l_oid);
end;
$BODY$
  LANGUAGE plpgsql VOLATILE;

insert into videos select 2, bytea_import('/tmp/a2'); -- insert OK

select * from videos where id = 2;
SQL Error [XX000]: ERROR: invalid memory alloc request size 1180388273
If I insert a file of 143 MB it inserts fine and I can read it with a SELECT.
The only way to get it out of the database is with the lo_export function:
CREATE OR REPLACE FUNCTION public.lo_export_bfile(
    data     bytea,
    filename text)
  RETURNS integer AS
$BODY$
DECLARE
  l_oid      oid;
  lfd        integer;
  lsize      integer;
  msg        text;
  msg_detail text;
  msg_err    text;
  INV_WRITE  int := 131072;
BEGIN
  -- copy the bytea into a large object, export it to the filesystem, clean up
  l_oid := lo_create(0);
  lfd   := lo_open(l_oid, INV_WRITE); -- writing mode
  lsize := lowrite(lfd, data);
  PERFORM lo_close(lfd);
  PERFORM lo_export(l_oid, filename);
  PERFORM lo_unlink(l_oid);
  return lsize;
EXCEPTION
  WHEN OTHERS THEN
    GET STACKED DIAGNOSTICS
      msg        = message_text,
      msg_detail = pg_exception_detail,
      msg_err    = returned_sqlstate;
    RAISE EXCEPTION 'Error: %, %, %', msg, msg_detail, msg_err;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
select public.lo_export_bfile(data, '/tmp/video2') from videos where id = 2;
Do you need more information to reproduce it?
Thanks for sharing the details.
Currently we are trying to reproduce the issue with the help of the details you shared. We will update you soon.
If we require more details, we will let you know.
This seems to be an issue with insufficient free memory on the server. Is it possible for you to share memory usage statistics with us?
In addition to the previous response, sharing some more information here: the memory available on the server should be at least equal to the size of the file the user wishes to store, e.g. a user can't store a 500 MB file in a database without 500 MB of memory available.
Thanks & Regards,
              total        used        free      shared  buff/cache   available
Mem:          15920        5816        1797        5234        8306        4427
Swap:          8300         595        7705
These are my shared_buffers, work_mem, and maintenance_work_mem settings:
show shared_buffers;
 shared_buffers
----------------
 4GB
(1 row)

show work_mem;
show maintenance_work_mem;
Any other data you need?
It seems that there is not enough memory to convert the video file to bytea format.
Hence, we request you to add additional memory to the server and let us know the results after that.
Awaiting your reply.