Re: Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit

"Heikki Linnakangas" <heikki@xxxxxxxxxxxxxxxx> writes:
> For 8.4, it would be nice to improve that. I tested that on my laptop
> with a similarly-sized table, inserting each row in a pl/pgsql function
> with an exception handler, and I got very similar run times. According
> to oprofile, all the time is spent in TransactionIdIsInProgress. I think
> it would be pretty straightforward to store the committed subtransaction
> ids in a sorted array, instead of a linked list, and binary search.
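
(For concreteness, Heikki's sorted-array idea could look something like the
sketch below. This is not the actual backend code: the type, the array, and
the helper names are invented for the example, and the memory management is
hand-waved.)

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef uint32_t TransactionId;

/* Committed subtransaction XIDs of the current top-level transaction,
 * kept sorted ascending so lookups can binary-search the array instead
 * of walking a linked list. */
static TransactionId *committed_subxids = NULL;
static int nsubxids = 0;
static int maxsubxids = 0;

/* lower_bound-style search: first index whose xid is >= the probe */
static int
subxid_lower_bound(TransactionId xid)
{
    int lo = 0, hi = nsubxids;

    while (lo < hi)
    {
        int mid = lo + (hi - lo) / 2;

        if (committed_subxids[mid] < xid)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo;
}

/* Record a subtransaction as committed, keeping the array sorted. */
static void
record_committed_subxid(TransactionId xid)
{
    int pos;

    if (nsubxids >= maxsubxids)
    {
        maxsubxids = maxsubxids ? maxsubxids * 2 : 64;
        committed_subxids = realloc(committed_subxids,
                                    maxsubxids * sizeof(TransactionId));
    }
    pos = subxid_lower_bound(xid);
    memmove(&committed_subxids[pos + 1], &committed_subxids[pos],
            (nsubxids - pos) * sizeof(TransactionId));
    committed_subxids[pos] = xid;
    nsubxids++;
}

/* O(log n) replacement for the linked-list membership walk. */
static bool
subxid_is_committed(TransactionId xid)
{
    int pos = subxid_lower_bound(xid);

    return pos < nsubxids && committed_subxids[pos] == xid;
}

(The quoted oprofile numbers put all the time on the lookup side, in
TransactionIdIsInProgress, and that is the part this turns from a linear
walk into an O(log n) probe.)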

I think the OP is not complaining about the time to run the transaction
that has all the subtransactions; he's complaining about the time to
scan the table that it emitted.  Presumably, each row in the table has a
different (sub)transaction ID and so we are thrashing the clog lookup
mechanism.  It only happens once because after that the XMIN_COMMITTED
hint bits are set.
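
(Very roughly, the hint-bit shortcut behaves like this simplified
illustration; it is not real PostgreSQL source, and the struct, macro, and
function names are made up here. The first scan pays for a clog lookup per
distinct XID and caches the verdict on the tuple, so later scans take the
fast path.)

#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

#define HINT_XMIN_COMMITTED 0x0001

typedef struct
{
    TransactionId xmin;       /* inserting (sub)transaction id */
    uint16_t      hint_bits;  /* cached commit status */
} TupleHeader;

/* Stub standing in for a real pg_clog lookup. */
static bool
clog_xid_is_committed(TransactionId xid)
{
    (void) xid;
    return true;              /* assume committed for this illustration */
}

static bool
tuple_inserted_by_committed_xact(TupleHeader *tup)
{
    /* Fast path: a previous scan already looked this XID up in clog
     * and cached the answer on the tuple itself. */
    if (tup->hint_bits & HINT_XMIN_COMMITTED)
        return true;

    /* Slow path: consult pg_clog.  With one distinct subxid per row,
     * these lookups bounce all over the clog pages. */
    if (clog_xid_is_committed(tup->xmin))
    {
        tup->hint_bits |= HINT_XMIN_COMMITTED;   /* set hint for next time */
        return true;
    }
    return false;
}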

This probably ties into the recent discussions about eliminating the
fixed-size allocations for SLRU buffers --- I suspect it would've run
better if it could have scaled up the number of pg_clog pages held in
memory.
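
(And a toy picture of that fixed-size buffer pool; the sizes, names, and
replacement policy here are invented rather than taken from the real SLRU
code. The point is just that when the scan's working set of clog pages is
larger than the pool, almost every lookup has to evict and re-read a page.)

#include <stdint.h>

#define CLOG_PAGE_SIZE   8192
#define XACTS_PER_PAGE   (CLOG_PAGE_SIZE * 4)   /* 2 status bits per xid */
#define NUM_CLOG_BUFFERS 8                      /* small, fixed at build time */

typedef uint32_t TransactionId;

/* which clog page each slot currently holds; -1 = empty */
static int  buffered_pageno[NUM_CLOG_BUFFERS] = {-1, -1, -1, -1, -1, -1, -1, -1};
static char buffers[NUM_CLOG_BUFFERS][CLOG_PAGE_SIZE];

static void
read_clog_page(int pageno, char *dest)
{
    /* stub; the real thing reads the page from pg_clog on disk */
    (void) pageno;
    (void) dest;
}

/* Return the slot holding the page that covers this xid, loading it if needed. */
static int
clog_slot_for_xid(TransactionId xid)
{
    int pageno = (int) (xid / XACTS_PER_PAGE);
    int slot;

    for (slot = 0; slot < NUM_CLOG_BUFFERS; slot++)
        if (buffered_pageno[slot] == pageno)
            return slot;                         /* hit: cheap */

    /* miss: evict a slot and read the page in; this is what dominates
     * when XIDs are scattered over many more pages than the pool holds */
    slot = pageno % NUM_CLOG_BUFFERS;            /* toy replacement policy */
    read_clog_page(pageno, buffers[slot]);
    buffered_pageno[slot] = pageno;
    return slot;
}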

			regards, tom lane

