Query slows after offset of 100K

Hi all,

I've been reading through the performance list archives for the last few months and haven't been able to find a solution to my problem, so I'm posting the specifics here now.  If anyone can suggest what might work (or point me to where this has been covered before), that would be great.  My current suspicion is that the shared_buffers setting is far too low.

My query is as follows:
SELECT o.objectid, o.objectname, o.isactive, o.modificationtime
FROM   object o
WHERE  ( o.deleted = false OR o.deleted IS NULL )
AND    o.accountid = 354
ORDER  BY 2
LIMIT  20 OFFSET 10000;

The object table has primary key objectid, an index on objectname, and a unique constraint on (accountid, objectname).
What I'm trying to do is show the user only 20 records at a time, sorted on objectname, with the page they're on determining which 20 I display (that's why I've got LIMIT plus OFFSET, of course).
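
For reference, the table looks roughly like this; the column types and the plain objectname index's name are my guesses for illustration, not the exact DDL:

-- Rough shape of the table (types are illustrative):
CREATE TABLE object (
    objectid         integer PRIMARY KEY,
    accountid        integer NOT NULL,
    objectname       varchar(255),
    isactive         boolean,
    deleted          boolean,        -- nullable, hence the IS NULL check
    modificationtime timestamp
);

-- Illustrative name for the plain objectname index:
CREATE INDEX object_objectname_idx ON object (objectname);

-- This unique constraint backs the account_objectname index used in the fast plan:
ALTER TABLE object
    ADD CONSTRAINT account_objectname UNIQUE (accountid, objectname);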

When offsetting up to about 90K records, the EXPLAIN ANALYZE output is similar to the following (this run used OFFSET 10000):
 Limit  (cost=15357.06..15387.77 rows=20 width=35) (actual time=19.235..19.276 rows=20 loops=1)
   ->  Index Scan using account_objectname on "object" o  (cost=0.00..1151102.10 rows=749559 width=35) (actual time=0.086..14.981 rows=10020 loops=1)
         Index Cond: (accountid = 354)
         Filter: ((NOT deleted) OR (deleted IS NULL))
 Total runtime: 19.315 ms

If I move the offset up to 100K records or higher, I get:
 Limit  (cost=145636.26..145636.31 rows=20 width=35) (actual time=13524.327..13524.355 rows=20 loops=1)
   ->  Sort  (cost=145386.26..147260.16 rows=749559 width=35) (actual time=13409.216..13481.793 rows=100020 loops=1)
         Sort Key: objectname
         ->  Seq Scan on "object" o  (cost=0.00..16685.49 rows=749559 width=35) (actual time=0.011..1600.683 rows=749549 loops=1)
               Filter: (((NOT deleted) OR (deleted IS NULL)) AND (accountid = 354))
 Total runtime: 14452.374 ms
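
One experiment I haven't run yet, but which I believe should confirm whether the plan flip itself is the problem, is to disable seq scans for a single session and re-run the slow case (a diagnostic sketch only, not something I'd set in production):

-- Session-level diagnostic only:
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT o.objectid, o.objectname, o.isactive, o.modificationtime
FROM   object o
WHERE  ( o.deleted = false OR o.deleted IS NULL )
AND    o.accountid = 354
ORDER  BY 2
LIMIT  20 OFFSET 100000;
RESET enable_seqscan;

If that brings back the index scan and the fast runtime, then it's purely the planner's cost estimate tipping over at the larger offset.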

That's a huge drop in performance, and I'm wondering if there's a way around it.
Right now there are about 750K records in the object table, and that number will only grow over time.
I've already run a VACUUM FULL on the table and experimented with different work_mem settings, but so far I'm not seeing any improvement.
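
One workaround I've been considering, sketched below but untested, is keyset paging: remember the last objectname shown and start the next page from there, instead of using OFFSET at all.  Here :last_objectname is a placeholder the application would supply:

-- Keyset paging sketch (no OFFSET, so cost doesn't grow with page number):
SELECT o.objectid, o.objectname, o.isactive, o.modificationtime
FROM   object o
WHERE  ( o.deleted = false OR o.deleted IS NULL )
AND    o.accountid = 354
AND    o.objectname > :last_objectname
ORDER  BY o.objectname
LIMIT  20;

Since (accountid, objectname) is unique, objectname can't repeat within an account, so the strict > comparison shouldn't skip or duplicate rows.  The downside is that jumping straight to an arbitrary page number no longer works.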

Are there any other settings I can change to get back to that super-fast index scan?  Is shared_buffers = 2000 far too low?  The reason I haven't actually changed that setting yet is that some system limitations make it more work than a simple edit to the config file.  If I can get confirmation that this is the likely cause and fix, I can get the extra changes made.
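
If it helps to be concrete, the change I'd be making is along these lines; the target value is just a rough guess at about 25% of RAM, and on 8.1 shared_buffers is a count of 8KB buffers, so my current 2000 is only about 16MB:

# postgresql.conf sketch (8.1: value is a number of 8KB buffers; needs a restart)
shared_buffers = 131072        # ~1GB, up from 2000 (~16MB)

The system-limitations part is that (assuming Linux) the kernel's shared memory limit would also have to be raised to match, e.g. kernel.shmmax via sysctl, which is what makes this more than a one-line config edit for us.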

I'm running a quad-core 2.33GHz Xeon with 4GB of memory (1.2GB free), on PostgreSQL 8.1.11.

Thanks,
    Michael Lorenz