optimizing deletes
Date: 20 Jan 1994 21:31:45 GMT
Message-ID: <2hmt81$15a_at_aurns1.aur.alcatel.com>
In order to avoid a massive stupidity, I need to ask:
What methods are normally used to speed up massive deletes in
>100K-row, multiply-indexed tables?
I have a set of tables, each with multiple indexes, that will be periodically updated with new information. The new data must completely replace the old, so all of the old information must be removed before the new data is loaded. The rows to be removed are not the only rows in the table, i.e., I can't just drop the table.
New data will be loaded relatively frequently, several times a week, at least.
The "delete from table_xxx where field1 = 'foobar'" takes "Too Long".
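For concreteness, one refresh cycle today looks roughly like this; the table and column names are placeholders standing in for the real schema, not the actual DDL:

    -- current cycle (placeholder names): remove one data set's old rows,
    -- then reload that data set's replacement rows into the same table
    delete from table_xxx
     where field1 = 'foobar';

    insert into table_xxx (field1, field2, field3)
    values ('foobar', 'replacement data', 1);
    -- ...repeated (or bulk-loaded) for every new row in the data set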
The proposed solution is to create multiple tables, with names fabricated by concatenating what used to be the value of field1 onto the old table name: one set of tables for each set of data in the db. On update, drop the existing tables, create new ones, load the data, and add the indexes.
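A sketch of that scheme, with made-up column names and types just to show the shape of it:

    -- one table per data set; the old field1 value ('foobar' here)
    -- becomes part of the table name
    drop table table_xxx_foobar;

    create table table_xxx_foobar
    (
        field2  varchar(40),
        field3  integer,
        field4  varchar(80)
    );

    -- bulk-load the new rows for this data set, then rebuild the indexes
    create index txf_idx2 on table_xxx_foobar (field2);
    create index txf_idx3 on table_xxx_foobar (field3);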
Because the amount of data in any one set of tables is limited, this ends up being very fast. My objection is that I can't know ahead of time what the table name is for any query; I have to first query to find the appropriate table name, then build the real query. Multi-table queries, say something that needs to hit every set of tables, would be a nightmare.
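For example, every query would turn into the following two-step dance (the data_set_catalog lookup table and the column names are hypothetical):

    -- step 1: find out which table holds the data set I care about
    select table_name
      from data_set_catalog
     where data_set = 'foobar';

    -- step 2: the application glues that result into the real query
    -- and submits it as dynamic SQL
    select field2, field3
      from table_xxx_foobar
     where field3 > 100;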
There has to be a better way. Any ideas?
---
Alan Davis                 aur : davis_at_aurfs1  | Employed by, but not
CAD Specialist             BIX : adavis          | speaking for Alcatel
Alcatel Network Systems    CIS : 72317,3661      | Network Systems.
davis_at_aur.alcatel.com