Message-Id: <10600.115563@fatcity.com>
From: Oliver Artelt
Date: Fri, 25 Aug 2000 16:43:32 +0200
Subject: Perf of 10000+ tables in a db

Good (morning | day | evening) everyone,

I have to store Horn clauses in a database to do some AI/ML processing. The facts can be stored as tables, and the rules are mapped to views. (The rules may be recursive, but that should stay my problem :-).) The system is used as a data mining/disclosure system, so many fact sets can arise. (This is not a DW!)

Storing them in separate tables could lead to many thousand tables holding 10, 100, or 1000 rows each, with fewer than 10 columns per table. The number of views is huge too, say twice the number of tables. These views depend on each other: a single view may 'call' three others. The queries will mostly be inner joins over a few tables, selected via equality conditions. There is also a master table that translates fact/rule names into table/view names; a sketch of the whole scheme follows below my signature.

Has anybody experience with such a funky configuration? SAP experts or similar? How does such a system cope with its catalog data? What about tuning: only table scans, or automated analysis and creation of indexes / automated materialization of heavily used views? What about dependencies between views: could recompilation of the views become a bottleneck?

Questions and questions... I need your suggestions.

Thanks, oli.

Oliver Artelt, System and Database Administration
---------------------------------------------------------------
cubeoffice GmbH & Co.KG # jordanstrasse 7 # 39112 magdeburg
phone: +49 (0)391 6 11 28 10 # fax: +49 (0)391 6 11 28 10
email: oli@cubeoffice.de # web: http://www.cubeoffice.de
---------------------------------------------------------------
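
P.S. A minimal sketch of what I mean, in generic SQL; the names (fact_parent, rule_grandparent, clause_catalog) are made up for illustration:

    -- one fact set, e.g. the clauses parent(X, Y), stored as a plain table
    CREATE TABLE fact_parent (
        x  VARCHAR(40) NOT NULL,  -- first argument of the clause
        y  VARCHAR(40) NOT NULL   -- second argument of the clause
    );

    -- one rule, grandparent(X, Z) :- parent(X, Y), parent(Y, Z),
    -- mapped to a view: an inner join with an equality condition
    CREATE VIEW rule_grandparent AS
        SELECT p1.x AS x, p2.y AS z
          FROM fact_parent p1
          JOIN fact_parent p2 ON p2.x = p1.y;

    -- the master table translating fact/rule names into table/view names
    CREATE TABLE clause_catalog (
        clause_name  VARCHAR(40) PRIMARY KEY,  -- e.g. 'grandparent'
        object_name  VARCHAR(40) NOT NULL,     -- e.g. 'rule_grandparent'
        object_type  CHAR(1)     NOT NULL      -- 'T' = table, 'V' = view
    );

The recursive rules (ancestor instead of grandparent, say) don't fit into plain views like this, but as I said, that stays my problem.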