Updating index information without recreating the index AND Problem executing ctx procedures (merged) [message #270431] Wed, 26 September 2007 21:34
amazing
Messages: 38
Registered: September 2007
Location: Venezuela
Member
Hi. I have a base table with a CONTEXT index on it, and I perform various operations on the base table: inserts, updates, and deletes. This is my index definition, with substring_index and prefix_index set to true:

create index idx_my_index on my_table(my_column)
indextype is ctxsys.context
parameters('LEXER my_lexer WORDLIST my_word_list
STORAGE my_storage STOPLIST my_stop_list
SYNC (ON COMMIT) TRANSACTIONAL');

When inserting there is no problem, but how can I update the data in the $I and $P tables when I update or delete, without recreating the index? Any ideas?
Problem executing ctx procedures... [message #270670 is a reply to message #270431] Thu, 27 September 2007 14:02
amazing
I have a trigger that executes the ctx_ddl.sync_index procedure and returns an ORA-00900 error.

This is the source:

create or replace trigger my_trigger
after update or delete on my_table
for each row
declare

synchronization_command string(100) := 'begin ctx_ddl.sync_index(''my_index'', ''10M''); end;';

begin

execute immediate synchronization_command;

end my_trigger;
Re: Problem executing ctx procedures... [message #270726 is a reply to message #270670] Thu, 27 September 2007 20:39
Barbara Boehmer
Messages: 7995
Registered: November 2002
Location: California, USA
Senior Member
What version of Oracle are you using?

If using 10g, then sync(on commit) should be all you need. However, you need to make sure that the column the index is created on has been updated and committed. If, for example, you are using a multi_column_datastore, updating any one of the other data columns is not sufficient to trigger synchronization. You can use a before-update row trigger that sets the new value of the indexed column to the old value to force this.
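A minimal sketch of that kind of trigger, assuming a hypothetical table my_docs whose CONTEXT index is built on indexed_col over a multi_column_datastore:

```sql
-- Hypothetical sketch (table and column names are assumptions):
-- when only the other datastore columns change, re-assign the indexed
-- column so that Oracle Text sees the row as changed and sync (on commit)
-- picks it up.
create or replace trigger touch_indexed_col
  before update on my_docs
  for each row
begin
  :new.indexed_col := :old.indexed_col;
end;
/
```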

If using 9i, then you need a statement trigger with ctx_ddl.sync_index, but you should not use dynamic SQL like execute immediate to execute it. When executing anything from a stored procedure or trigger, you need privileges granted explicitly, not through a role. So you will need to grant execute on ctx_ddl explicitly; the CTXAPP role alone is insufficient.

If none of this solves your problem, then you need to provide your version and a copy-and-paste reproduction of the problem, including table creation, insert, index creation (with all preferences and attributes), synchronization method, update, commit, a dbms_lock.sleep to allow time for synchronization after commit, and a query showing that synchronization did not occur.
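A skeleton of such a test case might look like this (all object names are placeholders; substitute your own preferences and attributes):

```sql
-- Placeholder reproduction script: table, data, index, DML, then a check.
create table my_table (my_column varchar2(100));
insert into my_table values ('cats dogs birds');
create index idx_my_index on my_table (my_column)
  indextype is ctxsys.context
  parameters ('SYNC (ON COMMIT)');
update my_table set my_column = 'cats dogs horses';
commit;
exec dbms_lock.sleep (15)  -- give the sync job time to run
select token_text from dr$idx_my_index$i;  -- do the updated tokens appear?
```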
Re: Problem executing ctx procedures... [message #270729 is a reply to message #270726] Thu, 27 September 2007 21:07
amazing
I'm using 10g.

I resolved the procedure error by granting execute any procedure to my_user.

This is the trigger source:

create or replace trigger my_trigger
after update or delete on my_base_table
for each row
begin

ctx_ddl.sync_index('my_index', '10M');

ctx_ddl.optimize_index('my_index', 'FULL');

end my_trigger;

So, when I execute DML on my base table, it returns an ORA-04092 error: cannot COMMIT in a trigger.

Any solution or suggestion for performing this operation dynamically?

You said that with 10g sync(on commit) is all I need. But when I delete records from my_base_table, the $I and $P tables do not update.

Re: Problem executing ctx procedures... [message #270780 is a reply to message #270729] Fri, 28 September 2007 00:41
Barbara Boehmer
If you were using 9i, you would use dbms_job.submit in your trigger to run ctx_ddl.sync_index on commit. But, since you are using 10g, you need to drop the trigger, which may be preventing your sync(on commit) from working. If that does not work, then post what I previously requested. Sync(on commit) does work for updates and deletes, so you must be doing something wrong; I have listed some of the frequent causes already. You mentioned that it does not update the $I table, but do queries return the proper information? Perhaps you just don't understand how the $I table is "updated": new rows are added to the dr$... tables upon synchronization, rather than old rows being updated. You still need to periodically rebuild the index to eliminate the fragmentation that any type of synchronization produces.
Re: Problem executing ctx procedures... [message #270793 is a reply to message #270729] Fri, 28 September 2007 01:04
Barbara Boehmer
The following is a copy of something I posted on another website some time ago. It shows how rows are added to the dr$... tables rather than updated, and how the fragmentation is eliminated upon optimization or rebuilding. Sync(on commit) is a good thing, but you probably should not be trying to optimize after every row that changes. The demonstration only shows the $I table; some of the data that indicates which rows are current is in the other tables.



When you use "sync (on commit)", rows are added to your index when you commit. This makes the data just entered immediately searchable, but it also results in fragmentation. To reduce the fragmentation, you can periodically rebuild or fully optimize; what you choose and how often depends on the circumstances. If your data needs to be immediately searchable, then you need to synchronize on commit; if you don't synchronize or rebuild, it won't be searchable. A rebuild makes the data searchable and eliminates fragmentation, but it can take a while, and you may not want to wait for that. How often you rebuild or optimize to reduce fragmentation depends on the volume of inserts, updates, and deletes; it might be nightly, weekly, or monthly. Please see the demonstration below, which illustrates what happens to the index when you synchronize on commit or fully optimize.

SCOTT@10gXE> CREATE TABLE t1 (doc VARCHAR2(60))
  2  /

Table created.

SCOTT@10gXE> Create index docindex on T1(doc) indextype is ctxsys.context
  2  parameters ('Sync (on commit)')
  3  /

Index created.

SCOTT@10gXE> INSERT INTO t1 VALUES ('cats dogs birds')
  2  /

1 row created.

SCOTT@10gXE> COMMIT
  2  /

Commit complete.

SCOTT@10gXE> COLUMN token_text FORMAT A30
SCOTT@10gXE> SELECT token_text, token_type, token_first, token_last, token_count
  2  FROM   dr$docindex$i
  3  /

TOKEN_TEXT                     TOKEN_TYPE TOKEN_FIRST TOKEN_LAST TOKEN_COUNT
------------------------------ ---------- ----------- ---------- -----------
BIRDS                                   0           1          1           1
CATS                                    0           1          1           1
DOGS                                    0           1          1           1

SCOTT@10gXE> INSERT INTO t1 VALUES ('cats dogs horses')
  2  /

1 row created.

SCOTT@10gXE> COMMIT
  2  /

Commit complete.

SCOTT@10gXE> SELECT token_text, token_type, token_first, token_last, token_count
  2  FROM   dr$docindex$i
  3  /

TOKEN_TEXT                     TOKEN_TYPE TOKEN_FIRST TOKEN_LAST TOKEN_COUNT
------------------------------ ---------- ----------- ---------- -----------
BIRDS                                   0           1          1           1
CATS                                    0           1          1           1
DOGS                                    0           1          1           1
CATS                                    0           2          2           1
DOGS                                    0           2          2           1
HORSES                                  0           2          2           1

6 rows selected.

SCOTT@10gXE> INSERT INTO t1 VALUES ('cats dogs mice')
  2  /

1 row created.

SCOTT@10gXE> COMMIT
  2  /

Commit complete.

SCOTT@10gXE> SELECT token_text, token_type, token_first, token_last, token_count
  2  FROM   dr$docindex$i
  3  /

TOKEN_TEXT                     TOKEN_TYPE TOKEN_FIRST TOKEN_LAST TOKEN_COUNT
------------------------------ ---------- ----------- ---------- -----------
BIRDS                                   0           1          1           1
CATS                                    0           1          1           1
DOGS                                    0           1          1           1
CATS                                    0           2          2           1
DOGS                                    0           2          2           1
HORSES                                  0           2          2           1
CATS                                    0           3          3           1
DOGS                                    0           3          3           1
MICE                                    0           3          3           1

9 rows selected.

SCOTT@10gXE> UPDATE t1
  2  SET    doc = 'cats dogs rabbits'
  3  WHERE  doc = 'cats dogs mice'
  4  /

1 row updated.

SCOTT@10gXE> COMMIT
  2  /

Commit complete.

SCOTT@10gXE> SELECT token_text, token_type, token_first, token_last, token_count
  2  FROM   dr$docindex$i
  3  /

TOKEN_TEXT                     TOKEN_TYPE TOKEN_FIRST TOKEN_LAST TOKEN_COUNT
------------------------------ ---------- ----------- ---------- -----------
BIRDS                                   0           1          1           1
CATS                                    0           1          1           1
DOGS                                    0           1          1           1
CATS                                    0           2          2           1
DOGS                                    0           2          2           1
HORSES                                  0           2          2           1
CATS                                    0           3          3           1
DOGS                                    0           3          3           1
MICE                                    0           3          3           1
CATS                                    0           4          4           1
DOGS                                    0           4          4           1
RABBITS                                 0           4          4           1

12 rows selected.

SCOTT@10gXE> DELETE FROM t1
  2  WHERE  doc = 'cats dogs rabbits'
  3  /

1 row deleted.

SCOTT@10gXE> COMMIT
  2  /

Commit complete.

SCOTT@10gXE> SELECT token_text, token_type, token_first, token_last, token_count
  2  FROM   dr$docindex$i
  3  /

TOKEN_TEXT                     TOKEN_TYPE TOKEN_FIRST TOKEN_LAST TOKEN_COUNT
------------------------------ ---------- ----------- ---------- -----------
BIRDS                                   0           1          1           1
CATS                                    0           1          1           1
DOGS                                    0           1          1           1
CATS                                    0           2          2           1
DOGS                                    0           2          2           1
HORSES                                  0           2          2           1
CATS                                    0           3          3           1
DOGS                                    0           3          3           1
MICE                                    0           3          3           1
CATS                                    0           4          4           1
DOGS                                    0           4          4           1
RABBITS                                 0           4          4           1

12 rows selected.

SCOTT@10gXE> EXEC CTX_DDL.OPTIMIZE_INDEX ('docindex', 'FULL')

PL/SQL procedure successfully completed.

SCOTT@10gXE> SELECT token_text, token_type, token_first, token_last, token_count
  2  FROM   dr$docindex$i
  3  /

TOKEN_TEXT                     TOKEN_TYPE TOKEN_FIRST TOKEN_LAST TOKEN_COUNT
------------------------------ ---------- ----------- ---------- -----------
BIRDS                                   0           1          1           1
HORSES                                  0           2          2           1
CATS                                    0           1          2           2
DOGS                                    0           1          2           2

SCOTT@10gXE>

Re: Problem executing ctx procedures... [message #270990 is a reply to message #270793] Fri, 28 September 2007 11:43
amazing
First of all, I want to thank you for your help. It is very useful to me.

Now...to the problem...

I'm dealing with a database that has a table storing information about companies. We are talking about a specific table that holds a huge number of company names (plain text), so I use the CONTEXT index type to perform various optimized search operations. I have no problem with the search results, since I use sync(on commit). But my dr$ tables grow too much because of the volume of data, and since this is a huge and busy database I want to optimize all I can. So I want to perform

ctx_ddl.sync_index('my_index', '10M');

ctx_ddl.optimize_index('my_index', 'FULL');

rather than rebuilding the index, because that would cost too much on a working database.

So... I thought about an after-update-or-delete trigger to perform sync_index and optimize_index, but that approach doesn't work, because of the error previously mentioned.

So... how can I perform these operations dynamically? Or do you have any suggestion for updating my dr$ tables after updating or deleting rows on the base table?


Re: Problem executing ctx procedures... [message #271056 is a reply to message #270990] Fri, 28 September 2007 18:29
Barbara Boehmer
As I said before, you should not execute them dynamically. You should use dbms_job.submit within the trigger to execute them upon commit. But I suspect that the optimize will cause you more performance problems than it will solve.
Re: Problem executing ctx procedures... [message #271059 is a reply to message #270990] Fri, 28 September 2007 18:49
Barbara Boehmer
Here is an example that I posted previously on another forum.

scott@ORA92> CREATE TABLE test_files
  2    (test_col VARCHAR2(60))
  3  /

Table created.

scott@ORA92> CREATE INDEX file_content_idx
  2  ON test_files (test_col)
  3  INDEXTYPE IS CTXSYS.CONTEXT
  4  /

Index created.

scott@ORA92> CREATE OR REPLACE TRIGGER update_file_content
  2    AFTER INSERT OR UPDATE OR DELETE ON test_files
  3  DECLARE
  4    v_job NUMBER;
  5  BEGIN
  6    if deleting then
  7  	 DBMS_JOB.SUBMIT
  8  	   (v_job, 'ctx_ddl.optimize_index(''file_content_idx'',''FULL'');', SYSDATE);
  9    else
 10  	 DBMS_JOB.SUBMIT
 11  	   (v_job, 'ctx_ddl.sync_index(''file_content_idx'');', SYSDATE);
 12    end if;
 13  END;
 14  /

Trigger created.

scott@ORA92> SHOW ERRORS
No errors.
scott@ORA92> INSERT INTO test_files (test_col)
  2  VALUES ('first inserted test record')
  3  /

1 row created.

scott@ORA92> COMMIT
  2  /

Commit complete.

scott@ORA92> EXEC DBMS_LOCK.SLEEP (15)

PL/SQL procedure successfully completed.

scott@ORA92> SELECT token_text FROM dr$file_content_idx$i
  2  /

TOKEN_TEXT
----------------------------------------------------------------
FIRST
INSERTED
RECORD
TEST

scott@ORA92> SELECT * FROM test_files
  2  WHERE  CONTAINS (test_col, 'first') > 0
  3  /

TEST_COL
------------------------------------------------------------
first inserted test record

scott@ORA92>

Re: Problem executing ctx procedures... [message #271061 is a reply to message #271056] Fri, 28 September 2007 20:05
amazing
Why do you say that the optimize will cause more performance problems than it will solve? Is it not the best choice?
Re: Problem executing ctx procedures... [message #271066 is a reply to message #271061] Fri, 28 September 2007 21:56
Barbara Boehmer
Optimizing can take a long time. I wouldn't do it after every insert, update, or delete. If you think you need to optimize frequently, you might at least try scheduling it to run hourly using dbms_job.submit, but not from a trigger. As I said before, for 10g I would just use sync(on commit) without the trigger. The example I posted with the trigger was for 9i with special circumstances. If you have a situation where you receive occasional batches of DML in one statement, then such a trigger is appropriate.
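For reference, an hourly optimize job could be submitted once from a normal session along these lines (a sketch; my_index is a placeholder):

```sql
-- Submit a recurring job that fully optimizes the index every hour.
variable jobno number
begin
  dbms_job.submit (
    job       => :jobno,
    what      => 'ctx_ddl.optimize_index(''my_index'', ''FULL'');',
    next_date => sysdate,
    interval  => 'SYSDATE + 1/24');  -- re-run one hour after each run
  commit;  -- dbms_job.submit takes effect on commit
end;
/
```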
Re: Problem executing ctx procedures... [message #271067 is a reply to message #271066] Fri, 28 September 2007 22:05
amazing
Once again thanks for the information.