Feed aggregator

Linux: journalctl and systemd - better than /var/log/messages?

Dietrich Schroff - Fri, 2018-09-28 13:51
Nearly 8 years ago systemd was introduced on some Linux distributions (see here). Last week I discovered some helpful commands, which I'll share with you.

If you want to take a look at kernel messages (for example from system boot), your command is
dmesg
The new equivalent is
journalctl -k
Ok - not really amazing.
But all of you know the message
See "systemctl status nginx.service" and "journalctl -xe" for details.
You can run "systemctl start/restart/stop" and, in case of an error, open the logs with "journalctl -xe". I would recommend opening a separate shell and running
journalctl -f
This is something like a "tail -f" on the systemd journal.
If you do a "systemctl restart network", the shell with "journalctl -f" shows DHCP waiting for an answer from the server, and you know why it's so slow. In particular you learn that your fifth interface has DHCP enabled although there is no DHCP server, which slows down every "systemctl restart network".

journalctl has some nice filters like
journalctl -p 0..4
This shows only the messages with the priorities
  • "emerg" (0), 
  • "alert" (1), 
  • "crit" (2), 
  • "err" (3), 
  • "warning" (4)
The scale continues with "notice" (5), "info" (6) and "debug" (7), which this filter excludes.
Or filter for something like network messages:
journalctl -u NetworkManager
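These filters can be combined. For example, to follow only warnings and worse from a single unit since the last boot (nginx.service is just an example unit):
journalctl -u nginx.service -p 0..4 -b -f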

And my favourite: Pipe your own log messages into the systemd-journal:
echo This is important | systemd-cat -t MightyJournal -p notice
This results in the following entry:
Sep 28 20:48:55 zerberus MightyJournal[28520]: This is important
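
To read such messages back later, you can filter on the identifier you passed to systemd-cat:
journalctl -t MightyJournal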

Oracle Offline Persistence Toolkit - After Request Sync Listener

Andrejus Baranovski - Fri, 2018-09-28 11:15
In my previous post, we learned how to handle a replay conflict - Oracle Offline Persistence Toolkit - Reacting to Replay Conflict. Another important thing to know is how to handle the response from a request which was replayed during sync (we are talking here about PATCH). It is not as obvious as handling the response from a direct REST call in a callback (there is no callback for a response which is synchronised later). You may wonder why you would need to handle the response after a successful sync. Well, there could be multiple reasons - for instance, you may read a returned value and update the value stored on the client.

The listener is registered in the Persistence Manager configuration by adding an event listener of type syncRequest for the given endpoint:
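A minimal sketch of such a registration (assuming the toolkit's OfflineSyncManager API; the '/Employees' scope and listener name are illustrative):

// sketch only: register an after-request sync listener for one endpoint
persistenceManager.getSyncManager().addEventListener(
    'syncRequest', afterRequestListener, '/Employees');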


This is the listener code. We get the response, read the change indicator value (it was updated on the backend and the new value is returned in the response) and store it locally on the client. Additionally, we maintain an array mapping each change indicator value to the updated row ID (in my next post I will explain why this is needed). The after-request listener must return a promise:
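A hedged sketch of such a listener (the response field names below are illustrative assumptions):

var changeIndicatorMap = [];

var afterRequestListener = function (event) {
    // read the response returned by the replayed request
    return event.response.json().then(function (payload) {
        // store the new change indicator locally, mapped to the updated row ID
        // ('EmployeeId' and 'ChangeIndicator' are illustrative field names)
        changeIndicatorMap.push({
            id: payload.EmployeeId,
            changeIndicator: payload.ChangeIndicator
        });
        // the listener must return a promise telling the sync manager to continue
        return Promise.resolve({action: 'continue'});
    });
};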


At runtime, when request sync is executed, you should see a message printed in the log showing the new change indicator value:


Double-check the payload to make sure the request was submitted with the previous value:


Check the response and you will see the new value for the change indicator (the same as in the after-request listener):


Sample code can be downloaded from GitHub repository.

Hacking for Skew

Jonathan Lewis - Fri, 2018-09-28 07:23

In my presentation to the UKOUG SIG yesterday “Struggling with Statistics – part 2” I described a problem that I wrote about a few months ago: when you join a fact table with a massively skewed distribution on one of the surrogate key columns to a dimension holding the unique list of keys and descriptions, a query against a description “loses” the skew. Here’s a demo of the problem that’s a little simpler than the one in the previous article.


rem
rem     Script:         bitmap_join_histogram.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2016
rem     Updated:        Sep 2018
rem 

execute dbms_random.seed(0)

create table facts
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual 
        connect by 
                level <= 1e4 --> comment to avoid wordpress format issue
)
select
        rownum                                  id,
        trunc(3 * abs(dbms_random.normal))      id_status,
        lpad(rownum,10,'0')                     v1,
        lpad('x',100,'x')                       padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 --> comment to avoid wordpress format issue
;

alter table facts add constraint fct_pk primary key(id);
alter table facts modify id_status not null;

create table statuses
as
select
        id,
        chr(65 + id)            status_code,
        rpad('x',100,'x')       description
from    (
        select
                distinct(id_status)             id
        from
                facts
        )
;

alter table statuses modify status_code not null;

alter table statuses add constraint sta_pk primary key (id);
alter table facts add constraint fct_fk_sta foreign key (id_status) references statuses(id);

create bitmap index fct_b1 on facts(id_status);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'facts',
                method_opt       => 'for all columns size skewonly'
        );

        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'statuses',
                method_opt       => 'for all columns size 254'
        );
end;
/

The definition of the facts.id_status column means I get a nice skewing effect on the data and this is what my data looks like:


select id_status, count(*) from facts group by id_status order by id_status;

 ID_STATUS   COUNT(*)
---------- ----------
         0      26050
         1      23595
         2      18995
         3      13415
         4       8382
         5       4960
         6       2643
         7       1202
         8        490
         9        194
        10         55
        11         17
        12          2

13 rows selected.

The statuses table translates the numbers 0 – 12 into the letters ‘A’ – ‘M’.

A quick check will show you that there are 55 rows for id_status = 10, which means 55 rows for status_code = ‘K’. So what happens when we write the two queries that should show us these results? I don’t really care what the execution plans are at this point, I’m interested only in the optimizer’s estimate of cardinality – so here are two queries, each followed by its execution plan:


select
        sum(fct.id)
from
        facts   fct
where
        fct.id_status = 10
;


-----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |        |     1 |     8 |    12   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE                      |        |     1 |     8 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| FACTS  |    55 |   440 |    12   (0)| 00:00:01 |
|   3 |    BITMAP CONVERSION TO ROWIDS       |        |       |       |            |          |
|*  4 |     BITMAP INDEX SINGLE VALUE        | FCT_B1 |       |       |            |          |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("FCT"."ID_STATUS"=10)


select
        sum(fct.id)
from
        facts           fct,
        statuses        sta
where
        fct.id_status = sta.id
and     sta.status_code = 'K'
;

--------------------------------------------------------------------------------
| Id  | Operation           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |          |     1 |    13 |   233   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE     |          |     1 |    13 |            |          |
|*  2 |   HASH JOIN         |          |  7692 | 99996 |   233   (4)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| STATUSES |     1 |     5 |     2   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| FACTS    |   100K|   781K|   229   (3)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("FCT"."ID_STATUS"="STA"."ID")
   3 - filter("STA"."STATUS_CODE"='K')

The estimated cardinality for the query against the base column reflects the value 55 from the histogram, but the estimated cardinality of the join is 7,692 – which is num_rows(facts) / num_distinct(id_status). Oracle has lost information about the skew. There is a way to get Oracle to produce a correct estimate (shown in the previous article) by rewriting the join as an IN subquery with the (undocumented) “precompute_subquery” hint, but there is an alternative which David Kurtz hypothesized in a conversation after the presentation was over (in fact someone else had described their use of exactly his suggested approach in a comment on a much older blog note about this problem): take the histogram from the id_status column on the facts table and “apply it” to the status_code column on the statuses table. In discussion with David I expressed the opinion that this probably shouldn’t work, and it wasn’t really a bit of fakery I’d want to apply to a production system – but we both tried it when we got home … with differing degrees of success.
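
For reference, the subquery rewrite with the undocumented hint looks something like this (a sketch only, not something to rely on in production):

select
        sum(fct.id)
from
        facts   fct
where
        fct.id_status in (
                select  /*+ precompute_subquery */
                        id
                from    statuses
                where   status_code = 'K'
        )
;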

Here’s a piece of code that I inserted into my script immediately after gathering stats on the statuses table. I’ll explain the details below as it makes a couple of assumptions that need to be pointed out:


declare

        srec                    dbms_stats.statrec;

        m_distcnt               number;
        m_density               number;
        m_nullcnt               number;
        m_avgclen               number;

        c_array                 dbms_stats.chararray;

begin

        dbms_stats.get_column_stats(
                ownname         => 'test_user',
                tabname         => 'facts',
                colname         => 'id_status',
                distcnt         => m_distcnt,
                density         => m_density,
                nullcnt         => m_nullcnt,
                srec            => srec,
                avgclen         => m_avgclen
        ); 

        srec.bkvals := dbms_stats.numarray();
        c_array     := dbms_stats.chararray();

        for r in (
                select  stt.status_code, count(*) ct
                from    facts fct, statuses stt
                where   stt.id = fct.id_status
                group by
                        stt.status_code
                order by
                        stt.status_code
        ) loop

                c_array.extend;
                c_array(c_array.count) := r.status_code;
                srec.bkvals.extend;
                srec.bkvals(srec.bkvals.count) := r.ct;

        end loop;

        dbms_stats.prepare_column_values(srec, c_array);

        dbms_stats.set_column_stats(
                ownname         => 'test_user',
                tabname         => 'statuses',
                colname         => 'status_code',
                distcnt         => m_distcnt,
                density         => m_density,
                nullcnt         => m_nullcnt,
                srec            => srec,
                avgclen         => m_avgclen
        ); 

end;
/

alter system flush shared_pool;

The code isn’t intended to be efficient, and I’ve been a bit lazy in setting up the content.

The first step gets the column stats from facts.id_status – and I know that I’ve got a frequency histogram that covers exactly the right number of distinct values on that column, so almost everything is set up correctly to copy the stats across to statuses.status_code, except that one column is numeric and the other is character, and (although I know it’s true because of the way I defined the status_code values) I need to ensure that the bucket values I write to status_code are arranged in alphabetic order of status_code.

So my second step is to run a query against the facts table to get the counts of status_code in alphabetical order and copy the results in order into a pair of arrays – one being a standalone array of the type defined in the dbms_stats package as an array of character types, the other being the array of bucket values that already exists in the stats record for the facts.id_status column that I’ve pulled into memory. (The bucket values array is stored as cumulative frequency values, so I do have to overwrite it with the simple frequency values at this point).

Finally I “prepare column values” and “set column stats” into the correct column, and the job is done. The flush of the shared pool is there to avoid any accidents of cursors surviving previous tests and causing confusion.
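
As a quick sanity check before running any queries, you can confirm that the faked histogram has landed in the data dictionary (a simple check, not part of the original script):

select  endpoint_number, endpoint_actual_value
from    user_tab_histograms
where   table_name  = 'STATUSES'
and     column_name = 'STATUS_CODE'
order by
        endpoint_number
;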

So what happens when I run a couple of queries with these faked stats in place ?

set autotrace traceonly explain

select  
        sum(fct.id)
from
        facts           fct,
        statuses        sta
where
        fct.id_status = sta.id
and     sta.status_code = 'K'
;


--------------------------------------------------------------------------------
| Id  | Operation           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |          |     1 |    14 |   233   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE     |          |     1 |    14 |            |          |
|*  2 |   HASH JOIN         |          |    55 |   770 |   233   (4)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| STATUSES |     1 |     6 |     2   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| FACTS    |   100K|   781K|   229   (3)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("FCT"."ID_STATUS"="STA"."ID")
   3 - filter("STA"."STATUS_CODE"='K')



select
        sum(fct.id)
from
        facts           fct,
        statuses        sta
where
        fct.id_status = sta.id
and     sta.status_code = 'D'
;


--------------------------------------------------------------------------------
| Id  | Operation           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |          |     1 |    14 |   233   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE     |          |     1 |    14 |            |          |
|*  2 |   HASH JOIN         |          | 13415 |   183K|   233   (4)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| STATUSES |     2 |    12 |     2   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| FACTS    |   100K|   781K|   229   (3)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("FCT"."ID_STATUS"="STA"."ID")
   3 - filter("STA"."STATUS_CODE"='D')

Querying for ‘K’ the prediction is 55 rows, querying for ‘D’ the prediction is for 13,415 rows – both estimates are exactly right. Wow !!!

Problem – that’s not what David Kurtz saw. In an email to me he said: “To my surprise, if I fake a histogram on the dimension table using the skew on the join column from the fact table I do get the correct number of rows calculated in the execution plan (provided it is less than the value if the histogram was not present)”. To make that concrete – when he queried for ‘K’ he got the correct prediction, when he queried for ‘D’ he was back to a prediction of 7,692. Looking at the report of the actual data, he’d get the right prediction for codes ‘F’ to ‘M’ and the wrong prediction for codes ‘A’ to ‘E’.

So what went wrong (and with whom) ?

When I run up new tests I tend to test Oracle versions in the order 12.1.0.2, then 11.2.0.4, then 12.2.0.1, then 18.3.0.0 – it’s the order of popularity that I currently see. So I was running my test on 12.1.0.2; David was running his test on 18.3.0.0. So I jumped a step and ran my test on 12.2.0.1: here are my results when querying for status_code = ‘D’:


--------------------------------------------------------------------------------
| Id  | Operation           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |          |     1 |    14 |   233   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE     |          |     1 |    14 |            |          |
|*  2 |   HASH JOIN         |          |  7692 |   105K|   233   (4)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| STATUSES |     1 |     6 |     2   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| FACTS    |   100K|   781K|   229   (3)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("FCT"."ID_STATUS"="STA"."ID")
   3 - filter("STA"."STATUS_CODE"='D')

As David has seen with 18.3, Oracle used the num_distinct to estimate the cardinality for ‘D’. (It still used the value indicated by the histogram for ‘K’.) When I set the optimizer_features_enable parameter back to 12.1.0.2 the cardinality estimate for ‘D’ went back to 13,415 – so it looks as if this is a deliberate piece of coding. 172 fix controls and 31 optimizer state parameters changed, but none of the more likely looking candidates had any effect when I tried testing them separately; possibly there’s a new sanity check when the number of rows recorded for the table is a long way off the total histogram bucket count.

I took a quick look at the 10053 trace in 12.2, with and without the change to optimizer_features_enable. The key difference was in the single table access path analysis – which didn’t give me any further clues.
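
For anyone who wants to reproduce the trace, the classic event syntax is something like:

alter session set events '10053 trace name context forever, level 1';
-- run the query, then:
alter session set events '10053 trace name context off';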

With optimizer_features_enable = 12.1.0.2
=========================================
Access path analysis for STATUSES
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for STATUSES[STA]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE

 kkecdn: Single Table Predicate:"STA"."STATUS_CODE"='K'
  Estimated selectivity: 5.5000e-04 , endpoint value predicate, col: #2

Access path analysis for STATUSES
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for STATUSES[STA]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE

 kkecdn: Single Table Predicate:"STA"."STATUS_CODE"='D'
  Estimated selectivity: 0.134150 , endpoint value predicate, col: #2


With optimizer_features_enable defaulting to 12.2.0.1
=====================================================
Access path analysis for STATUSES
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for STATUSES[STA]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE

 kkecdn: Single Table Predicate:"STA"."STATUS_CODE"='K'
  Estimated selectivity: 5.5000e-04 , endpoint value predicate, col: #2


Access path analysis for STATUSES
***************************************
SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for STATUSES[STA]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE

 kkecdn: Single Table Predicate:"STA"."STATUS_CODE"='D'
  Estimated selectivity: 0.076923 , endpoint value predicate, col: #2


Bottom line on this – there’s at least one person who already uses this method to work around the optimizer limitation; they need to be careful when they upgrade to 12.2 (or above) as the method no longer works in all cases.

 

 

[Troubleshooting] Forms Issue: FRM-92050 failed to connect to server: forms/servlet in EBS (R12)

Online Apps DBA - Fri, 2018-09-28 07:16

Oracle Forms can be configured in two modes: Servlet and Socket. In servlet mode, a Java servlet (called the Forms Listener servlet) manages the communication between the Forms Java client and the OracleAS Forms services. In socket mode, the desktop clients access the Forms server directly. What if you face the Oracle Forms issue ‘FRM-92050: Failed […]

The post [Troubleshooting] Forms Issue: FRM-92050 failed to connect to server: forms/servlet in EBS (R12) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

HOW TO RESOLVE THE ERROR :- ora-01406 fetched column value was truncated

Tom Kyte - Fri, 2018-09-28 01:26
HOW TO RESOLVE THE ERROR :- ora-01406 fetched column value was truncated
Categories: DBA Blogs

Outer join with row archival

Tom Kyte - Fri, 2018-09-28 01:26
LiveSQL-Link: https://livesql.oracle.com/apex/livesql/s/hblhxmq40jtini45sivyqj4le <code> create table test_table (id number(10),name varchar2(10)) row archival; insert into test_table (id,name) values (1,'name1'); insert into test_table (id...
Categories: DBA Blogs

Find Closest Matching Single Record

Tom Kyte - Fri, 2018-09-28 01:26
I want a Query that should fetch a single record based on match conditions: table looks like <code> create table SERVICES ( srvc VARCHAR2(10) not null, location VARCHAR2(10), grp VARCHAR2(10), empno VARCHAR2(10), pric...
Categories: DBA Blogs

Documentum – Checking warnings&errors from an xPlore full re-index

Yann Neuhaus - Fri, 2018-09-28 00:00

When working with xPlore as a Full Text Server (indexing), there are a few ways to perform a full re-index. You can potentially do it from the IndexAgent UI, from the Dsearch UI, from the file system (with an ids.txt file for example; that is usually for a “small” number of r_object_ids, so it’s probably not an ideal way) or from the docbase (mass-queue; not really a good way to do it either). Performing a full re-index from the xPlore Server directly will be faster because you remove a few layers where the Content Server asks for an index (the index queues) and expects an answer/result. That’s why in this blog I will only talk about the full re-index performed from the xPlore Server directly, and below I will use a full re-index from the IndexAgent UI. In each of these cases, there might be a few warnings or errors during the re-index, some of which might be normal (a password-protected file), while others might not be (a timeout because xPlore is heavily loaded).

The whole purpose of this blog is to show you how you can check these warnings/errors, because there is no information about them displayed directly in the UI; you need to go find that information manually. These warnings/errors aren’t shown in the index queues since they weren’t triggered from the docbase but from the xPlore Server directly.

So first of all, you need to trigger a re-index using the IndexAgent:

  • Open the IndexAgent UI (https://<hostname>:<ia_port>/IndexAgent)
  • Login with the installation owner’s account
  • Stop the IndexAgent if it is currently running in Normal mode and then launch a re-index operation

It should look like this (for xPlore 1.6):
IA1

In the above screenshot, the green represents the success count and the blue the filtered count. Once the re-index has completed, as shown above, you might have a few warnings/errors, but you don’t have any information about them, as I mentioned previously. To narrow down and facilitate the check of the warnings/errors, you need to know (approximately) the start and end time of the re-index operation: 2018-06-12 11:55 UTC to 2018-06-12 12:05 UTC for the above example. From that point, the analysis of the warnings/errors can be done in two main ways:

 

1. Using the Dsearch Admin

I will start with the way that most of you probably already know: using the Dsearch reports to see the errors/warnings. That’s not the fastest way, and clearly not the most fun way either, but it is an easy way for sure…

Accessing the reports from the Dsearch Admin:

  • Open the Dsearch Admin UI (https://<hostname>:<ds_port>/dsearchadmin)
  • Login with the admin account (or any other valid account with xPlore 1.6+)
  • Navigate to: Home > Diagnostic and Utilities > Reports
  • Select the “Document Processing Error Summary” report and set the following:
    • Start from: 2018-06-12 11:55
    • To: 2018-06-12 12:05
    • Domain name (optional): leave empty if you only have one IndexAgent, otherwise you can specify the domain name (usually the same name as the docbase)
  • Click on Run to get the report

At this point, you will have a report with the number of warnings/errors per type, meaning that you do not have any information about the documents yet; you only know the number of errors for each of the pre-defined error types (= Error Codes). For the above example, I had 8 warnings once the re-index was completed and I could see them all (seven warnings for ‘777’ and one warning for ‘770’):
IA2

Based on the information from this “Document Processing Error Summary” report, you can go deeper and find the details about the documents, but you can only do it for one type, one Error Code, at a time. Therefore, you will have to loop over all the Error Codes returned:

  • For each Error Code:
    • Select the “Document Processing Error Detail” report and set the following:
      • Start from: 2018-06-12 11:55
      • To: 2018-06-12 12:05
      • Domain name (optional): leave empty if you only have 1 IndexAgent, otherwise you can specify the domain name (usually the same name as the docbase)
      • Processing Error Code: Select the Error Code you want to see (either 777 or 770 in my case)
      • Number of Results to Display: Set here the number of items you want to display, 10, 20, …
    • Click on Run to get the report

And there you finally have the details about the documents with warnings/errors that weren’t indexed properly, for the Error Code you chose. In my case, I selected 770 so I have only 1 document:
IA3

You can export this list to Excel if you want, to do some processing on these items for example, but you will need to do it for all Error Codes and then merge the results.

 

2. Using the logs

In the above example, I used the IndexAgent to perform the re-index, so I will use the IndexAgent logs to find out what happened exactly. This section is really the main purpose of this blog, because I assume that most people are already using the Dsearch Admin reports but probably not the logs! If you want to script the check of warnings/errors after a re-index, or just if you want to play and have fun while doing your job, then this is what you need ;).

So let’s start simple: listing all errors and warnings and keeping only the lines that contain an r_object_id.

[xplore@full_text_server_01 ~]$ cd $JBOSS_HOME/server/DctmServer_Indexagent_DocBase1/logs/
[xplore@full_text_server_01 logs]$
[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | egrep --color "[ (<][0-9a-z]{16}[>) ]"

Indexagent_DocBase1.log:2018-06-12 11:55:26,456 WARN PrepWorkItem [full_text_server_01_9200_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f12345007f40e message: DOCUMENT_WARNING CPS Warning [Corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:00,752 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa97 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:00,752 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa98 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:00,754 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aa9f6 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:00,754 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9a message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa99 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9b message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9d message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:27,518 INFO ReindexBatch [Worker:Finalization Action:#6][DM_INDEX_AGENT_REINDEX_BATCH] Updating queue item 1b0f1234501327f0 with message= Incomplete batch. From a total of 45, 44 done, 0 filtered, 0 errors, and 8 warnings.
[xplore@full_text_server_01 logs]$

 

As you can see above, there is also one queue item (1b0f1234501327f0) listed because I kept everything that is 16 characters long with 0-9 or a-z. If you want, you can instead select only the r_object_ids starting with 09 to get all dm_documents (using this: “[ (<]09[0-9a-z]{14}[>) ]” ) or you can just remove the r_object_ids starting with 1b, which are the queue items.
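
For example, keeping only the dm_document ids would look like this:

[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | egrep --color "[ (<]09[0-9a-z]{14}[>) ]"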

In the first listing above, all the results are in the timeframe I expected, but it is possible that there are older or newer warnings/errors, so you might want to apply an additional filter on the date. Since I want everything from 11:55 to 12:05 on 12-Jun-2018, this is how I can do it (also removing the log file name) using a time regex:

[xplore@full_text_server_01 logs]$ time_regex="2018-06-12 11:5[5-9]|2018-06-12 12:0[0-5]"
[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | sed 's,^[^:]*:,,' \
                                   | egrep "${time_regex}" \
                                   | egrep --color "[ (<][0-9a-z]{16}[>) ]"

2018-06-12 11:55:26,456 WARN PrepWorkItem [full_text_server_01_9200_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f12345007f40e message: DOCUMENT_WARNING CPS Warning [Corrupt file].
2018-06-12 12:01:00,752 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa97 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:00,752 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa98 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:00,754 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aa9f6 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:00,754 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9a message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa99 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9b message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9d message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:27,518 INFO ReindexBatch [Worker:Finalization Action:#6][DM_INDEX_AGENT_REINDEX_BATCH] Updating queue item 1b0f1234501327f0 with message= Incomplete batch. From a total of 45, 44 done, 0 filtered, 0 errors, and 8 warnings.
[xplore@full_text_server_01 logs]$

 

Listing only the messages for each of these warnings/errors:

[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | sed 's,^[^:]*:,,' \
                                   | egrep "${time_regex}" \
                                   | egrep "[ (<][0-9a-z]{16}[>) ]" \
                                   | sed 's,^[^]]*],,' \
                                   | sort -u

[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f12345007f40e message: DOCUMENT_WARNING CPS Warning [Corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aa9f6 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa97 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa98 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa99 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9a message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9b message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9d message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_REINDEX_BATCH] Updating queue item 1b0f1234501327f0 with message= Incomplete batch. From a total of 45, 44 done, 0 filtered, 0 errors, and 1 warnings.
[xplore@full_text_server_01 logs]$

 

Listing only the r_object_ids (to resubmit them via the ids.txt file for example):

[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | sed 's,^[^:]*:,,' \
                                   | egrep "${time_regex}" \
                                   | egrep "[ (<][0-9a-z]{16}[>) ]" \
                                   | sed 's,.*[ (<]\([0-9a-z]\{16\}\)[>) ].*,\1,' \
                                   | sort -u \
                                   | grep -v "^1b"

090f12345007f40e
090f1234500aa9f6
090f1234500aaa97
090f1234500aaa98
090f1234500aaa99
090f1234500aaa9a
090f1234500aaa9b
090f1234500aaa9d
[xplore@full_text_server_01 logs]$

 

If you want to generate the iapi commands to resubmit them all:

[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | sed 's,^[^:]*:,,' \
                                   | egrep "${time_regex}" \
                                   | egrep "[ (<][0-9a-z]{16}[>) ]" \
                                   | sed 's,.*[ (<]\([0-9a-z]\{16\}\)[>) ].*,\1,' \
                                   | sort -u \
                                   | grep -v "^1b"
                                   | sed 's/.*/queue,c,&,dm_fulltext_index_user/'

queue,c,090f12345007f40e,dm_fulltext_index_user
queue,c,090f1234500aa9f6,dm_fulltext_index_user
queue,c,090f1234500aaa97,dm_fulltext_index_user
queue,c,090f1234500aaa98,dm_fulltext_index_user
queue,c,090f1234500aaa99,dm_fulltext_index_user
queue,c,090f1234500aaa9a,dm_fulltext_index_user
queue,c,090f1234500aaa9b,dm_fulltext_index_user
queue,c,090f1234500aaa9d,dm_fulltext_index_user
[xplore@full_text_server_01 logs]$
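
To replay these commands, you could for example append ' > /tmp/requeue.api' to the command above and feed the resulting file to iapi (the docbase name and credentials below are illustrative):

[xplore@full_text_server_01 logs]$ iapi DocBase1 -Udmadmin -Pxxx -R/tmp/requeue.api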

 

Finally, to group the warnings/errors per type:

[xplore@full_text_server_01 logs]$ echo; IFS=$'\n'; \
                                   for type in `egrep -i "err|warn" Indexagent_*.log* \
                                     | sed 's,^[^:]*:,,' \
                                     | egrep "${time_regex}" \
                                     | egrep "[ (<][0-9a-z]{16}[>) ]" \
                                     | sed 's,^[^]]*],,' \
                                     | sort -u \
                                     | sed 's,.*\(\[[^\[]*\]\).*,\1,' \
                                     | sort -u`;
                                   do
                                     echo "  --  Listing warnings/errors with the following messages: ${type}";
                                     egrep -i "err|warn" Indexagent_*.log* \
                                       | sed 's,^[^:]*:,,' \
                                       | egrep "${time_regex}" \
                                       | egrep "[ (<][0-9a-z]{16}[>) ]" \
                                       | sed 's,^[^]]*],,' \
                                       | sort -u \
                                       | grep -F "${type}";
                                     echo;
                                   done

  --  Listing warnings/errors with the following messages: [Corrupt file]
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f12345007f40e message: DOCUMENT_WARNING CPS Warning [Corrupt file].

  --  Listing warnings/errors with the following messages: [DM_INDEX_AGENT_REINDEX_BATCH]
[DM_INDEX_AGENT_REINDEX_BATCH] Updating queue item 1b0f1234501327f0 with message= Incomplete batch. From a total of 45, 44 done, 0 filtered, 0 errors, and 1 warnings.

  --  Listing warnings/errors with the following messages: [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file]
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aa9f6 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa97 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa98 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa99 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9a message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9b message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9d message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].

[xplore@full_text_server_01 logs]$
[xplore@full_text_server_01 logs]$ # Or to shorten a little bit the loop command:
[xplore@full_text_server_01 logs]$
[xplore@full_text_server_01 logs]$ command='egrep -i "err|warn" Indexagent_*.log* | sed 's,^[^:]*:,,'
                                   | egrep "${time_regex}"
                                   | egrep "[ (<][0-9a-z]{16}[>) ]"
                                   | sed 's,^[^]]*],,'
                                   | sort -u'
[xplore@full_text_server_01 logs]$
[xplore@full_text_server_01 logs]$ echo; IFS=$'\n'; \
                                   for type in `eval ${command} \
                                     | sed 's,.*\(\[[^\[]*\]\).*,\1,' \
                                     | sort -u`;
                                   do
                                     echo "  --  Listing warnings/errors with the following messages: ${type}";
                                     eval ${command} \
                                       | grep -F "${type}";
                                     echo;
                                   done

  --  Listing warnings/errors with the following messages: [Corrupt file]
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f12345007f40e message: DOCUMENT_WARNING CPS Warning [Corrupt file].

  --  Listing warnings/errors with the following messages: [DM_INDEX_AGENT_REINDEX_BATCH]
[DM_INDEX_AGENT_REINDEX_BATCH] Updating queue item 1b0f1234501327f0 with message= Incomplete batch. From a total of 45, 44 done, 0 filtered, 0 errors, and 1 warnings.

  --  Listing warnings/errors with the following messages: [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file]
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aa9f6 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa97 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa98 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa99 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9a message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9b message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9d message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].

[xplore@full_text_server_01 logs]$

 

So the above was a very simple example where a full re-index took only a few minutes because it is a very small repository. But what about a full re-index that takes days because there are several million documents? Well, the truth is that checking the logs might actually surprise you because it is usually more accurate than checking the Dsearch Admin. Yes, I said more accurate!

 

3. Accuracy of the Dsearch Admin vs the Logs

Let’s take another example with a repository containing a few TB of documents. A full re-index took 2.5 days to complete and in the commands below, I will check the status of the indexing for the 1st day: from 2018-09-19 07:00:00 UTC to 2018-09-20 06:59:59 UTC. Here is what the Dsearch Admin is giving you:

IA4

So based on this, you would expect 1 230 + 63 + 51 = 1 344 warnings/errors. So what about the logs then? I included below the DM_INDEX_AGENT_REINDEX_BATCH messages, which relate to the “1b” object_ids (item_id) I was talking about earlier; these aren’t document indexing errors, they are just batches:

[xplore@full_text_server_01 logs]$ time_regex="2018-09-19 0[7-9]|2018-09-19 [1-2][0-9]|2018-09-20 0[0-6]"
[xplore@full_text_server_01 logs]$ command='egrep -i "err|warn" Indexagent_*.log* | sed 's,^[^:]*:,,'
                                   | egrep "${time_regex}"
                                   | egrep "[ (<][0-9a-z]{16}[>) ]"
                                   | sed 's,^[^]]*],,'
                                   | sort -u'
[xplore@full_text_server_01 logs]$
[xplore@full_text_server_01 logs]$ echo; IFS=$'\n'; \
                                   for type in `eval ${command} \
                                     | sed 's,.*\(\[[^\[]*\]\).*,\1,' \
                                     | sort -u`;
                                   do
                                     echo "  --  Number of warnings/errors with the following messages: ${type}";
                                     eval ${command} \
                                       | grep -F "${type}" \
                                       | wc -l;
                                     echo;
                                   done

  --  Number of warnings/errors with the following messages: [Corrupt file]
51

  --  Number of warnings/errors with the following messages: [DM_INDEX_AGENT_REINDEX_BATCH]
293

  --  Number of warnings/errors with the following messages: [DM_STORAGE_E_BAD_TICKET]
7

  --  Number of warnings/errors with the following messages: [Password-protected or encrypted file]
63

  --  Number of warnings/errors with the following messages: [Unknown error during text extraction]
5

  --  Number of warnings/errors with the following messages: [Unknown error during text extraction(native code: 18, native msg: unknown error)]
1

  --  Number of warnings/errors with the following messages: [Unknown error during text extraction(native code: 257, native msg: handle is invalid)]
1053

  --  Number of warnings/errors with the following messages: [Unknown error during text extraction(native code: 30, native msg: out of memory)]
14

  --  Number of warnings/errors with the following messages: [Unknown error during text extraction(native code: 65534, native msg: unknown error)]
157

[xplore@full_text_server_01 logs]$

 

As you can see above, there is more granularity regarding the types of errors from the logs. Here are some key points in the comparison between the logs and the Dsearch Admin:

  1. In the Dsearch Admin, all messages that start with “Unknown error during text extraction” are considered a single error type (N° 1023). Therefore, from the logs, you can add them all up: 5 + 1 + 1 053 + 14 + 157 = 1 230, which is the number shown in the Dsearch Admin. You cannot separate them in the Dsearch Admin Error Summary report; only the Error Details report shows the full message, so you can then separate them, kind of…
  2. You find exactly the same counts of “Password-protected or encrypted file” (63) and “Corrupt file” (51) in the logs and in the Dsearch Admin, so no differences here
  3. You can see 7 “DM_STORAGE_E_BAD_TICKET” warnings/errors in the logs but none in the Dsearch Admin… Why is that? That’s because the Dsearch Admin does not have any Error Code for them, so these errors aren’t shown!

So as I was saying at the beginning of this blog, using the Dsearch Admin is very easy, but it’s not much fun and you might actually miss some information, while checking the logs is fun and you are sure that you won’t miss anything (these 7 DM_STORAGE_E_BAD_TICKET errors, for example)!

 

You could just as easily do the same thing in perl or awk; that’s just a question of preference. Anyway, you get the idea: working with the logs lets you do pretty much whatever you want, but it obviously requires some linux/scripting knowledge, while working with the Dsearch Admin is simple and easy, but you have to work with what OTX gives you and with the restrictions it has.
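
For instance, to get the per-type counts without the full loop, you can reuse the same “command” variable defined above and count with uniq:

[xplore@full_text_server_01 logs]$ eval ${command} \
                                   | sed 's,.*\(\[[^\[]*\]\).*,\1,' \
                                   | sort | uniq -c | sort -rn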

 

 

The post Documentum – Checking warnings&errors from an xPlore full re-index appeared first on Blog dbi services.

What's New In Oracle Utilities Application Framework V4 Whitepaper updated

Anthony Shorten - Thu, 2018-09-27 15:53

The What's New In Oracle Utilities Application Framework V4 whitepaper has been updated to reflect the latest changes implemented in Oracle Utilities Application Framework V4.3.0.6.0. It now contains the latest information and uses the new whitepaper format to make it easier to read.

This is one of the last updates to this whitepaper as it was designed to help older Oracle Utilities Application Framework V2.x customers to understand the changes to the Oracle Utilities Application Framework since those releases. Given most of the information is already in the release notes and the vast majority of customers are in the process of migration or have migrated successfully, the whitepaper has limited use after the next service pack or major release.

The whitepaper is available for download from What's New In Oracle Utilities Application Framework V4 (Doc Id: 1177265.1) from My Oracle Support.

Column Group Catalog

Jonathan Lewis - Thu, 2018-09-27 11:16

I seem to have written a number of articles about column groups – the rather special, and most useful, variant on extended stats. To make it as easy as possible to find the right article I’ve decided to produce a little catalogue (catalog) of all the relevant articles, with a little note about the topic each article covers. Some of the articles will link to others in the list, and there are a few items in the list from other blogs. There are also a few items which are the titles of drafts which have been hanging around for the last few years.

 

Oracle Will Seek Affirmation of Cost Award in the United States Supreme Court

Oracle Press Releases - Thu, 2018-09-27 10:32
Press Release
Oracle Will Seek Affirmation of Cost Award in the United States Supreme Court

Redwood Shores, Calif.—Sep 27, 2018

Today’s United States Supreme Court grant of certiorari in the Rimini Street case is limited to a narrow issue regarding an award of more than $12 million in litigation costs that Rimini Street was ordered to pay to Oracle. This award came after a jury found that Rimini Street infringed 93 Oracle copyrights, and this costs appeal has zero impact on the resolved issue that Rimini Street infringed 93 copyrights and the jury award of compensatory damages.

In 2016, the trial court found that Oracle was entitled to an award of attorneys’ fees and costs “because of Rimini’s repeated instances of copyright infringement and its significant litigation misconduct.” While the Supreme Court will take up this narrow costs issue as a matter of law, the underlying conduct is not in question.

“We look forward to addressing this costs issue in our nation’s highest court, and we believe that the Court should reject the attempt by Rimini Street—a dishonest, serial infringer—to avoid fully compensating Oracle for Rimini’s misconduct,” said Dorian Daley, Oracle’s Executive Vice President and General Counsel.

Contact Info
Deborah Hellinger
Oracle Corporate Communications
+1 212.508.7935
deborah.hellinger@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Deborah Hellinger

  • +1 212.508.7935

Renaming a RAC cluster

DBA Scripts and Articles - Thu, 2018-09-27 09:34

Introduction Renaming an Oracle RAC cluster is not an easy thing, unfortunately for me I had to do this today because the name chosen for the newly installed cluster was wrong. Oracle does not provide a simple command to do this and you have to go through a deconfiguration/reconfiguration of the whole cluster. Changing the … Continue reading Renaming a RAC cluster

The post Renaming a RAC cluster appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

create JSON from fields

Tom Kyte - Thu, 2018-09-27 07:06
Hi, I have relation 'resources' in database with 2 fields, 'id' and 'data'. 'id' eg. 25 and 'data' eg. <code>{"href":null,"id":"25","publicIdentifier":null,"description":null,"category":null,"validFor":null,"name":null,"lifecycleState":null,"type":"R...
Categories: DBA Blogs

DBMS_JOB.SUBMIT doesn't do anything in Oracle 12c (12.2.0.1.0)

Tom Kyte - Thu, 2018-09-27 07:06
Hi guys, I have a problem when I try to start a stored procedure with DBMS_JOB.SUBMIT. I have the package "dafneMultithread": <code> CREATE OR REPLACE PACKAGE DAFNE.dafneMultithread AS PROCEDURE start_job(p_procedure_name IN VARCHAR2...
Categories: DBA Blogs

Query all tables and all columns for a specific value

Tom Kyte - Thu, 2018-09-27 07:06
Hi Oracle Masters, I wonder if oracle is capable of returning the table name and column name based on a specific value only. e.g. Which table and column in oracle DB that has a value of 'ORACLE'? This is mainly for determining the mapping of th...
Categories: DBA Blogs

LISTAGG .. WITHIN GROUP (ORDER BY ..) is ignored in UPDATE .. RETURNING clause

Tom Kyte - Thu, 2018-09-27 07:06
Consider this script (which I've also put on Live SQL: https://livesql.oracle.com/apex/livesql/s/hbfb7x0c47ubwjpygs3klr3wj) <code>CREATE TABLE t ( id NUMBER(10) GENERATED ALWAYS AS IDENTITY NOT NULL PRIMARY KEY, category NUMBER(10) NOT NUL...
Categories: DBA Blogs

How do I find out if a particular trigger is executing or is being internally ignored from execution in Oracle 11gR2?

Tom Kyte - Thu, 2018-09-27 07:06
Hi Tom, I am a frequent reader of AskTom. Your in-depth and precise answers are always great. I would like to know if there is any SQL (based on Data Dictionary, v$ or x$ views) that I can use to find out if a particular db trigger is executin...
Categories: DBA Blogs

Cleveland Clinic and Oracle Shape the Future of Healthcare

Oracle Press Releases - Thu, 2018-09-27 07:00
Press Release
Cleveland Clinic and Oracle Shape the Future of Healthcare Cloud-based software platform advances critical business systems and streamlines operational processes

Cleveland, OH and Redwood Shores, Calif—Sep 27, 2018

Cleveland Clinic and Oracle are improving healthcare delivery and reducing costs through a more efficient process and workflow. The cloud-based healthcare platform is expected to roll out at Cleveland Clinic London in 2019 and expand to Cleveland Clinic’s domestic locations in 2020.

The innovative healthcare platform benefits back-office personnel, providers and patients. It is scalable for domestic and international growth providing enhanced financial forecasting, revenue management and data integration, which allows for more effective and efficient decisions regarding supply procurement and inventory management.

“Cleveland Clinic has both an opportunity and an obligation to shape the future of healthcare while fostering a culture and environment to provide care in an exceptional way,” said William M. Peacock III, Cleveland Clinic Chief of Operations. “Our work with Oracle’s cloud technologies is a testament to our commitment to evolving our IT and finance platform, positioning us to support anticipated growth both domestically and internationally. Moving to the cloud will be critical in meeting these goals.”

Healthcare is a key industry for Oracle and with Oracle Enterprise Resource Planning (ERP) Cloud and Oracle Supply Chain Management (SCM) Cloud, Cleveland Clinic aims to transform its finance function by enabling increased productivity and lower costs. In addition, the health system will increase the efficiency of operations by taking advantage of real-time insights and automated supply and demand planning.

 

“Our customers play an important role in shaping our products, inspiring innovative features and defining industry-specific best practices for Oracle Cloud Applications,” said Steve Miranda, executive vice president, Applications Development, Oracle. “Cleveland Clinic is an innovative healthcare organization that recognizes the potential of the cloud to enhance productivity. Our collaboration will create a future-proof business platform for Cleveland Clinic’s global organization.”

Contact Info
Hope Buggey
Cleveland Clinic
216.444.8853
buggeyh@ccf.org
Bill Rundle
Oracle
415.990.3348
bill.rundle@oracle.com
About Cleveland Clinic

Cleveland Clinic is a nonprofit multispecialty academic medical center that integrates clinical and hospital care with research and education. Located in Cleveland, Ohio, it was founded in 1921 by four renowned physicians with a vision of providing outstanding patient care based upon the principles of cooperation, compassion and innovation. Cleveland Clinic has pioneered many medical breakthroughs, including coronary artery bypass surgery and the first face transplant in the United States. U.S. News & World Report consistently names Cleveland Clinic as one of the nation’s best hospitals in its annual “America’s Best Hospitals” survey. Among Cleveland Clinic’s 52,000 employees are more than 3,600 full-time salaried physicians and researchers and 14,000 nurses, representing 140 medical specialties and subspecialties. Cleveland Clinic’s health system includes a 165-acre main campus near downtown Cleveland, 11 regional hospitals, more than 150 northern Ohio outpatient locations – including 18 full-service family health centers and three health and wellness centers – and locations in Weston, Fla.; Las Vegas, Nev.; Toronto, Canada; Abu Dhabi, UAE; and London, England. In 2017, there were 7.6 million outpatient visits, 229,000 hospital admissions and 207,000 surgical cases throughout Cleveland Clinic’s health system. Patients came for treatment from every state and 185 countries. Visit us at clevelandclinic.org. Follow us at twitter.com/ClevelandClinic. News and resources available at newsroom.clevelandclinic.org.

Editor’s Note: Cleveland Clinic News Service is available to provide broadcast-quality interviews and B-roll upon request.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Talk to a Press Contact

Hope Buggey

  • 216.444.8853

Bill Rundle

  • 415.990.3348

[BLOG] Oracle WebLogic: Data Sources (JDBC) & JMS

Online Apps DBA - Thu, 2018-09-27 02:36

JDBC (Java Database Connectivity) is a Java API used to connect to a database through a pool of JDBC connections. JMS means Java Message Service, which accepts messages from a producer application and delivers them to the consumer application. To learn about Oracle WebLogic Data Sources in detail, visit https://k21academy.com/weblogic19 where we have […]

The post [BLOG] Oracle WebLogic: Data Sources (JDBC) & JMS appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Node.js node-oracledb: "[POLL] Most Wanted Enhancement"

Christopher Jones - Wed, 2018-09-26 22:08

Danilo Silva, a community member much involved with node-oracledb development, has opened a poll to identify the most requested node-oracledb features.  This will help prioritize our development efforts. 

You can vote here!

Danilo most recently contributed connection pool draining support to node-oracledb, which will be available with node-oracledb 3.0.
