Feed aggregator

Oracle Achieves DISA Impact Level 5 Provisional Authorization for Oracle Cloud Infrastructure

Oracle Press Releases - Tue, 2019-12-17 07:00
Press Release
Oracle Achieves DISA Impact Level 5 Provisional Authorization for Oracle Cloud Infrastructure
U.S. Federal and Department of Defense customers now can benefit from the power of Oracle’s Generation 2 Cloud infrastructure

Redwood Shores, Calif.—Dec 17, 2019

Following closely on the heels of Oracle achieving FedRAMP authorization, Oracle today announced three new government regions: Ashburn, Virginia; Phoenix, Arizona; and Chicago, Illinois. These regions have achieved DISA Impact Level 5 provisional authorization (IL5 PATO), providing a cloud environment where U.S. Department of Defense (DoD) and other Federal customers can harness the power of Oracle Cloud to unlock innovation, improve mission performance, and enhance service delivery.  

This is an important milestone in Oracle’s journey to deliver innovative cloud services with consistent high performance and exceptional security to the entire U.S. government. In 2020, Oracle plans to bring additional full-scale Gen 2 Cloud Regions online to support the classified missions of the US Government.

“U.S. DoD and other Federal customers are continually looking for new, secure ways to improve citizen services and keep our nation safe,” said Don Johnson, executive vice president, Oracle Cloud Infrastructure. “Oracle’s Generation 2 Cloud was engineered to deliver highly secure, high-performance, cost effective infrastructure that helps government organizations address the needs of the nation today and tomorrow.” 

Oracle has been a long-standing strategic technology partner of the U.S. government. Today, more than 500 government organizations take advantage of Oracle’s industry-leading technologies and superior performance. State, local, and federal government customers using Oracle to modernize their technology include Defense Manpower Data Center (DMDC) and the U.S. Air Force.

The Department of Defense recently awarded a contract to Oracle for its Oracle Cloud Infrastructure to support a large portion of the enterprise human resource portfolio. The award modernizes existing infrastructure and will assist the Defense Manpower Data Center in providing necessary human resource services and capabilities to its military members, veterans and their families.

With Oracle Cloud Infrastructure, customers benefit from best-in-class security, consistent high performance, simple predictable pricing, and the tools and expertise needed to bring enterprise workloads to cloud quickly and efficiently. In addition, Oracle now provides organizations with a complete set of solutions for any high performance computing (HPC) workload, enabling businesses to capitalize on the benefits of modern cloud computing while enjoying performance comparable to on-premises at a lower cost.

“Oracle Cloud Infrastructure brings incredible performance, flexibility, security, and cost-savings benefits to our federal civilian, commercial and higher education customers,” said Paul Seifert, Federal Sector President, Mythics, Inc. “Mythics’ DoD customers will now be able to leverage Oracle Cloud to better serve the unique requirements of the DoD at home and abroad.”

Oracle Cloud Infrastructure has achieved certifications and attestations for key security standards and compliance mandates. These independent third-party assurance programs demonstrate Oracle’s commitment to security and to meeting the needs of the public sector. These IL5 PATO government regions will launch with initial Oracle services including VM and Bare Metal Compute (CPU and GPU), Storage (including archive, block, and object storage), Database, Identity and Access Management, Key Management Service, Load Balancer, and Exadata Cloud Service.

“We’re excited to see the Oracle Cloud Infrastructure achieve DISA Impact Level 5 provisional authorization, as it provides additional options that our federal government clients—especially those seeking to migrate large and complex Oracle-based solutions to the cloud—can leverage,” said Anthony Flake, managing director of Accenture Federal Services’ Oracle practice.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Driver Assistance Leader Hits the Road with Oracle

Oracle Press Releases - Tue, 2019-12-17 07:00
Press Release
Driver Assistance Leader Hits the Road with Oracle
Agero supports nearly half of passenger vehicles on the road with Oracle SD-WAN

Redwood Shores, Calif.—Dec 17, 2019

Agero, a leader in the digitalization of white-labeled driver assistance services, is using Oracle enterprise communications technologies to safeguard drivers in more than 115 million vehicles in the U.S. When a driver is stranded roadside, time is of the essence. With Oracle, the company has achieved continuous contact center uptime, so customers can get the assistance they need and get back on the road as quickly as possible.
 
For 45 years, Agero has provided smart solutions for its clients and their drivers. Today, Agero’s roadside assistance, accident management and consumer affairs services are leveraged in the U.S. by drivers in two-thirds of new passenger vehicles, policyholders from nine of the top 15 auto insurance carriers and customers of a variety of other diversified clients. To maximize the quality of its customers’ experiences and to eliminate communication failures or downtime, Agero needed predictable enterprise communications performance and real-time application support. Agero selected Oracle for its failsafe reliability, security and interoperability.
 
“When drivers are stranded on the road in the winter, creating not only an uncomfortable situation but also a health and safety issue, it is crucial for our customers to get directly in contact with an agent,” said Robert Sullivan, vice president, technology and shared services, Agero. “Oracle has the best of breed technology in the Oracle SD-WAN and Enterprise Session Border Controllers, which have helped us to deliver high availability, reliability and quality of experience for our customers.”
 
Since working with Oracle and Presidio to implement the Oracle SD-WAN solution, Agero has drastically reduced downtime and created special routing options for sites without substantial circuit diversity. This advanced “always-on” performance is invaluable to Agero as the company processes more than 12 million roadside and emergency support requests per year.
 
“Agero is reinventing how driver assistance is delivered, while elevating the consumer experience in often dire scenarios. As enterprises demand flexible WAN solutions supporting shifting business requirements, Oracle is increasing and leveraging bandwidth for affordable and trusted WAN connectivity, anywhere and whenever it’s needed,” said Andrew Morawski, senior vice president and general manager, Oracle Communications - Networks.
 
In addition to the Oracle SD-WAN, Agero deployed the Oracle 1100 Enterprise Session Border Controller (E-SBC), the Oracle 3900 E-SBC, and the Oracle Communications Converged Application Server (OCCAS) to protect its network and contact center from external threats.
 
To learn more about Oracle Communications industry solutions, visit Oracle Communications on LinkedIn, or join the conversation on Twitter @OracleComms.
Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Brent Curry
Hill+Knowlton Strategies
+1.312.255.3086
brent.curry@hkstrategies.com
About Agero

Agero’s mission is to safeguard consumers on the road through a unique combination of platform intelligence and human powered solutions, strengthening our clients’ relationships with their drivers. We are a leading provider of driving solutions, including roadside assistance, accident management, consumer affairs and telematics. The company protects 115 million vehicles in partnership with leading automobile manufacturers, insurance carriers and other diversified clients. Managing one of the largest national networks of service providers, Agero responds to more than 12 million requests annually for assistance. Agero, a member company of The Cross Country Group, is headquartered in Medford, Mass., with operations throughout North America. To learn more, visit www.agero.com and follow on Twitter @AgeroNews.

About Oracle Communications

Oracle Communications provides integrated communications and cloud solutions for Service Providers and Enterprises to accelerate their digital transformation journey in a communications-driven world from network evolution to digital business to customer experience. www.oracle.com/communications

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Oracle EBS Cloud Manager: New Release (19.3.1) Is Now Available

Online Apps DBA - Tue, 2019-12-17 02:31

[New Update] Oracle EBS Cloud Manager (EBSCM) version 19.3.1 Now Available: How To Upgrade. EBS Cloud Manager is used to build, manage & migrate EBS (R12) on Oracle Cloud. The latest version, 19.3.1, of EBS Cloud Manager is now available. Check out: https://k21academy.com/ebscloud34 The blog post discusses: ✦ What is EBS Cloud Manager ✦ What’s new in […]

The post Oracle EBS Cloud Manager: New Release (19.3.1) Is Now Available appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Power BI Report Server – Kerberos Advanced configuration

Yann Neuhaus - Mon, 2019-12-16 12:21
Introduction

Following the basic configuration explained in a previous blog (Link), I describe here how to handle more advanced configurations for some specific cases that are nevertheless often met by customers. This only complements what has been described in the previous blog, covering specific situations that need additional or alternative configuration.

Configuration using HTTPS protocol with DNS alias

If you are requested to secure the access to your Power BI Report Server, the solution is of course to use the HTTPS protocol, but first you will need a server certificate installed on your Power BI Report Server host. The aim of this blog is not to explain how to create certificates, but just to give you the trick to make them compliant with Kerberos delegation when using a DNS alias for your server.
In that case, the URL used to access your Power BI Report Server portal is based on a DNS alias, so you have to generate a certificate with a CN matching your URL, but do not forget to also specify an alternative name of type DNS matching your DNS alias.
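As an illustration, here is a minimal PowerShell sketch that creates a certificate covering both the DNS alias and the physical host name. The names are hypothetical, and a self-signed certificate is only suitable for testing – for production, request the certificate from your CA with the same subject and alternative names:

# Hypothetical names: the DNS alias first (it becomes the subject CN),
# then the physical host name; both end up in the Subject Alternative Name.
# Self-signed for testing only - use a CA-issued certificate in production.
New-SelfSignedCertificate `
    -DnsName "PowerBIRS.dbi-test.local", "pbirshost01.dbi-test.local" `
    -CertStoreLocation "Cert:\LocalMachine\My" `
    -FriendlyName "Power BI Report Server"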

After having your certificate issued and installed on your server, you can use it for your Web Service URL and your Web Portal URL using the Report Server Configuration Manager.

Finally, do not forget to create the HTTP service SPN for the Power BI Report Server service account, using your certificate URL.

SetSpn -a http/PowerBIRS.dbi-test.local PBIRSServiceAccount
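You can then verify that the SPN is registered and that no duplicate exists (duplicate SPNs are a classic cause of Kerberos failures), reusing the same alias and service account as above:

SetSpn -q http/PowerBIRS.dbi-test.local
SetSpn -l PBIRSServiceAccount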

 

Using Data Sources on SQL Server AlwaysOn with read-only ApplicationIntent

If your report data sources are hosted on SQL Server databases participating in availability groups, with replicas set as read-only, you probably want your reporting system to read from those replicas in order to minimize the load on your primary node.
To force the reports to query the data from the read-only replica, the parameter ApplicationIntent=ReadOnly is specified in the connection string of the data source (Data Source=;Initial Catalog=;ApplicationIntent=ReadOnly;MultiSubnetFailover=True;Encrypt=True); a complete example follows this paragraph.
In this case a redirection is made by the SQL Server listener to the dedicated read-only node.
In this context, if you use integrated security with Kerberos, you have to deal with the SPN of the read-only node, since reading will be redirected to it.
Additional SPNs must therefore be created for each SQL Server Database Engine instance name (or DNS alias) participating in the targeted availability group. I recommend creating all the involved SPNs to cover every case where the roles of your replicas change.
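For example, filling the template with the hypothetical listener name used in the SPN list below (LiDBSrc001) and an equally hypothetical database name, the complete connection string might look like this:

Data Source=LiDBSrc001;Initial Catalog=ReportData;ApplicationIntent=ReadOnly;MultiSubnetFailover=True;Encrypt=True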
To illustrate this case, see below the SPNs created for the SQL Server service account:

SetSPN -a MSSQLSvc/LiDBSrc001:1433 svc_srvsql
SetSPN -a MSSQLSvc/LiDBSrc001.dbi-test.local:1433 svc_srvsql
SetSPN -a MSSQLSvc/DBSrcIn01r1 svc_srvsql
SetSPN -a MSSQLSvc/DBSrcIn01r1.dbi-test.local:1433 svc_srvsql
SetSPN -a MSSQLSvc/DBSrcIn01r2 svc_srvsql
SetSPN -a MSSQLSvc/DBSrcIn01r2.dbi-test.local:1433 svc_srvsql
SetSPN -a MSSQLSvc/DBSrcIn01r3 svc_srvsql
SetSPN -a MSSQLSvc/DBSrcIn01r3.dbi-test.local:1433 svc_srvsql
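A quick way to double-check the registrations is to list everything recorded for the service account:

SetSPN -l svc_srvsql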

If you are using constrained delegation, do not forget to add all these services to the list of services your Power BI Report Server service account is trusted to delegate to.
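If you prefer scripting this rather than using the Delegation tab in Active Directory Users and Computers, here is a minimal PowerShell sketch – it assumes the RSAT ActiveDirectory module and reuses the hypothetical account and SPN names from above (extend the list with all the replica SPNs):

# Assumes the ActiveDirectory PowerShell module is available.
Import-Module ActiveDirectory

# Allow the Power BI Report Server service account to delegate
# to the SQL Server services (add every SPN created above).
Set-ADUser -Identity PBIRSServiceAccount -Add @{
    'msDS-AllowedToDelegateTo' = @(
        'MSSQLSvc/LiDBSrc001:1433',
        'MSSQLSvc/LiDBSrc001.dbi-test.local:1433'
    )
}

# Verify the resulting delegation list.
Get-ADUser PBIRSServiceAccount -Properties msDS-AllowedToDelegateTo |
    Select-Object -ExpandProperty msDS-AllowedToDelegateTo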

The article Power BI Report Server – Kerberos Advanced configuration appeared first on Blog dbi services.

IOT Bug

Jonathan Lewis - Mon, 2019-12-16 09:58

Here’s a worrying bug that showed up a couple of days ago on the Oracle-L mailing list. It’s a problem that I’ve tested against 12.2.0.1 and 19.3.0.0 – it may be present on earlier versions of Oracle. One of the nastiest things about it is that you might not notice it until you get an “out of space” error from the operating system. You won’t get any wrong results from it, but it may well be adding an undesirable performance overhead.

Basically it seems that (under some circumstances, at least) Oracle is setting the “block guess” component of the secondary index on Index Organized Tables (IOTs) to point to blocks in the overflow segment instead of blocks in the primary key segment. As a result, when you execute a query that accesses the IOT through the secondary index and has to do reads from disc to satisfy the query – your session goes through the following steps:

  • Identify index entry from secondary index – acquire “block guess”
  • Read indicated block and discover the object number on the block is wrong, and the block type is wrong
  • Write a (silent) ORA-01410 error and do a block dump into the trace file
  • Use the “logical rowid” from the secondary index (i.e. the stored primary key value) to access the primary key index by key value

So your query runs to completion and you get the right result because Oracle eventually gets there using the primary key component stored in the secondary index, but it always starts with the guess [see sidebar], and for every block you read into the cache because of the guess you get a dump to the trace file.

Here’s a little code to demonstrate. The problem with this code is that everything appears to work perfectly; you have to be able to find the trace file for your session to see what’s gone wrong. First we create some data – this code is largely copied from the original posting on Oracle-L, with a few minor changes:


rem
rem     Script:         iot_bug_12c.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem
rem     Notes
rem     The OP had tested on 19.5.0.0 to get the same effect, see:
rem     //www.freelists.org/post/oracle-l/IOT-cannot-get-valid-consensus-bug-or-unexplained-behavio
rem

drop table randomload purge;

create table randomload(
        roll number,
        name varchar2(40),
        mark1 number,
        mark2 number,
        mark3 number,
        mark4 number,
        mark5 number,
        mark6 number,
        primary key (roll)
) 
organization index 
including mark3 overflow
;

create index randomload_idx on randomload(mark6);

insert into randomload 
select 
        rownum, 
        dbms_random.string(0,40) name, 
        round(dbms_random.value(0,100)), 
        round(dbms_random.value(0,100)), 
        round(dbms_random.value(0,100)), 
        round(dbms_random.value(0,100)), 
        round(dbms_random.value(0,100)), 
        round(dbms_random.value(0,10000)) 
from 
        dual 
connect by 
        level < 1e5 -- > comment to avoid wordpress format issue
;

commit;

exec dbms_stats.gather_table_stats(null,'randomload', cascade=>true);

prompt  ==================================================
prompt  pct_direct_access should be 100 for randomload_idx
prompt  ==================================================

select 
        table_name, index_name, num_rows, pct_direct_access, iot_redundant_pkey_elim  
from 
        user_indexes
where
        table_name = 'RANDOMLOAD'
;

It should take just a few seconds to build the data set and you should check that the pct_direct_access is 100 for the index called randomload_idx.

The next step is to run a query that will do an index range scan on the secondary index.

 
column mark6 new_value m_6

select 
        mark6, count(*) 
from
        randomload 
group by 
        mark6
order by 
        count(*)
fetch first 5 rows only
;

alter system flush buffer_cache;
alter session set events '10046 trace name context forever, level 8';
set serveroutput off

select avg(mark3) 
from 
        randomload 
where 
        mark6 = &m_6
;

select * from table(dbms_xplan.display_cursor);

alter session set events '10046 trace name context off';
set serveroutput on

I’ve started by selecting one of the least frequently occurring values of mark6 (a column I know to be in the overflow); then I’ve flushed the buffer cache so that any access I make to the data will have to start with disk reads (the original poster suggested restarting the database at this point, but that’s not necessary).

Then I’ve enabled sql_trace to show wait states (to capture details of which blocks were read and which objects they belong to), and I’ve run a query for mark3 (a column that is in the primary key (TOP) segment of the IOT) and pulled its execution plan from memory to check that the query did use a range scan of the secondary index. Here’s the plan:

----------------------------------------------------------------------------------------
| Id  | Operation          | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                   |       |       |    11 (100)|          |
|   1 |  SORT AGGREGATE    |                   |     1 |     7 |            |          |
|*  2 |   INDEX UNIQUE SCAN| SYS_IOT_TOP_77298 |    10 |    70 |    11   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN| RANDOMLOAD_IDX    |    10 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("MARK6"=1316)
   3 - access("MARK6"=1316)

As you can see the plan shows what we are hoping to see – an index range scan of the secondary index that lets it follow up with a unique scan of the primary key segment. It’s just a little odd that the access predicate reported for operation 2 (unique scan of TOP) suggests that the access is on a column that isn’t in the primary key and isn’t even in the TOP section.

So the query works and gives the right answer. But what do we find in the trace directory ? If you’re running 12c (possibly only 12.2), each time the error occurs the following pattern of information will be written to the alert log (it didn’t appear in 19.3)


ORCL(3):Hex dump of (file 22, block 16747) in trace file /u01/app/oracle/diag/rdbms/orcl12c/orcl12c/trace/orcl12c_ora_7888.trc
ORCL(3):
ORCL(3):Corrupt block relative dba: 0x0580416b (file 22, block 16747)
ORCL(3):Bad header found during multiblock buffer read (logical check)
ORCL(3):Data in bad block:
ORCL(3): type: 6 format: 2 rdba: 0x0580416b
ORCL(3): last change scn: 0x0000.0b86.0e36484c seq: 0x1 flg: 0x06
ORCL(3): spare3: 0x0
ORCL(3): consistency value in tail: 0x484c0601
ORCL(3): check value in block header: 0x4408
ORCL(3): computed block checksum: 0x0
ORCL(3):

And the following pattern of information is written to the trace file [Update: a follow-up test on 11.2.0.4 suggests that the basic “wrong block address” error also happens in that version of Oracle, but doesn’t result in a dump to the trace file]:


kcbzibmlt:: encounter logical error ORA-1410, try re-reading from other mirror..
cursor valid? 1 makecr 0 line 15461 ds_blk (22, 16747) bh_blk (22, 16747)
kcbds 0x7ff1ca8c0b30: pdb 3, tsn 8, rdba 0x0580416b, afn 22, objd 135348, cls 1, tidflg 0x8 0x80 0x0
    dsflg 0x108000, dsflg2 0x0, lobid 0x0:0, cnt 0, addr 0x0, exf 0x10a60af0, dx 0x0, ctx 0
    whr: 'qeilwh03: qeilbk'
env [0x7ff1ca8e3e54]: (scn: 0x00000b860e364893   xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  statement num=0  parent xid:  0x0000.000.00000000  st-scn: 0x0000000000000000  hi-scn: 0x0000000000000000  ma-scn: 0x00000b860e364879  flg: 0x00000660)
BH (0xb1fd6278) file#: 22 rdba: 0x0580416b (22/16747) class: 1 ba: 0xb1c34000
  set: 10 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 763,14
  dbwrid: 0 obj: 135348 objn: 135348 tsn: [3/8] afn: 22 hint: f
  hash: [0x9eff0528,0x77cff808] lru: [0xb1fd2578,0x9ff84658]
  ckptq: [NULL] fileq: [NULL]
  objq: [0xb6f654c0,0x9ff84680] objaq: [0xb6f654d0,0x9ff84690]
  use: [0x77b78128,0x77b78128] wait: [NULL]
  st: READING md: EXCL tch: 0
  flags: only_sequential_access
  Printing buffer operation history (latest change first):
  cnt: 5
  01. sid:10 L122:zgb:set:st          02. sid:10 L830:olq1:clr:WRT+CKT
  03. sid:10 L951:zgb:lnk:objq        04. sid:10 L372:zgb:set:MEXCL
  05. sid:10 L123:zgb:no:FEN          06. sid:10 L083:zgb:ent:fn
  07. sid:08 L192:kcbbic2:bic:FBD     08. sid:08 L191:kcbbic2:bic:FBW
  09. sid:08 L604:bic2:bis:REU        10. sid:08 L190:kcbbic2:bic:FAW
  11. sid:08 L602:bic1_int:bis:FWC    12. sid:08 L822:bic1_int:ent:rtn
  13. sid:08 L832:oswmqbg1:clr:WRT    14. sid:08 L930:kubc:sw:mq
  15. sid:08 L913:bxsv:sw:objq        16. sid:08 L608:bxsv:bis:FBW
Hex dump of (file 22, block 16747)

   ... etc.

Corrupt block relative dba: 0x0580416b (file 22, block 16747)
Bad header found during multiblock buffer read (logical check)
Data in bad block:
 type: 6 format: 2 rdba: 0x0580416b
 last change scn: 0x0000.0b86.0e36484c seq: 0x1 flg: 0x06
 spare3: 0x0
 consistency value in tail: 0x484c0601
 check value in block header: 0x4408
 computed block checksum: 0x0
TRCMIR:kcf_reread     :start:  16747:0:/u01/app/oracle/oradata/orcl12c/orcl/test_8k_assm.dbf
TRCMIR:kcf_reread     :done :  16747:0:/u01/app/oracle/oradata/orcl12c/orcl/test_8k_assm.dbf

The nasty bit, of course, is the bit I’ve removed and replaced with just “etc.”: it’s a complete block dump (raw and symbolic) which in my example was something like 500 lines and 35KB in size.

It’s not immediately obvious exactly what’s going on and why, but the 10046 trace helps a little. From another run of the test (on 19.3.0.0) I got the following combination of details – which is an extract showing the bit of the wait state trace leading into the start of the first block dump:

WAIT #140478118667016: nam='db file scattered read' ela= 108 file#=13 block#=256 blocks=32 obj#=77313 tim=103574529210
WAIT #140478118667016: nam='db file scattered read' ela= 2236 file#=13 block#=640 blocks=32 obj#=77313 tim=103574531549
WAIT #140478118667016: nam='db file scattered read' ela= 534 file#=13 block#=212 blocks=32 obj#=77312 tim=103574532257
kcbzibmlt: encounter logical error ORA-1410, try re-reading from other mirror..
cursor valid? 1 warm_up abort 0 makecr 0 line 16082 ds_blk (13, 212) bh_blk (13, 212)

Object 77313 is the secondary index, object 77312 is the primary key index (IOT_TOP). It may seem a little odd that Oracle is using db file scattered reads of 32 blocks to read the indexes, but this is a side effect of flushing the buffer cache – Oracle may decide to prefetch many extra blocks of an object to “warm up” the cache just after instance startup or a flush of the buffer cache. The thing I want to check, though, is what’s wrong with the blocks that Oracle read from object 77312:


alter system dump datafile 13 block min 212 block max 243;

BH (0xc8f68e68) file#: 13 rdba: 0x034000d4 (13/212) class: 1 ba: 0xc8266000
  set: 10 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15
  dbwrid: 0 obj: 77311 objn: 77311 tsn: [3/6] afn: 13 hint: f

BH (0xa7fd6c38) file#: 13 rdba: 0x034000d4 (13/212) class: 1 ba: 0xa7c2a000
  set: 12 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15
  dbwrid: 0 obj: 77311 objn: 77311 tsn: [3/6] afn: 13 hint: f

BH (0xa5f75780) file#: 13 rdba: 0x034000d5 (13/213) class: 0 ba: 0xa5384000
  set: 11 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15
  dbwrid: 0 obj: 77311 objn: 77311 tsn: [3/6] afn: 13 hint: f

BH (0xdafe9220) file#: 13 rdba: 0x034000d5 (13/213) class: 1 ba: 0xdadcc000
  set: 9 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15
  dbwrid: 0 obj: 77311 objn: 77311 tsn: [3/6] afn: 13 hint: f

I’ve reported the first few lines of the symbolic dump for the first few blocks of the resulting trace file. Look at the third line of each group of three BH lines: it’s reporting object 77311 (the overflow segment), not 77312 (the TOP segment). And every single block reported in the db file scattered read of 32 blocks for object 77312 reports itself, when dumped, as being part of object 77311. And that’s possibly the immediate cause of the ORA-01410.

We can take the investigation a little further by dumping a leaf block or two from the secondary index.


alter session set events 'immediate trace name treedump level 77313';

----- begin tree dump
branch: 0x3400104 54526212 (0: nrow: 542, level: 1)
   leaf: 0x340010d 54526221 (-1: row:278.278 avs:2479)
   leaf: 0x340075e 54527838 (0: row:132.132 avs:5372)
   leaf: 0x34005fb 54527483 (1: row:41.41 avs:7185)

alter system dump datafile 13 block 1886   -- leaf: 0x340075e

BH (0xd5f5d090) file#: 13 rdba: 0x0340075e (13/1886) class: 1 ba: 0xd5158000
  set: 9 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15
  dbwrid: 0 obj: 77313 objn: 77313 tsn: [3/6] afn: 13 hint: f
...
row#6[5796] flag: K------, lock: 0, len=18
col 0; len 2; (2):  c1 1d
col 1; len 4; (4):  c3 07 41 5c
tl: 8 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [ 4]  03 40 05 75

I’ve done a treedump of the secondary index and picked a leaf block address from the treedump and dumped that leaf block, and from that leaf block I’ve extracted one index entry to show you the three components: the key value (c1 1d), the primary key for the row (c3 07 41 5c), and the block guess (03 40 05 75). Read the block guess as a 4 byte hex number, and it translates to file 13, block 1397 – which should belong to the TOP segment. So the exciting question is – what object does block (13, 1397) think it belongs to ?


alter system dump datafile 13 block 1397;

Block header dump:  0x03400575
 Object id on Block? Y
 seg/obj: 0x12dff  csc:  0x00000b860e308c46  itc: 2  flg: E  typ: 1 - DATA
     brn: 0  bdba: 0x3400501 ver: 0x01 opc: 0
     inc: 0  exflg: 0

Converting from hex to decimal: obj: 0x12dff = 77311, which is the overflow segment. The secondary index block guess is pointing at a block in the overflow segment.
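As an aside, you don’t have to do the rdba arithmetic by hand – dbms_utility can decode a data block address for you. A quick sketch using the block guess from the index entry above (0x03400575), which should report file 13 and block 1397:

select
        dbms_utility.data_block_address_file(to_number('03400575','xxxxxxxx'))  file#,
        dbms_utility.data_block_address_block(to_number('03400575','xxxxxxxx')) block#
from
        dual
;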

There are two ways to handle this problem – you could simply rebuild the index (alter index rebuild) or, as the original poster did, use the “update block references” command to correct all the block guesses: “alter index randomload_idx update block references;”. Neither is desirable, but if you’re seeing a lot of large trace files following the pattern above then it may be necessary.
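For reference, a minimal repair-and-check sketch based on the second option might look like this (with a stats regather afterwards so that pct_direct_access reflects the corrected guesses):

alter index randomload_idx update block references;

exec dbms_stats.gather_index_stats(null,'RANDOMLOAD_IDX')

select
        index_name, pct_direct_access
from
        user_indexes
where
        table_name = 'RANDOMLOAD'
;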

There was one particular inconsistency in the tests – which I ran many times – occasionally the pct_direct_access for the secondary index would be reported as zero (which, technically, should always happen given the error).  If it did, of course, Oracle wouldn’t follow the guess but would go straight to the step where it used the primary key “logical rowid” – thus bypassing the error and block dump.

tl;dr

In some circumstances the block guesses in the secondary indexes of IOTs may be pointing to the overflow segment instead of the primary key (TOP) segment. If this happens then queries will still run and give the right answers, but whenever they read a “guessed” block from disc they will report an ORA-01410 error and dump a block trace. This will affect performance and may cause space problems at the O/S level.

Sidebar

An entry in the secondary index of an Index Organized Table (IOT) consists of three parts, which initially we can think of in the form:

({key-value}, {logical rowid}, {block guess})

Since IOTs don’t have real rowids the “logical rowid” is actually the primary key of the row where the {key value} will be found. As a short cut for efficient execution Oracle includes the block address (4 bytes) where that primary key value was stored when the row was inserted. Because an IOT is an index, “rows” in the IOT can move as new data is inserted and leaf blocks split, so eventually any primary key may move to a different block – this is why we refer to the block address as a guess: a few days, hours, or minutes after you’ve inserted the row the block address may no longer be correct.

To help the runtime engine do the right thing Oracle collects a statistic called pct_direct_access for secondary indexes of IOTs. This is a measure of what percentage of the block guesses are still correct at the time that the statistics are gathered. If this value is high enough the run-time engine will choose to try using the block guesses while executing a query (falling back to using the logical rowid if it turns out that the guess is invalid), but if the value drops too low the optimizer will ignore the block guesses and only use the logical rowid.

Not relevant to this note – but a final point about secondary indexes and logical rowids – if the definition of the index includes some of the columns from the primary key, Oracle won’t store those columns twice (in more recent versions, that is) – the code is clever enough to use the values stored in the {key value} component when it needs to use the {logical rowid} component.

 

OT Footnote

I’ve decided this year to donate to a charity that works to reduce child mortality rates in Nepal with a two-pronged attack on malnutrition: feeding starving children, then educating their parents on how to make the best use of local resources to grow the most appropriate crops and use the best preparation methods to produce nourishing meals in the future. (They also run other projects to improve the lives of the young people in Nepal – here’s a link to their home page, and a direct link to a 4 minute video that gives you a quick insight into what they do and how they do it.)

If you’re thinking of making any small donations to charity over the next few weeks, please think of this one. To make your donation more valuable I’ve set up a justgiving page and will match any donations made, up to a total of £1,000.

Thank you.

 

 

 

Oracle Recognized as a Leader in Gartner Magic Quadrant for Manufacturing Execution Systems for Oracle Manufacturing Cloud

Oracle Press Releases - Mon, 2019-12-16 07:00
Press Release
Oracle Recognized as a Leader in Gartner Magic Quadrant for Manufacturing Execution Systems for Oracle Manufacturing Cloud

Redwood Shores, Calif.—Dec 16, 2019

Oracle SCM Cloud provides end-to-end technology that takes customers beyond supply chain operations and into integrated business planning.

Oracle has been named a Leader in Gartner’s 2019 “Magic Quadrant for Manufacturing Execution Systems1” report. Out of 16 companies evaluated, Oracle is positioned as a Leader based on its completeness of vision and ability to execute for its Oracle Manufacturing Cloud–part of Oracle Supply Chain Management (SCM) Cloud. A complimentary copy of the report is available here.

“Manufacturing organizations are increasingly seeking solutions that extract additional value from manufacturing operations through increased efficiency and reduced costs,” said Andy Binsley, vice president, Manufacturing and ALM Strategy, Oracle. “They would also like to do that on the Cloud. To help our customers achieve these twin goals, we have integrated artificial intelligence, machine learning and Internet of Things capabilities within Oracle Supply Chain Management Cloud. We are pleased to be acknowledged as a Leader for manufacturing execution systems (MES) by Gartner, and see this recognition as a testament to our success in delivering business value to manufacturing organizations, and doing so on the Cloud.”

Gartner estimates that “By 2024, 50% of MES solutions will include industrial IoT (IIoT) platforms synchronized with microservices-based manufacturing operations management (MOM) apps, providing near-real-time transaction management, control, data collection and analytics.”

With Oracle SCM Cloud, Oracle provides a suite of supply chain cloud applications that enable businesses, including manufacturers, to manage their supply chains with the scale, security, innovation, and agility that today’s markets require. Oracle SCM Cloud provides end-to-end technology that takes customers beyond supply chain operations and into integrated business planning.

Oracle SCM Cloud has garnered consistent industry recognition. Oracle was recently named a Leader in both Gartner’s “Magic Quadrant for Warehouse Management Systems2,” and Gartner’s “Magic Quadrant for Transportation Management Systems3.”

1 Gartner, Magic Quadrant for Manufacturing Execution Systems, Rick Franzosa, 29 October 2019
2 Gartner, Magic Quadrant for Warehouse Management Systems, C. Klappich, Simon Tunstall, 8 May 2019
3 Gartner, Magic Quadrant for Transportation Management Systems, Bart De Muynck, Brock Johns, Oscar Sanchez Duran, 27 March 2019

Gartner Disclaimer
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Additional Information
For additional information on Oracle Supply Chain Management (SCM) Cloud, visit Facebook, Twitter, or the Oracle SCM blog.

Contact Info
Drew Smith
Oracle
+1.415.336.1103
drew.j.smith@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Clob vs Binary XML storage

Tom Kyte - Sun, 2019-12-15 17:54
Hello Team, While doing a POC for storing XML in CLOB storage and Binary XML storage, I could see that storing XML as Binary XML takes less tablespace as compared to CLOB. As far as I know both store XML in LOB storage, so why is there a difference betwee...
Categories: DBA Blogs

ora-24247 when making an https call

Tom Kyte - Sun, 2019-12-15 17:54
Hi, I have a problem when making an https call inside a package. It doesn't appear to recognise the privileges granted to access the acl. When I call utl_http.begin_request in an anonymous plsql block or in a procedure with authid defined as cu...
Categories: DBA Blogs

HA and Failover in Oracle RAC

Tom Kyte - Sun, 2019-12-15 17:54
Hello, Ask Tom Team. I have so many questions about Oracle RAC HA and Failover. I was reading the info in the link below and it helped me a lot. But I still have some questions. https://asktom.oracle.com/pls/apex/asktom.search?tag=failover-in-rac...
Categories: DBA Blogs

Complex Query

Tom Kyte - Sun, 2019-12-15 17:54
I have a large number of orders (200) involving around 2000 different products and need to group them in batches of 6 orders. The task is to identify the best possible groups of orders so performance (human performance) can be maximized. As a start...
Categories: DBA Blogs

Can we call a procedure in select statement with any restriction?

Tom Kyte - Sun, 2019-12-15 17:54
hi tom plz tell me with a simple example explanation: Can we restrict the function invoked in a select statement? Can we call a procedure in a select statement with any restriction?
Categories: DBA Blogs

Documentum – NoSuchMethodError on setResourceBundle with D2-REST 16.4 on WebLogic

Yann Neuhaus - Sat, 2019-12-14 02:00

In the scope of an upgrade project, with some colleagues, we have been deploying some D2-REST applications on Kubernetes pods using WebLogic Server. As explained in a previous blog, we first tried to upgrade our D2-REST 4.x into 16.4 but faced a small error. I don’t know if you have already used/deployed D2-REST, but it seems to me that the deployment is always kind of chaotic. Sometimes you will need to apply some steps, then for the next version they aren’t needed anymore, but later they will be needed again, and so on… So in the end, we always try to deploy the OOTB war with some small improvements and, whenever we face an error, we try to fix it for this specific version and this version only. Never assume that a fix for a version is good for all versions.

Below, I will be using some scripts and properties files that are present in this “dbi_resources” folder: it contains some utilities that we use for automation and to simplify our lives. So we tried to deploy the D2-REST 16.4 on our WebLogic Servers 12.2.1.x:

[weblogic@wsd2rest-0 ~]$ cd $APPLICATIONS
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ wlst_cmd="$ORACLE_HOME/oracle_common/common/bin/wlst.sh"
[weblogic@wsd2rest-0 dbi]$ wlst_script="${dbi_resources}/manageApplication.wls"
[weblogic@wsd2rest-0 dbi]$ domain_prop="${dbi_resources}/domain.properties"
[weblogic@wsd2rest-0 dbi]$ deploy_prop="${dbi_resources}/D2-REST.deploy"
[weblogic@wsd2rest-0 dbi]$ undeploy_prop="${dbi_resources}/D2-REST.undeploy"
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ ${wlst_cmd} ${wlst_script} ${domain_prop} ${deploy_prop}

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

>>> Loaded the properties file: ${dbi_resources}/domain.properties
>>> Loaded the properties file: ${dbi_resources}/D2-REST.deploy
>>> Connected to the AdminServer.
>>> Edit Session started.

<Dec 11, 2019 4:49:19 PM UTC> <Info> <J2EE Deployment SPI> <BEA-260121> <Initiating deploy operation for application, D2-REST [archive: $APPLICATIONS/D2-REST.war], to msD2-REST-02 msD2-REST-01 .>
ERROR... check error messages for cause.
Error occurred while performing activate : Error while Activating changes. : java.lang.NoSuchMethodError: org.apache.log4j.Logger.setResourceBundle(Ljava/util/ResourceBundle;)V
Use dumpStack() to view the full stacktrace :
Problem invoking WLST - Traceback (innermost last):
  File "${dbi_resources}/manageApplication.wls", line 77, in ?
  File "<iostream>", line 569, in stopEdit
  File "<iostream>", line 553, in raiseWLSTException
WLSTException: Error occurred while performing stopEdit : Cannot call stopEdit without an edit session in progress

[weblogic@wsd2rest-0 dbi]$

 

At this point, the application has been deployed but it cannot be started properly. It will therefore be stuck in “New” status on the WebLogic side. On the D2-REST log file, the error message looks like this:

2019-12-11 16:49:57,112 UTC [ERROR] ([ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)') - o.springframework.web.context.ContextLoader   : Context initialization failed
java.lang.NoSuchMethodError: org.apache.log4j.Logger.setResourceBundle(Ljava/util/ResourceBundle;)V
        at com.documentum.fc.common.DfLogger.<clinit>(DfLogger.java:622) ~[dfc-16.4.jar:na]
        at com.documentum.fc.common.impl.logging.LoggingConfigurator.onPreferencesInitialized(LoggingConfigurator.java:178) ~[dfc-16.4.jar:na]
        at com.documentum.fc.common.DfPreferences.initialize(DfPreferences.java:71) ~[dfc-16.4.jar:na]
        at com.documentum.fc.common.DfPreferences.getInstance(DfPreferences.java:43) ~[dfc-16.4.jar:na]
        at com.documentum.fc.client.DfSimpleDbor.getDefaultDbor(DfSimpleDbor.java:78) ~[dfc-16.4.jar:na]
        at com.documentum.fc.client.DfSimpleDbor.<init>(DfSimpleDbor.java:66) ~[dfc-16.4.jar:na]
        at com.documentum.fc.client.DfClient$ClientImpl.<init>(DfClient.java:350) ~[dfc-16.4.jar:na]
        at com.documentum.fc.client.DfClient.<clinit>(DfClient.java:766) ~[dfc-16.4.jar:na]
        at com.emc.documentum.rest.context.WebAppContextInitializer.getDfcVersion(WebAppContextInitializer.java:104) ~[_wl_cls_gen.jar:na]
        at com.emc.documentum.rest.context.WebAppContextInitializer.collectInfo(WebAppContextInitializer.java:81) ~[_wl_cls_gen.jar:na]
        at com.emc.documentum.rest.context.WebAppContextInitializer.preloadAppEnvironment(WebAppContextInitializer.java:67) ~[_wl_cls_gen.jar:na]
        at com.emc.documentum.rest.context.WebAppContextInitializer.initialize(WebAppContextInitializer.java:38) ~[_wl_cls_gen.jar:na]
        at com.emc.documentum.rest.context.WebAppContextInitializer.initialize(WebAppContextInitializer.java:31) ~[_wl_cls_gen.jar:na]
        at org.springframework.web.context.ContextLoader.customizeContext(ContextLoader.java:482) ~[spring-web-4.3.10.RELEASE.jar:4.3.10.RELEASE]
        at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:442) ~[spring-web-4.3.10.RELEASE.jar:4.3.10.RELEASE]
        at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:325) ~[spring-web-4.3.10.RELEASE.jar:4.3.10.RELEASE]
        at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107) [spring-web-4.3.10.RELEASE.jar:4.3.10.RELEASE]
        at weblogic.servlet.internal.EventsManager$FireContextListenerAction.run(EventsManager.java:705) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:328) [com.oracle.weblogic.security.subject.jar:12.2.1.3]
        at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197) [com.oracle.weblogic.security.subject.jar:12.2.1.3]
        at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.EventsManager.executeContextListener(EventsManager.java:251) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.EventsManager.notifyContextCreatedEvent(EventsManager.java:204) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.EventsManager.notifyContextCreatedEvent(EventsManager.java:192) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.WebAppServletContext.preloadResources(WebAppServletContext.java:1921) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.WebAppServletContext.start(WebAppServletContext.java:3106) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.WebAppModule.startContexts(WebAppModule.java:1843) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.WebAppModule.start(WebAppModule.java:884) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.application.internal.ExtensibleModuleWrapper$StartStateChange.next(ExtensibleModuleWrapper.java:360) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.ExtensibleModuleWrapper$StartStateChange.next(ExtensibleModuleWrapper.java:356) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:45) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.ExtensibleModuleWrapper.start(ExtensibleModuleWrapper.java:138) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.flow.ModuleListenerInvoker.start(ModuleListenerInvoker.java:124) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.flow.ModuleStateDriver$3.next(ModuleStateDriver.java:233) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.flow.ModuleStateDriver$3.next(ModuleStateDriver.java:228) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:45) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.flow.ModuleStateDriver.start(ModuleStateDriver.java:78) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.flow.StartModulesFlow.activate(StartModulesFlow.java:52) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.BaseDeployment$2.next(BaseDeployment.java:752) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:45) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.BaseDeployment.activate(BaseDeployment.java:262) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.SingleModuleDeployment.activate(SingleModuleDeployment.java:52) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.DeploymentStateChecker.activate(DeploymentStateChecker.java:165) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.AppContainerInvoker.activate(AppContainerInvoker.java:90) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.operations.AbstractOperation.activate(AbstractOperation.java:631) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.operations.ActivateOperation.activateDeployment(ActivateOperation.java:171) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doCommit(ActivateOperation.java:121) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.operations.AbstractOperation.commit(AbstractOperation.java:348) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentCommit(DeploymentManager.java:907) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.DeploymentManager.activateDeploymentList(DeploymentManager.java:1468) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.DeploymentManager.handleCommit(DeploymentManager.java:459) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.commit(DeploymentServiceDispatcher.java:181) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doCommitCallback(DeploymentReceiverCallbackDeliverer.java:217) [com.oracle.weblogic.deploy.service.jar:12.2.1.3]
        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$100(DeploymentReceiverCallbackDeliverer.java:14) [com.oracle.weblogic.deploy.service.jar:12.2.1.3]
        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$2.run(DeploymentReceiverCallbackDeliverer.java:69) [com.oracle.weblogic.deploy.service.jar:12.2.1.3]
        at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:678) [com.bea.core.weblogic.workmanager.jar:12.2.1.3]
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352) [com.bea.core.utils.full.jar:12.2.1.3]
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337) [com.bea.core.utils.full.jar:12.2.1.3]
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57) [com.oracle.weblogic.work.jar:12.2.1.3]
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41) [com.bea.core.weblogic.workmanager.jar:12.2.1.3]
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:652) [com.bea.core.weblogic.workmanager.jar:12.2.1.3]
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:420) [com.bea.core.weblogic.workmanager.jar:12.2.1.3]
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:360) [com.bea.core.weblogic.workmanager.jar:12.2.1.3]

 

The solution is quite simple: it’s just a conflict between the log4j jars that come with the OOTB war file provided by OpenText. You just need to undeploy the application, remove the conflicting jar and redeploy it afterwards. If you are facing the error above, it is linked to the “log4j-over-slf4j” jar file and you can solve it like this:

[weblogic@wsd2rest-0 dbi]$ ${wlst_cmd} ${wlst_script} ${domain_prop} ${undeploy_prop}

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

>>> Loaded the properties file: ${dbi_resources}/domain.properties
>>> Loaded the properties file: ${dbi_resources}/D2-REST.undeploy
>>> Connected to the AdminServer.
>>> Edit Session started.

<Dec 11, 2019 4:54:46 PM UTC> <Info> <J2EE Deployment SPI> <BEA-260121> <Initiating undeploy operation for application, D2-REST [archive: null], to msD2-REST-01 msD2-REST-02 .>

Current Status of your Deployment:
Deployment command type: undeploy
Deployment State : completed
Deployment Message : no message
None

>>> Execution completed.
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ jar -tvf D2-REST.war | grep "WEB-INF/lib/log4j"
481535 Tue Jan 05 05:02:00 UTC 2016 WEB-INF/lib/log4j-1.2.16.jar
 12359 Mon Dec 12 03:29:02 UTC 2016 WEB-INF/lib/log4j-over-slf4j-1.6.1.jar
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ zip -d D2-REST.war WEB-INF/lib/log4j-over-slf4j*
deleting: WEB-INF/lib/log4j-over-slf4j-1.6.1.jar
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ ${wlst_cmd} ${wlst_script} ${domain_prop} ${deploy_prop}

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

>>> Loaded the properties file: ${dbi_resources}/domain.properties
>>> Loaded the properties file: ${dbi_resources}/D2-REST.deploy
>>> Connected to the AdminServer.
>>> Edit Session started.

<Dec 11, 2019 4:56:05 PM UTC> <Info> <J2EE Deployment SPI> <BEA-260121> <Initiating deploy operation for application, D2-REST [archive: $APPLICATIONS/D2-REST.war], to msD2-REST-02 msD2-REST-01 .>

Current Status of your Deployment:
Deployment command type: deploy
Deployment State : completed
Deployment Message : no message
None

>>> Execution completed.
[weblogic@wsd2rest-0 dbi]$

 

As you can see above, the D2-REST 16.4 is now successfully deployed. You can access it and work with it without any issues.

[weblogic@wsd2rest-0 dbi]$ curl -s -k https://lb_url/D2-REST/product-info | python -mjson.tool
{
    "links": [
        {
            "href": "https://lb_url/D2-REST/product-info",
            "rel": "self"
        }
    ],
    "name": "documentum-rest-services-product-info",
    "properties": {
        "build_number": "0511",
        "major": "16.4",
        "minor": "0000",
        "product": "Documentum D2 REST Services",
        "product_version": "16.4.0000.0511",
        "revision_number": "NA"
    }
}
[weblogic@wsd2rest-0 dbi]$

 

The article Documentum – NoSuchMethodError on setResourceBundle with D2-REST 16.4 on WebLogic appeared first on Blog dbi services.

Documentum – Cast trouble with D2-REST 16.5.x on WebLogic

Yann Neuhaus - Sat, 2019-12-14 02:00

In the scope of an upgrade project, with some colleagues, we have been deploying some D2-REST applications on Kubernetes pods using WebLogic Server. At the beginning, we started using D2-REST 16.4 and that was working properly (once the issue described here is fixed, along with some others linked to FIPS 140-2, and so on). After that, we tried to switch to higher versions (16.5.0 Pxx, 16.5.1 P00 or P04) but it stopped working with an error. We were able to replicate the issue with WebLogic Server 12.2.1.3 and 12.2.1.4, so it’s not just specific to one small use case but seems more global to the D2-REST 16.5.x versions on WebLogic. It might impact other application servers as well, but that would need some testing.

Upon accessing the D2-REST URL (E.g.: https://lb_url/D2-REST), the service seemed to be working, but when going further, to the product information page for example (E.g.: https://lb_url/D2-REST/product-info), the following error was always displayed:

<error xmlns="http://identifiers.emc.com/vocab/documentum" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <status>500</status>
  <code>E_INTERNAL_SERVER_ERROR</code>
  <message>An internal server error occurs.</message>
  <details>
    org.owasp.esapi.reference.DefaultSecurityConfiguration cannot be cast to com.emc.d2.web.security.D2SecurityConfiguration
  </details>
  <id>51872a76-g47f-4d6e-9d47-e9fa5d8c1291</id>
</error>

 

The error generated in the D2-REST logs at that time was:

java.lang.ClassCastException: org.owasp.esapi.reference.DefaultSecurityConfiguration cannot be cast to com.emc.d2.web.security.D2SecurityConfiguration
	at com.emc.d2.web.security.D2HttpUtilities.getHeader(D2HttpUtilities.java:40)
	at com.emc.documentum.d2.rest.filter.AppInfoFilter.getRemoteAddr(AppInfoFilter.java:82)
	at com.emc.documentum.d2.rest.filter.AppInfoFilter.doFilter(AppInfoFilter.java:36)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at com.emc.documentum.rest.security.filter.RepositoryNamingFilter.doFilter(RepositoryNamingFilter.java:40)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at com.emc.documentum.rest.filter.RestCorsFilter.doFilterInternal(RestCorsFilter.java:47)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:197)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at com.emc.documentum.rest.filter.CompressionFilter.doFilter(CompressionFilter.java:73)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at com.emc.documentum.rest.log.MessageLoggingFilter.doFilter(MessageLoggingFilter.java:69)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at com.emc.documentum.rest.security.filter.ExceptionHandlerFilter.doFilterInternal(ExceptionHandlerFilter.java:31)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3797)
	at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3763)
	at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:344)
	at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
	at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
	at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
	at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2451)
	at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2299)
	at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2277)
	at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1720)
	at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1680)

 

Since we couldn't find anything obvious, we opened an OpenText Support case (#4322241). There is a KB (KB14050670) around Internal Server Errors, but it didn't help us in this case. After some research on the OpenText side, it appears that this is a known issue and there is a solution for it, but it is not documented at the moment: hence this blog. The solution is going to be included in the next version of the D2FS REST Services Development Guide, so if you search the OpenText Support Site, you won't find anything related to this error yet. Don't ask me why it will be in the Development Guide; maybe they didn't find another suitable location.

The solution is very simple: you just have to add a small piece to the D2-REST web.xml file:

[weblogic@wsd2rest-0 ~]$ cd $APPLICATIONS
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ jar -xvf D2-REST.war WEB-INF/web.xml
 inflated: WEB-INF/web.xml
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ cat WEB-INF/web.xml
<?xml version="1.0" encoding="UTF-8"?>

<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
         http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0"
         metadata-complete="true">

  <display-name>D2-REST</display-name>
  <description>D2-REST</description>
  <error-page>
    <error-code>404</error-code>
    <location>/errors/redirect/404</location>
  </error-page>
  <error-page>
    <error-code>500</error-code>
    <location>/errors/redirect/500</location>
  </error-page>
</web-app>
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ sed -i 's,</web-app>,  <listener>\n&,' WEB-INF/web.xml
[weblogic@wsd2rest-0 dbi]$ sed -i 's,</web-app>,    <listener-class>com.emc.d2.rest.context.WebAppContextListener</listener-class>\n&,' WEB-INF/web.xml
[weblogic@wsd2rest-0 dbi]$ sed -i 's,</web-app>,  </listener>\n&,' WEB-INF/web.xml
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ cat WEB-INF/web.xml
<?xml version="1.0" encoding="UTF-8"?>

<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
         http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0"
         metadata-complete="true">

  <display-name>D2-REST</display-name>
  <description>D2-REST</description>
  <error-page>
    <error-code>404</error-code>
    <location>/errors/redirect/404</location>
  </error-page>
  <error-page>
    <error-code>500</error-code>
    <location>/errors/redirect/500</location>
  </error-page>
  <listener>
    <listener-class>com.emc.d2.rest.context.WebAppContextListener</listener-class>
  </listener>
</web-app>
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ jar -uvf D2-REST.war WEB-INF/web.xml
adding: WEB-INF/web.xml(in = 753) (out= 326)(deflated 56%)
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ rm -rf WEB-INF/
[weblogic@wsd2rest-0 dbi]$

 

As you can see above, it's all about adding a new listener for the "WebAppContextListener" into the web.xml file. Based on its name, this class has absolutely nothing to do with the error shown above, and yet adding this listener solves the cast issue. Redeploy/update your application in WebLogic and that's it: the issue should be gone.
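If you prefer updating the deployment from the command line rather than through the Admin Console, a minimal sketch using weblogic.Deployer could look like the following (the admin URL and user name are assumptions for illustration; the password is left to the tool's interactive prompt):

# set up the WebLogic environment, then push the updated WAR
. $WL_HOME/server/bin/setWLSEnv.sh
java weblogic.Deployer -adminurl t3://localhost:7001 -username weblogic \
     -update -name D2-REST -source $APPLICATIONS/D2-REST.war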

 

The article Documentum – Cast trouble with D2-REST 16.5.x on WebLogic appeared first on Blog dbi services.

Database Link to 9.2 Database from 19c

Bobby Durrett's DBA Blog - Fri, 2019-12-13 15:12

I have mentioned in previous posts that I am working on migrating a large 11.2 database on HP Unix to 19c on Linux. I ran across a database link to an older 9.2 database in the current 11.2 database. That link does not work in 19c so I thought I would blog about my attempts to get it to run in 19c. It may not be that useful to other people because it is a special case, but I want to remember it for myself if nothing else.

First, I'll create a test table in my own schema on a 9.2 development database:

SQL> create table test as select * from v$version;

Table created.

SQL> 
SQL> select * from test;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
PL/SQL Release 9.2.0.5.0 - Production
CORE	9.2.0.6.0	Production
TNS for HPUX: Version 9.2.0.5.0 - Production
NLSRTL Version 9.2.0.5.0 - Production

Next, I will create a link to this 9.2 database from a 19c database. I will hide the part of the link creation that has my password and the database details, but they are not needed.

SQL> create database link link_to_92
... removed for security reasons ...

Database link created.

SQL> 
SQL> select * from test@link_to_92;
select * from test@link_to_92
                   *
ERROR at line 1:
ORA-03134: Connections to this server version are no longer supported.

So I looked up ways to get around the ORA-03134 error. I can't remember all the things I checked, but I have a note that I looked at this one link: Resolving 3134 errors. The idea was to create a new database link from an 11.2 database to the 9.2 database, then create a synonym on the 11.2 database for the table I want on the 9.2 system. Here is what that looks like on my test databases:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
... removed for brevity ...

SQL> create database link link_from_112
... removed for security ...

Database link created.

SQL> create synonym test for test@link_from_112;

Synonym created.

SQL> 
SQL> select * from test;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production

Now that I have the link and synonym on the 11.2 middleman database, I go back to the 19c database and create a link to the 11.2 database and query the synonym to see the original table:

SQL> select * from v$version;

BANNER                                                                           ...
-------------------------------------------------------------------------------- ...
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production           ...
...										    

SQL> create database link link_to_112
...

Database link created.
...
SQL> select * from v$version@link_to_112;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
...

SQL> select * from test@link_to_112;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production

So far so good. I am not sure how clear I have been, but the point is that I could not query the table test on the 9.2 database from a 19c database without getting an error. By jumping through an 11.2 database I can now query it. But, alas, that was not the end of my problems with this remote 9.2 database table.

When I first started looking at these remote 9.2 tables in my real system, I wanted to get an execution plan of a query that used them. The link through an 11.2 database trick let me query the tables but not get a plan of the query.

SQL> truncate table plan_table;

Table truncated.

SQL> 
SQL> explain plan into plan_table for
  2  select * from test@link_to_112
  3  /

Explained.

SQL> 
SQL> set markup html preformat on
SQL> 
SQL> select * from table(dbms_xplan.display('PLAN_TABLE',NULL,'ADVANCED'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------
Error: cannot fetch last explain plan from PLAN_TABLE

SQL> 
SQL> select object_name from plan_table;

OBJECT_NAME
------------------------------------------------------------------------------

TEST

Kind of funky but not the end of the world. Only a small number of queries use these remote 9.2 tables, so I should be able to live without explain plan. Next, I needed to use the remote table in a PL/SQL package. For simplicity, I will show it used in a proc:

SQL> CREATE OR REPLACE PROCEDURE BOBBYTEST
  2  AS
  3  ver_count number;
  4  
  5  BEGIN
  6    SELECT
  7    count(*) into ver_count
  8    FROM test@link_to_112;
  9  
 10  END BOBBYTEST ;
 11  /

Warning: Procedure created with compilation errors.

SQL> SHOW ERRORS;
Errors for PROCEDURE BOBBYTEST:

LINE/COL ERROR
-------- -----------------------------------------------------------------
6/3      PL/SQL: SQL Statement ignored
6/3      PL/SQL: ORA-00980: synonym translation is no longer valid

I tried creating a synonym for the remote table but got the same error:

SQL> create synonym test92 for test@link_to_112;

...

SQL> CREATE OR REPLACE PROCEDURE BOBBYTEST
  2  AS
  3  ver_count number;
  4  
  5  BEGIN
  6    SELECT
  7    count(*) into ver_count
  8    FROM test92;
  9  
 10  END BOBBYTEST ;
 11  /

Warning: Procedure created with compilation errors.

SQL> SHOW ERRORS;
Errors for PROCEDURE BOBBYTEST:

LINE/COL ERROR
-------- -----------------------------------------------------------------
6/3      PL/SQL: SQL Statement ignored
6/3      PL/SQL: ORA-00980: synonym translation is no longer valid

Finally, by chance, I found that I could use a view on top of the remote synonym and the proc would compile:

SQL> create view test92 as select * from test@link_to_112;

View created.

...

SQL> CREATE OR REPLACE PROCEDURE BOBBYTEST
  2  AS
  3  ver_count number;
  4  
  5  BEGIN
  6    SELECT
  7    count(*) into ver_count
  8    FROM test92;
  9  
 10  END BOBBYTEST ;
 11  /

Procedure created.

SQL> SHOW ERRORS;
No errors.
SQL> 
SQL> execute bobbytest;

PL/SQL procedure successfully completed.

SQL> show errors
No errors.

Now one last thing to check. Will the plan work with the view?

SQL> explain plan into plan_table for
  2  select * from test92
  3  /

Explained.

SQL> select * from table(dbms_xplan.display('PLAN_TABLE',NULL,'ADVANCED'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------
Error: cannot fetch last explain plan from PLAN_TABLE

Sadly, the view was not a cure-all. So, here is a summary of what to do if you have a procedure on a 19c database that needs to access a table on a 9.2 database (a consolidated sketch follows the list):

  • Create a link on a 11.2 database to the 9.2 database
  • Create a synonym on the 11.2 database pointing to the table on the 9.2 database
  • Create a link on the 19c database to the 11.2 database
  • Create a view on the 19c database that queries the synonym on the 11.2 database
  • Use the view in your procedure on your 19c database
  • Explain plans may not work with SQL that uses the view
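
Putting the steps together, here is a minimal sketch of the whole chain; the credentials and TNS aliases are placeholders, since the real ones were redacted above:

-- On the 11.2 "middleman" database:
create database link link_from_112
  connect to some_user identified by some_password using 'TNS_ALIAS_92';
create synonym test for test@link_from_112;

-- On the 19c database:
create database link link_to_112
  connect to some_user identified by some_password using 'TNS_ALIAS_112';
create view test92 as select * from test@link_to_112;

-- The view can now be queried and used in PL/SQL on 19c:
select count(*) from test92;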

Bobby

Categories: DBA Blogs

Updating the trail file location for Oracle GoldenGate Microservices

DBASolved - Fri, 2019-12-13 09:21

When you first install Oracle GoldenGate Microservices, you may have taken the standard installation approach, and all the configuration, logging and trail file information will reside in a standard directory structure. This makes the architecture of your environment really easy. Let’s say you want to identify what trail files are being used by the […]

The post Updating the trail file location for Oracle GoldenGate Microservices appeared first on DBASolved.

Categories: DBA Blogs

Q2 FY20 GAAP EPS UP 14% TO $0.69 and NON-GAAP EPS UP 12% TO $0.90

Oracle Press Releases - Thu, 2019-12-12 15:00
Press Release
Q2 FY20 GAAP EPS UP 14% TO $0.69 and NON-GAAP EPS UP 12% TO $0.90 Fusion ERP Cloud Revenue Up 37%; Autonomous Database Cloud Revenue Up >100%

Redwood Shores, Calif.—Dec 12, 2019

Oracle Corporation (NYSE: ORCL) today announced fiscal 2020 Q2 results. Total Revenues were $9.6 billion, up 1% in USD and in constant currency compared to Q2 last year. Cloud Services and License Support revenues were $6.8 billion, while Cloud License and On-Premise License revenues were $1.1 billion.

GAAP Operating Income was up 3% to $3.2 billion, and GAAP Operating Margin was 33%. Non-GAAP Operating Income was $4.0 billion, and non-GAAP Operating Margin was 42%. GAAP Net Income was $2.3 billion, and non-GAAP Net Income was $3.0 billion. GAAP Earnings Per Share was up 14% to $0.69, while non-GAAP Earnings Per Share was up 12% to $0.90.

Short-term deferred revenues were $8.1 billion. Operating Cash Flow was $13.8 billion during the trailing twelve months.

“We had another strong quarter in our Fusion and NetSuite cloud applications businesses with Fusion ERP revenues growing 37% and NetSuite ERP revenues growing 29%,” said Oracle CEO, Safra Catz. “This consistent rapid growth in the now multibillion dollar ERP segment of our cloud applications business has enabled Oracle to deliver a double-digit EPS growth rate year-after-year. I fully expect we will do that again this year.”

“It’s still early days, but the Oracle Autonomous Database already has thousands of customers running in our Gen2 Public Cloud,” said Oracle CTO, Larry Ellison. “Currently, our Autonomous Database running in our Public Cloud business is growing at a rate of over 100%. We expect that growth rate to increase dramatically as we release our Autonomous Database running on our Gen2 Cloud@Customer into our huge on-premise installed base over the next several months.”

The Board of Directors also declared a quarterly cash dividend of $0.24 per share of outstanding common stock. This dividend will be paid to stockholders of record as of the close of business on January 9, 2020, with a payment date of January 23, 2020.

Q2 Fiscal 2020 Earnings Conference Call and Webcast

Oracle will hold a conference call and webcast today to discuss these results at 2:00 p.m. Pacific. You may listen to the call by dialing (816) 287-5563, Passcode: 425392. To access the live webcast, please visit the Oracle Investor Relations website at http://www.oracle.com/investor. In addition, Oracle’s Q2 results and fiscal 2020 financial tables are available on the Oracle Investor Relations website.

A replay of the conference call will also be available by dialing (855) 859-2056 or (404) 537-3406, Passcode: 4597628.

Contact Info
Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly-Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE:ORCL), visit us at www.oracle.com or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

“Safe Harbor” Statement

Statements in this press release relating to Oracle's future plans, expectations, beliefs, intentions and prospects, including statements regarding the growth of our earnings per share and our Autonomous Database business, are "forward-looking statements" and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. We presently consider the following to be among the important factors that could cause actual results to differ materially from expectations: (1) Our success depends upon our ability to develop new products and services, integrate acquired products and services and enhance our existing products and services. (2) Our cloud strategy, including our Oracle Software-as-a-Service and Infrastructure-as-a-Service offerings, may adversely affect our revenues and profitability. (3) We might experience significant coding, manufacturing or configuration errors in our cloud, license and hardware offerings. (4) If the security measures for our products and services are compromised and as a result, our customers' data or our IT systems are accessed improperly, made unavailable, or improperly modified, our products and services may be perceived as vulnerable, our brand and reputation could be damaged, the IT services we provide to our customers could be disrupted, and customers may stop using our products and services, all of which could reduce our revenue and earnings, increase our expenses and expose us to legal claims and regulatory actions. (5) Our business practices with respect to data could give rise to operational interruption, liabilities or reputational harm as a result of governmental regulation, legal requirements or industry standards relating to consumer privacy and data protection. (6) Economic, political and market conditions can adversely affect our business, results of operations and financial condition, including our revenue growth and profitability, which in turn could adversely affect our stock price. (7) Our international sales and operations subject us to additional risks that can adversely affect our operating results. (8) Acquisitions present many risks and we may not achieve the financial and strategic goals that were contemplated at the time of a transaction. A detailed discussion of these factors and other risks that affect our business is contained in our SEC filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading "Risk Factors." Copies of these filings are available online from the SEC or by contacting Oracle Corporation's Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of December 12, 2019. Oracle undertakes no duty to update any statement in light of new information or future events. 

Talk to a Press Contact

Ken Bond

  • +1.650.607.0349

Deborah Hellinger

  • +1.212.508.7935

Documentum – MigrationUtil – 5 – Change Installation Owner

Yann Neuhaus - Thu, 2019-12-12 12:59

The Documentum installation owner is the operating system user that owns the server executable and other related files, along with the OS process when the server is running. It is originally determined when the server is installed; in fact, it is the logged-in user that performed the Documentum installation. Of course, it is preferable to install Documentum and never change the installation owner. However, sometimes company policies change and dictate that the original installation owner must be changed, for example because the user name does not conform to a new naming policy.

This blog is the last one of the MigrationUtil series (please find links to the other blogs below). I will show how to change the Installation Owner using the MigrationUtil. If you want to change only the password, please read this blog.

The installation owner user is important at both the operating system level and the Docbase/Content Server level; it is given the following privileges:

  • Operating System:
    – Rights to start Documentum Services such as Docbase, Docbroker, Java Method Server and other installed Documentum products.
    – Permission to change the Content Server configuration (i.e. upgrade, create, and delete docbases).
    – Folder level permission to view data, configuration, and many log files located under the DOCUMENTUM_HOME directory.
  • Docbase and Content Server:
    – Superuser and System Administrator rights.
    – Set as the r_install_owner value in the dm_server_config object.
    – Set as the operating system user to run several Administrative jobs.
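
As a quick check of the second point above, the current value can be queried directly via DQL (through idql or Documentum Administrator):

select object_name, r_install_owner from dm_server_config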

As you can deduce, changing the installation owner is not a minor change within Documentum; it is very critical. That's why you have to prepare this operation very well and determine the best approach to execute it.

Two levels of change have to be done:

  • OS level change:
    – Create the new install owner at the operating system level; it should correspond to the user_os_name of the new docbase user.
  • Docbase level change:
    – Create a new user in the docbase to be the installation owner and reassign the previous installation owner's objects to the new user. The MigrationUtil is able to do this part.
Preparation Before any change
  • Clean the environment:
    – Run the Consistency Checker job: the report gives you a list of bad data within your system. Cleaning up inconsistent data before making the change will speed up the process.
    – Purge all old log files: changing the installation owner requires updating permissions on Documentum data and log files. The purge reduces the work on unneeded data and will greatly speed up the process.
  • Back up:
    – Back up the whole impacted environment (Content Server files and the database) before performing any major change within Documentum.
Create new user – OS Level

Add the new installation owner at the OS level, in the same group as the current installation owner:

[root@vmtestdctm01 ~]# useradd -g 1004 dmdocbase1
[root@vmtestdctm01 ~]# passwd dmdocbase1
Changing password for user dmdocbase1.
New password: 
Retype new password: 
passwd: all authentication tokens updated successfully.
[root@vmtestdctm01 ~]# 

To find the group of the current installation owner:

[root@vmtestdctm01 ~]# cat /etc/passwd
...
dmadmin:x:1002:1004::/home/dmadmin:/bin/bash
Create new user – Docbase Level

You need to create the user in all docbases, using the DQL query below:

CREATE dm_user OBJECT
SET user_name = 'dmdocbase1',
SET user_password = 'install164',
SET user_login_name = 'dmdocbase1',
SET user_address = 'dmdocbase1@dbi-services.com',
SET description = 'This User is the owner of docbase1',
SET user_os_name = 'dmdocbase1',
SET client_capability = 8,
SET user_privileges = 16,
SET user_xprivileges = 56,
SET user_source = 'inline password'

Result:

object_created  
----------------
1101e24080000500
Configure the MigrationUtil

Adapt the MigrationUtil configuration file as shown below:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/config.xml 
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Database connection details</comment>
<entry key="dbms">oracle</entry> <!-- This would be either sqlserver, oracle, db2 or postgres -->
<entry key="tgt_database_server">vmtestdctm01</entry> <!-- Database Server host or IP -->
<entry key="port_number">1521</entry> <!-- Database port number -->
<entry key="InstallOwnerPassword">install164</entry>
<entry key="isRCS">no</entry>    <!-- set it to yes, when running the utility on secondary CS -->

<!-- <comment>List of docbases in the machine</comment> -->
<entry key="DocbaseName.1">docbase1</entry>

<!-- <comment>docbase owner password</comment> -->
<entry key="DocbasePassword.1">install164</entry>

...

<entry key="ChangeInstallOwner">yes</entry>
<entry key="InstallOwner">dmadmin</entry>
<entry key="NewInstallOwner">dmdocbase1</entry>
<entry key="NewInstallOwnerPassword">install164</entry>
...
Migration Stop Docbase(s) and Docbroker(s)

Before you execute the migration you have to stop the docbase(s) and the docbroker(s).

$DOCUMENTUM/dba/dm_shutdown_Docbase1
$DOCUMENTUM/dba/dm_stop_DocBroker
Execute the migration script

Once everything is stopped, you can execute the migration script:

[dmadmin@vmtestdctm01 ~]$ $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh

Welcome... Migration Utility invoked.
 
Skipping Docbase ID Changes...

Skipping Host Name Change...

Changing Install Owner...
Created new log File: /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/InstallOwnerChange.log
Finished changing Install Owner...Please check log file for more details/errors
Finished changing Install Owner...

Skipping Server Name Change...

Skipping Docbase Name Change...

Skipping Docker Seamless Upgrade scenario...

Migration Utility completed.

Have a look at the migration log:

[dmadmin@vmtestdctm01 ~]$ cat /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/InstallOwnerChange.log
Start: 2019-12-12 05:14:37.191
Changing Install Owner
=====================
InstallOwner: dmadmin
New InstallOwner: dmdocbase1
Changing InstallOwner for docbase: Docbase1
Retrieving server.ini path for docbase: Docbase1
Found path: /app/dctm/product/16.4/dba/config/Docbase1/server.ini

Database Details:
Database Vendor:oracle
Database Name:dctmdb.local
Databse User:Docbase1
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/dctmdb.local
Successfully connected to database....

Processing Database Changes for docbase: Docbase1
Created database backup File '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/InstallOwnerChange_Docbase1_DatabaseRestore.sql'
Processing _s table...
select r_object_id,object_name from dm_sysobject_s where object_name = 'dmadmin'
update dm_sysobject_s set object_name = 'dmdocbase1' where r_object_id = '0c01e24080000105'
select r_object_id,r_install_owner from dm_server_config_s where r_install_owner = 'dmadmin'
update dm_server_config_s set r_install_owner = 'dmdocbase1' where r_object_id = '3d01e24080000102'
select r_object_id,user_name from dm_user_s where user_name = 'dmadmin'
update dm_user_s set user_name = 'dmdocbase1' where r_object_id = '1101e24080000102'
select r_object_id,user_os_name from dm_user_s where user_os_name = 'dmadmin'
update dm_user_s set user_os_name = 'dmdocbase1' where r_object_id = '1101e24080000102'
...
update dm_workflow_r set r_last_performer = 'dmdocbase1' where r_last_performer = 'dmadmin'
update dm_workflow_s set r_creator_name = 'dmdocbase1' where r_creator_name = 'dmadmin'
update dm_workflow_s set supervisor_name = 'dmdocbase1' where supervisor_name = 'dmadmin'
Successfully updated database values...
Committing all database operations...
Finished processing database changes for docbase: Docbase1

Processing server.ini changes for docbase: Docbase1
Backed up '/app/dctm/product/16.4/dba/config/Docbase1/server.ini' to '/app/dctm/product/16.4/dba/config/Docbase1/server.ini_install_dmadmin.backup'
Updated server.ini file:/app/dctm/product/16.4/dba/config/Docbase1/server.ini
Updating acs.properties for docbase: Docbase1
Backed up '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties' to '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties_install_dmadmin.backup'
Updated acs.properties: /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties
Backed up '/app/dctm/product/16.4/dba/dm_shutdown_Docbase1' to '/app/dctm/product/16.4/dba/dm_shutdown_Docbase1_install_dmadmin.backup'
Updated shutdown script: /app/dctm/product/16.4/dba/dm_shutdown_Docbase1
...
Processing Services File Changes...
Backed up '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/DmMethods.war/WEB-INF/web.xml' to '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/DmMethods.war/WEB-INF/web.xml_install_dmadmin.backup'
Updated web.xml: /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/DmMethods.war/WEB-INF/web.xml
WARNING...File /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/bpm.ear/bpm.war/WEB-INF/web.xml doesn't exist
No need to update method server startup script: /app/dctm/product/16.4/wildfly9.0.1/server/startMethodServer.sh
Finished processing File changes...

Finished changing Install Owner...
End: 2019-12-12 05:14:39.815
Change permissions

Change the permissions of all folders and files under the DOCUMENTUM_HOME directory. If your content storage directories are not located under DOCUMENTUM_HOME, change the permissions on each content storage directory as well.

[root@vmtestdctm01 ~]$ chown -R dmdocbase1 /app/dctm
Start Docbase(s) and Docbroker(s)

Start the Docbroker and the docbase:

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_Docbase1
Post Migration

Check the docbase logs:

...
2019-12-12T05:30:09.982774      13301[13301]    0000000000000000        [DM_MQ_I_DAEMON_START]info:  "Message queue daemon (pid : 13570, session 0101e24080000456) is started sucessfully."
2019-12-12T05:30:20.255917      13569[13569]    0101e24080000003        [DM_DOCBROKER_I_PROJECTING]info:  "Sending information to Docbroker located on host (vmtestdctm01) with port (1490).  Information: (Config(Docbase1), Proximity(1), Status(Open), Dormancy Status(Active))."
Wed Dec 12 05:30:32 2019 [INFORMATION] [AGENTEXEC 13628] Detected during program initialization: Version: 16.4.0000.0248  Linux64
Wed Dec 12 05:30:35 2019 [INFORMATION] [AGENTEXEC 13628] Detected during program initialization: Agent Exec connected to server Docbase1:  [DM_SESSION_I_SESSION_START]info:  "Session 0101e24080000500 started for user dmdocbase1."

Try to connect to Docbase1 with the old installation owner through idql:

...
Connecting to Server using docbase Docbase1
Could not connect
[DM_SESSION_E_AUTH_FAIL]error:  "Authentication failed for user dmadmin with docbase Docbase1."

This is the expected behavior: the old installation owner is no longer active.
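
Conversely, a connection with the new installation owner should now succeed. A quick check through idql, using the inline password defined at user creation:

idql Docbase1 -Udmdocbase1 -Pinstall164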

The environment I used for this test is a very simple one (with only one Docbase, no HA, no FullText, and so on), created only for this purpose. It worked fine in my environment, but be careful if you have a more complex one!

The article Documentum – MigrationUtil – 5 – Change Installation Owner appeared first on Blog dbi services.

SQL Server – Collecting last backup information in an AlwaysOn environment

Yann Neuhaus - Thu, 2019-12-12 09:02
Introduction

Sometimes you face interesting challenges with unusual environments. One of my customers needed an automated and flexible backup solution. Said like that, it sounds like nothing very complex. But if I mention that some databases were 60TB big, with more than 30 filegroups and around 600 data files each, and moreover synchronized in an AlwaysOn availability group, it is not the same story, and you can easily imagine that a standard backup strategy would not be viable. Therefore I worked on implementing a solution using partial full, partial differential and read-only filegroup backups to minimize the time needed.
This post does not explain the whole solution, only a way to collect the last backup information of my databases, especially for the ones in an AlwaysOn availability group whose filegroup states changed.
If you have already worked with partial backups and read-only filegroup backups, you know that the backup sequence is very important; if you haven't, you will quickly notice it when you need to restore, and you can easily understand why this last backup information is crucial. As the backups always have to run on the primary replica, you have to collect the information on all replicas in case a failover occurred and the primary changed, to ensure that you execute the right backups at the right moment and do not make unnecessary backups (remember the data volumes). The backup types involved are sketched below.
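
For context, here is what such a backup sequence looks like in T-SQL; this is a minimal sketch where the database, filegroup and backup paths are made up:

-- Partial full backup: read-write filegroups only
BACKUP DATABASE BigDb READ_WRITE_FILEGROUPS
  TO DISK = N'X:\backup\BigDb_partial_full.bak';

-- Partial differential backup, based on the partial full above
BACKUP DATABASE BigDb READ_WRITE_FILEGROUPS
  TO DISK = N'X:\backup\BigDb_partial_diff.bak' WITH DIFFERENTIAL;

-- Read-only filegroup backup, to redo only when the filegroup state changes
BACKUP DATABASE BigDb FILEGROUP = N'FG_ARCHIVE_RO'
  TO DISK = N'X:\backup\BigDb_fg_archive_ro.bak';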

 

Explanation of the solution and code

Another thing to mention: because of security policies, it was forbidden to use linked servers, but fortunately xp_CmdShell was possible. I wanted each replica to work independently and needed a way to query the remote replicas to collect the last backup information on each SQL Server instance involved. Because the backup history might be cleaned, I need to store this information in local tables. I created 2 tables, one to store the last database backup information and the other to store the last read-only filegroup backup information. Additionally, I created 2 tables to temporarily collect the information coming from all replicas.

Creation of the last backup information tables:

--########################################################
--###Backup generator - backup last date info temporary table
--########################################################

USE [<YourDatabaseName>]
GO
/*
if OBJECT_ID('[dbo].[bakgen_backuplastdt_databases_temp]') is not null
	drop table [dbo].[bakgen_backuplastdt_databases_temp]
*/
create table [dbo].[bakgen_backuplastdt_databases_temp] (
	ServerName sysname not null,
	SqlInstanceName sysname  not null,
	SqlServerName sysname  not null,
	ServiceBrokerGuid uniqueidentifier not null,
	DatabaseCreationDate datetime  not null,
	DatabaseName sysname  not null,
	BackupType char(1) not null,
	LastBackupDate datetime  not null,
	LastBackupSize numeric(20,0) not null,
	is_primary bit null,
	insertdate datetime  not null
)
GO
create unique clustered index idx_bakgen_backuplastdt_databases_temp on [dbo].[bakgen_backuplastdt_databases_temp](DatabaseCreationDate,DatabaseName,BackupType,ServerName,SqlInstanceName)



--########################################################
--###Backup generator - backup last date info
--########################################################

USE [<YourDatabaseName>]
GO
/*
if OBJECT_ID('[dbo].[bakgen_backuplastdt_databases]') is not null
	drop table [dbo].[bakgen_backuplastdt_databases]
*/
create table [dbo].[bakgen_backuplastdt_databases] (
	ServerName sysname  not null,
	SqlInstanceName sysname  not null,
	SqlServerName sysname  not null,
	ServiceBrokerGuid uniqueidentifier not null,
	DatabaseCreationDate datetime  not null,
	DatabaseName sysname  not null,
	BackupType char(1) not null,
	LastBackupDate datetime  not null,
	LastBackupSize numeric(20,0) not null,
	is_primary bit null,
	insertdate datetime  not null
)
GO
create unique clustered index idx_bakgen_backuplastdt_databases on [dbo].[bakgen_backuplastdt_databases](DatabaseCreationDate,DatabaseName,BackupType,ServerName,SqlInstanceName)

I finally decided to work with a stored procedure calling PowerShell scripts to remotely execute the queries on the replicas.
The stored procedure lists the existing replicas and collects the last database backup information, then the read-only filegroup backup information, creating 2 different queries to execute locally on the server and storing the data in the temp tables first. It creates similar queries, excluding the databases not involved in availability groups, and executes them on the remote replicas using xp_CmdShell running PowerShell scripts. The PowerShell scripts are dynamically created from the generated T-SQL queries. They use a couple of functions of the well-known dbatools module, so you will have to install it first.
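If dbatools is not yet available on the servers, it can typically be installed from the PowerShell Gallery (assuming internet access; an offline installation is also possible):

Install-Module -Name dbatools -Scope AllUsers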
You will notice that, for logging purposes, the generated scripts are nicely formatted so that they are easier to read and debug. But before executing your PowerShell script through xp_CmdShell, you need to apply some string formatting, like the 2 lines I added to prevent the execution from failing:

set @PSCmd = replace(replace(@PSCmd, nchar(13), N''), nchar(10), N' ')
set @PSCmd = replace(@PSCmd, '>', N'^>')

Do not forget to escape some characters, otherwise the execution will fail; in my case, omitting to escape the '>' sign raised an "Access is denied" message in the output of the xp_CmdShell execution.
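
To illustrate why the caret is needed: xp_CmdShell hands the string over to cmd.exe, where a bare '>' is interpreted as output redirection. A small hypothetical test shows the difference (assuming C:\temp exists):

-- without escaping, cmd.exe redirects the output to a file instead of returning it
exec master..xp_cmdshell 'echo hello > C:\temp\out.txt'
-- with the caret escape, the '>' is kept as a literal character in the output
exec master..xp_cmdshell 'echo hello ^> world'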

After that, the code compares what has been collected in the temp tables with the final tables and updates the information if needed.

Here is the complete code of the stored procedure:

use [<YourDatabaseName>]
if OBJECT_ID('dbo.bakgen_p_getbakinfo') is not null
            drop procedure dbo.bakgen_p_getbakinfo 
go

CREATE PROCEDURE dbo.bakgen_p_getbakinfo 
AS
/************************************
*   dbi-services SA, Switzerland    *
*   http://www.dbi-services.com        *
*************************************
    Group/Privileges..: DBA
    Script Name......:       bakgen_p_getbakinfo.sql
    Author...........:          Christophe Cosme
    Date.............:           2019-09-20
    Version..........:          SQL Server 2016 / 2017
    Description......:        Get the backup information locally but also on the replica involved

    Input parameters.: 

            Output parameter: 
                                               
    Called by........:         Stored Procedure : [dbo].[bakgen_p_bakexe]
************************************************************************************************
    Historical
    Date        Version    Who    Whats                  Comments
    ----------  -------    ---    --------    -----------------------------------------------------
    2019-09-30  1.0        CHC    Creation
************************************************************************************************/ 
BEGIN 

BEGIN TRY
            
            set nocount on

            declare 
    @ErrorMessage  NVARCHAR(4000), 
    @ErrorSeverity INT, 
    @ErrorState    INT;

            declare @ModuleName sysname,
                                    @ProcName sysname,
                                    @InfoLog nvarchar(max),
                                    @Execute char(1)
                        
            set @ModuleName = 'BakGen'
            set @ProcName = OBJECT_NAME(@@PROCID)
            set @Execute = 'A'

            set @InfoLog = 'Retrieve backup information'
            execute dbo.bakgen_p_log       
                        @ModuleName = @ModuleName,
                        @ProcedureName = @ProcName,
                        @ExecuteMode = @Execute,
                        @LogType = 'INFO',
                        @DatabaseName = null,
                        @Information = @InfoLog,
                        @Script = null


            --###variable to store error message
            declare @errmsg varchar(4000)
            --###variable with the current datetime
            declare @cdt datetime = getdate()

            --###variabler to store the sql and powershell commands to execute
            declare @sqllocalDB nvarchar(4000),
                                    @sqllocalFG nvarchar(4000),
                                    @sqlremoteDB nvarchar(4000),
                                    @sqlremoteFG nvarchar(4000),
                                    @PSCmd nvarchar(4000)

            --###variable to store the local SQL server name
            declare @LocalSqlServerName sysname
            --###variable to store the list of replicas
            declare @TAgReplica table (AgReplicaName sysname)
            --###variable for the cursors
            declare @AgReplicaName sysname

            --###set the local SQL Server name
            set @LocalSqlServerName = lower(convert(sysname,serverproperty('ServerName')))
                        

            --############################################################################
            --### check if tables exist
            --############################################################################
            if object_id('[dbo].[bakgen_backuplastdt_databases_temp]') is null
            begin
                        set @errmsg = 'Get Backup info : table not found'
                        set @errmsg += '          table name = [dbo].[bakgen_backuplastdt_databases_temp]' 
                        raiserror (@errmsg,11,1);
            end
            if object_id('[dbo].[bakgen_backuplastdt_fgreadonly_temp]') is null
            begin
                        set @errmsg = 'Get Backup info : table not found'
                        set @errmsg += '          table name = [dbo].[bakgen_backuplastdt_fgreadonly_temp]' 
                        raiserror (@errmsg,11,1);                      
            end

            if object_id('[dbo].[bakgen_backuplastdt_databases]') is null
            begin
                        set @errmsg = 'Get Backup info : table not found'
                        set @errmsg += '          table name = [dbo].[bakgen_backuplastdt_databases]' 
                        raiserror (@errmsg,11,1);
            end
            if object_id('[dbo].[bakgen_backuplastdt_fgreadonly]') is null
            begin
                        set @errmsg = 'Get Backup info : table not found'
                        set @errmsg += '          table name = [dbo].[bakgen_backuplastdt_fgreadonly]' 
                        raiserror (@errmsg,11,1);                      
            end


            
            --############################################################################
            --### select the replicas involved adding first the local server
            --############################################################################
            insert into @TAgReplica (AgReplicaName ) select @LocalSqlServerName

            --###check if alwayson feature is activated
            if (serverproperty('IsHadrEnabled') = 1)
            begin
                        insert into @TAgReplica (AgReplicaName )
                        select lower(agr.replica_server_name) from sys.availability_replicas agr
                                    where agr.replica_server_name <> @LocalSqlServerName
            end


            --############################################################################
            --### construct the SQL command to execute on the local SQL Server
            --############################################################################
            set @sqllocalDB = ''
            set @sqllocalDB +='

            declare @Tbi table (
                        ServerName sysname,
                        SqlInstanceName sysname,
                        SqlServerName sysname,
                        ServiceBrokerGuid uniqueidentifier,
                        DatabaseCreationDate datetime,
                        DatabaseName sysname,
                        BackupType char(1),
                        LastBackupDate datetime,
                        is_primary bit null,
                        insertdate datetime       
            )


            insert into @Tbi (
                        [ServerName],
                        [SqlInstanceName],
                        [SqlServerName],
                        [ServiceBrokerGuid],
                        [DatabaseCreationDate],
                        [DatabaseName],
                        [BackupType],
                        [LastBackupDate],
                        [is_primary],
                        [insertdate])
            select  
                        lower(convert(sysname,serverproperty(''machinename''))) as ServerName,
                        lower(convert(sysname,serverproperty(''InstanceName''))) as SqlInstanceName,
                        lower(convert(sysname,serverproperty(''ServerName''))) as SqlServerName,
                        db.service_broker_guid as ServiceBrokerGuid,
                        db.create_date as DatabaseCreationDate,
                        bs.database_name as DatabaseName,
                        bs.type as BackupType,
                        max(bs.backup_finish_date) as LastBackupDate,
                        sys.fn_hadr_is_primary_replica(bs.database_name) as is_primary,
                        ''' + convert(varchar,@cdt,120) + '''   
            from msdb.dbo.backupset bs
                        inner join sys.databases db on db.name = bs.database_name
                        where bs.type in (''D'',''I'',''P'',''Q'')
                                    and bs.is_copy_only = 0
                                    and coalesce(sys.fn_hadr_is_primary_replica(bs.database_name),-1) in (-1,0,1)
                        group by
                                    db.service_broker_guid,
                                    db.create_date,
                                    bs.database_name,
                                    bs.type, 
                                    sys.fn_hadr_is_primary_replica(bs.database_name)

            insert into [dbo].[bakgen_backuplastdt_databases_temp] (
                        [ServerName],
                        [SqlInstanceName],
                        [SqlServerName],
                        [ServiceBrokerGuid],
                        [DatabaseCreationDate],
                        [DatabaseName],
                        [BackupType],
                        [LastBackupDate],
                        [LastBackupSize],
                        [is_primary],
                        [insertdate])
            select  
                        t.[ServerName],
                        t.[SqlInstanceName],
                        t.[SqlServerName],
                        t.[ServiceBrokerGuid],
                        t.[DatabaseCreationDate],
                        t.[DatabaseName],
                        t.[BackupType],
                        t.[LastBackupDate],
                        bs.[backup_size],
                        t.[is_primary],
                        t.[insertdate]
            from @Tbi t
                        inner join msdb.dbo.backupset bs on 
                                    bs.backup_finish_date = t.LastBackupDate  
                                    and bs.database_name collate database_default = t.DatabaseName collate database_default
                                    and bs.type collate database_default = t.BackupType collate database_default
'




            set @sqllocalFG = ''
            set @sqllocalFG +='

            insert into [dbo].[bakgen_backuplastdt_fgreadonly_temp]
           ([ServerName],
           [SqlInstanceName],
           [SqlServerName],
                           [ServiceBrokerGuid],
                           [DatabaseCreationDate],
           [DatabaseName],
           [BackupType],
           [filegroup_name],
           [file_logicalname],
           [filegroup_guid],
           [file_guid],
           [LastBackupDate],
                           [LastBackupReadOnlyLsn],
           [is_primary],
                           [insertdate])
            select  
                        lower(convert(sysname,serverproperty(''machinename''))) as ServerName,
                        lower(convert(sysname,serverproperty(''InstanceName''))) as SqlInstanceName,
                        lower(convert(sysname,serverproperty(''ServerName''))) as SqlServerName,
                        db.service_broker_guid as ServiceBrokerGuid,
                        db.create_date as DatabaseCreationDate,
                        bs.database_name as DatabaseName,
                        bs.type as BackupType,
                        bf.filegroup_name,
                        bf.logical_name as file_logicalname,
                        bf.filegroup_guid,
                        bf.file_guid,
                        max(bs.backup_finish_date) as LastBackupDate,
                        max(bf.read_only_lsn) as LastBackupReadOnlyLsn,
                        sys.fn_hadr_is_primary_replica(bs.database_name) as is_primary, 
                        ''' + convert(varchar,@cdt,120) + '''   
            from msdb.dbo.backupset bs
                                    inner join msdb.dbo.backupfile bf on  bf.backup_set_id = bs.backup_set_id
                                    inner join sys.databases db on db.name = bs.database_name 
                        where 
                                    bs.backup_finish_date >= db.create_date 
                                    and bs.type in (''F'')
                                    and bs.is_copy_only = 0
                                    and coalesce(sys.fn_hadr_is_primary_replica(bs.database_name),-1) in (-1,0,1)
                                    and bf.is_present = 1
                                    and bf.is_readonly = 1
                                    and bf.file_type = ''D''
                        group by
                                    db.service_broker_guid,
                                    db.create_date,
                                    bs.database_name, 
                                    bs.type,
                                    bf.filegroup_name,
                                    bf.logical_name, 
                                    bf.filegroup_guid,
                                    bf.file_guid,
                                    sys.fn_hadr_is_primary_replica(bs.database_name)
'


            
            --############################################################################
            --### construct the SQL command to execute on the remote SQL Server
            --############################################################################
            set @sqlremoteDB = ''
            set @sqlremoteDB +='

            declare @Tbi table (
                        ServerName sysname,
                        SqlInstanceName sysname,
                        SqlServerName sysname,
                        ServiceBrokerGuid uniqueidentifier,
                        DatabaseCreationDate datetime, 
                        DatabaseName sysname,
                        BackupType char(1),
                        LastBackupDate datetime,
                        is_primary bit null,
                        insertdate datetime       
            )

            insert into @Tbi (
                        [ServerName],
                        [SqlInstanceName],
                        [SqlServerName],
                        [ServiceBrokerGuid],
                        [DatabaseCreationDate],
                        [DatabaseName],
                        [BackupType],
                        [LastBackupDate],
                        [is_primary],
                        [insertdate])
            select  
                        lower(convert(sysname,serverproperty(''machinename''))) as ServerName,
                        lower(convert(sysname,serverproperty(''InstanceName''))) as SqlInstanceName,
                        lower(convert(sysname,serverproperty(''ServerName''))) as SqlServerName,
                        db.service_broker_guid as ServiceBrokerGuid,
                        db.create_date as DatabaseCreationDate,
                        bs.database_name as DatabaseName,
                        bs.type as BackupType,
                        max(bs.backup_finish_date) as LastBackupDate,
                        sys.fn_hadr_is_primary_replica(bs.database_name) as is_primary, 
                        ''' + convert(varchar,@cdt,120) + '''     
            from msdb.dbo.backupset bs
                        inner join sys.databases db on db.name = bs.database_name 
                        where bs.type in (''D'',''I'',''P'',''Q'')
                                    and bs.is_copy_only = 0
                                    and coalesce(sys.fn_hadr_is_primary_replica(bs.database_name),-1) in (0,1)
                        group by
                                    db.service_broker_guid,
                                    db.create_date,
                                    bs.database_name,
                                    bs.type,
                                    sys.fn_hadr_is_primary_replica(bs.database_name) 

            select  
                        t.[ServerName],
                        t.[SqlInstanceName],
                        t.[SqlServerName],
                        t.[ServiceBrokerGuid],
                        t.[DatabaseCreationDate],
                        t.[DatabaseName],
                        t.[BackupType],
                        t.[LastBackupDate],
                        bs.[backup_size],
                        t.[is_primary],
                        t.[insertdate]
            from @Tbi t
                        inner join msdb.dbo.backupset bs on 
                                    bs.backup_finish_date = t.LastBackupDate 
                                    and bs.database_name collate database_default = t.DatabaseName collate database_default
                                    and bs.type collate database_default = t.BackupType collate database_default

'

            set @sqlremoteFG = ''
            set @sqlremoteFG +='

            select  
                        lower(convert(sysname,serverproperty(''machinename''))) as ServerName,
                        lower(convert(sysname,serverproperty(''InstanceName''))) as SqlInstanceName,
                        lower(convert(sysname,serverproperty(''ServerName''))) as SqlServerName,
                        db.service_broker_guid as ServiceBrokerGuid,
                        db.create_date as DatabaseCreationDate,
                        bs.database_name as DatabaseName,
                        bs.type as BackupType,
                        bf.filegroup_name,
                        bf.logical_name as file_logicalname,
                        bf.filegroup_guid,
                        bf.file_guid,
                        max(bs.backup_finish_date) as LastBackupDate,
                        max(bf.read_only_lsn) as LastReadOnlyLsn,
                        sys.fn_hadr_is_primary_replica(bs.database_name) as is_primary, 
                        ''' + convert(varchar,@cdt,120) + '''   
            from msdb.dbo.backupset bs
                                    inner join msdb.dbo.backupfile bf on  bf.backup_set_id = bs.backup_set_id
                                    inner join sys.databases db on db.name = bs.database_name 
                        where 
                                    bs.backup_finish_date >= db.create_date 
                                    and bs.type in (''F'')
                                    and bs.is_copy_only = 0
                                    and coalesce(sys.fn_hadr_is_primary_replica(bs.database_name),-1) in (0,1)
                                    and bf.is_present = 1
                                    and bf.is_readonly = 1
                                    and bf.file_type = ''D''
                        group by
                                    db.service_broker_guid,
                                    db.create_date, 
                                    bs.database_name, 
                                    bs.type,
                                    bf.filegroup_name,
                                    bf.logical_name, 
                                    bf.filegroup_guid,
                                    bf.file_guid,
                                    sys.fn_hadr_is_primary_replica(bs.database_name) 
'

            --############################################################################
            --### delete all records in the backup info tables
            --############################################################################
            delete from [dbo].[bakgen_backuplastdt_databases_temp]
            delete from [dbo].[bakgen_backuplastdt_fgreadonly_temp]

            --############################################################################
            --### loop for all replicas involved
            --############################################################################
            declare cur_replica cursor
            static local forward_only
            for 
                        select AgReplicaName
                        from @TAgReplica
                 
            open cur_replica
            fetch next from cur_replica into 
                        @AgReplicaName                    


            while @@fetch_status = 0
            begin 
                                    
                        if @LocalSqlServerName = @AgReplicaName
                        begin 

                                    set @InfoLog = 'Get database backup information on local SQL Server instance ' + QUOTENAME(@AgReplicaName)
                                    execute dbo.bakgen_p_log       
                                               @ModuleName = @ModuleName,
                                               @ProcedureName = @ProcName,
                                                @ExecuteMode = @Execute,
                                               @LogType = 'INFO',
                                               @DatabaseName = null,
                                               @Information = @InfoLog,
                                               @Script = @sqllocalDB
                                    execute sp_executesql @sqllocalDB

                                    set @InfoLog = 'Get read-only filegroup backup information on local SQL Server instance ' + QUOTENAME(@AgReplicaName)
                                    execute dbo.bakgen_p_log       
                                               @ModuleName = @ModuleName,
                                               @ProcedureName = @ProcName,
                                               @ExecuteMode = @Execute,
                                               @LogType = 'INFO',
                                               @DatabaseName = null,
                                               @Information = @InfoLog,
                                               @Script = @sqllocalFG
                                    execute sp_executesql @sqllocalFG

                        end 
                        else
                        begin
                                    --############################################################################
                                    --### construct the PowerShell command to execute on the remote SQL Server
                                    --############################################################################
                                    set @PSCmd  = ''
                                    set @PSCmd += 'PowerShell.exe '
                                    set @PSCmd += '-Command "'
                                    set @PSCmd += '$qrydb = \"' + @sqlremoteDB + '\"; ' 
                                    set @PSCmd += '$qryfg = \"' + @sqlremoteFG + '\"; ' 
                                    set @PSCmd += '$rdb = Invoke-DbaQuery -SqlInstance ' + @AgReplicaName + ' -Query $qrydb; '
                                    set @PSCmd += '$rfg = Invoke-DbaQuery -SqlInstance ' + @AgReplicaName + ' -Query $qryfg; '
                                    set @PSCmd += 'if ($rdb -ne $null) { '
                                    set @PSCmd += 'Write-DbaDbTableData -SqlInstance ' + @LocalSqlServerName + ' -Database ' + db_name() + ' -Schema dbo -Table bakgen_backuplastdt_databases_temp -InputObject $rdb;'
                                    set @PSCmd += '} '
                                    set @PSCmd += 'if ($rfg -ne $null) { '
                                    set @PSCmd += 'Write-DbaDbTableData -SqlInstance ' + @LocalSqlServerName + ' -Database ' + db_name() + ' -Schema dbo -Table bakgen_backuplastdt_fgreadonly_temp -InputObject $rfg;'
                                    set @PSCmd += '} '
                                    set @PSCmd += '"'
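
                                    --### note: the \" sequences escape the embedded double quotes so they
                                    --### survive cmd.exe parsing before the queries reach PowerShell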

                                    set @InfoLog = 'Get backup information on replica SQL Server instance ' + QUOTENAME(@AgReplicaName) + ' executing master..xp_cmdshell PowerShell script'
                                    execute dbo.bakgen_p_log       
                                               @ModuleName = @ModuleName,
                                               @ProcedureName = @ProcName,
                                               @ExecuteMode = @Execute,
                                               @LogType = 'INFO',
                                               @DatabaseName = null,
                                               @Information = @InfoLog,
                                               @Script = @PSCmd

                                    --###remove CRLF for xp_cmdshell and PowerShell 
                                    set @PSCmd = replace(replace(@PSCmd, nchar(13), N''), nchar(10), N' ')
                                    set @PSCmd = replace(@PSCmd, '>', N'^>')
                                    --###Execute the powershell command on the replica and store the result in the temporary tables
                                    exec master..xp_cmdshell @PSCmd
                        end
                        
                        fetch next from cur_replica into 
                                    @AgReplicaName                    


            end
            close cur_replica
            deallocate cur_replica


            --############################################################################
            --### Update and insert backup information in final tables
            --############################################################################

            --###Update first the database creation date with the local ones
            Update t
                        set t.DatabaseCreationDate = db.create_date
            from [dbo].[bakgen_backuplastdt_databases_temp] t
                        inner join sys.databases db 
                                    on db.name collate database_default = t.DatabaseName collate database_default 
                                               and db.service_broker_guid = t.ServiceBrokerGuid

            Update t
                        set t.DatabaseCreationDate = db.create_date
            from [dbo].[bakgen_backuplastdt_fgreadonly_temp] t
                        inner join sys.databases db 
                                    on db.name collate database_default = t.DatabaseName collate database_default 
                                               and db.service_broker_guid = t.ServiceBrokerGuid




            BEGIN TRY

                        begin transaction 

                        delete f
                                    from [dbo].[bakgen_backuplastdt_databases_temp] t
                                               inner join [dbo].[bakgen_backuplastdt_databases] f 
                                                           on f.DatabaseCreationDate = t.DatabaseCreationDate
                                                                       and f.DatabaseName = t.DatabaseName 
                                                                       and f.BackupType = t.BackupType 
                                                                       and f.ServerName = t.ServerName 
                                                                       and t.SqlInstanceName = f.SqlInstanceName
                                               where f.LastBackupDate < t.LastBackupDate

                        Insert into [dbo].[bakgen_backuplastdt_databases] (
                                    ServerName,
                                    SqlInstanceName,
                                    SqlServerName,
                                    DatabaseCreationDate,
                                    DatabaseName,
                                    BackupType,
                                    LastBackupDate,
                                    LastBackupSize,
                                    is_primary,
                                    insertdate 
                        )
                        select 
                                    t.ServerName,
                                    t.SqlInstanceName,
                                    t.SqlServerName,
                                    t.DatabaseCreationDate,
                                    t.DatabaseName,
                                    t.BackupType,
                                    t.LastBackupDate,
                                    t.LastBackupSize,
                                    t.is_primary,
                                    t.insertdate 
                                    from [dbo].[bakgen_backuplastdt_databases_temp] t
                                               where not exists (select 1 from [dbo].[bakgen_backuplastdt_databases] f 
                                                                                                                      where f.DatabaseName = t.DatabaseName 
                                                                                                                                  and f.BackupType = t.BackupType 
                                                                                                                                  and f.ServerName = t.ServerName 
                                                                                                                                  and t.SqlInstanceName = f.SqlInstanceName)
                                    
                        
                        commit

                        begin transaction

                        delete f
                                    from [dbo].[bakgen_backuplastdt_fgreadonly_temp] t
                                               inner join [dbo].[bakgen_backuplastdt_fgreadonly] f 
                                                           on f.DatabaseName = t.DatabaseName 
                                                                       and f.BackupType = t.BackupType 
                                                                       and f.filegroup_name = t.filegroup_name
                                                                       and f.ServerName = t.ServerName 
                                                                       and f.SqlInstanceName = t.SqlInstanceName
                                               where f.LastBackupDate < t.LastBackupDate


                        Insert into [dbo].[bakgen_backuplastdt_fgreadonly] (
                                    ServerName,     
                                    SqlInstanceName,
                                    SqlServerName,           
                                    DatabaseCreationDate,
                                    DatabaseName,            
                                    BackupType,
                                    filegroup_name,
                                    file_logicalname,          
                                    filegroup_guid, 
                                    file_guid,          
                                    LastBackupDate,          
                                    LastBackupReadOnlyLsn,
                                    is_primary,
                                    insertdate                     
                        )
                        select 
                                    t.ServerName,   
                                    t.SqlInstanceName,
                                    t.SqlServerName,
                                    t.DatabaseCreationDate,
                                    t.DatabaseName,          
                                    t.BackupType,
                                    t.filegroup_name,
                                    t.file_logicalname,        
                                    t.filegroup_guid,           
                                    t.file_guid,        
                                    t.LastBackupDate,        
                                    t.LastBackupReadOnlyLsn,
                                    t.is_primary,
                                    t.insertdate                   
                        from [dbo].[bakgen_backuplastdt_fgreadonly_temp] t                                        
                                    where not exists (
                                               select 1 from  [dbo].[bakgen_backuplastdt_fgreadonly] f 
                                               where f.DatabaseName = t.DatabaseName 
                                                                       and f.BackupType = t.BackupType 
                                                                       and f.filegroup_name = t.filegroup_name
                                                                       and f.ServerName = t.ServerName 
                                                                       and t.SqlInstanceName = f.SqlInstanceName)

                        
                        commit
            END TRY
            BEGIN CATCH
                SELECT 
                                    @ErrorMessage = ERROR_MESSAGE(), 
                                    @ErrorSeverity = ERROR_SEVERITY(), 
                                    @ErrorState = ERROR_STATE();

                        IF @@TRANCOUNT > 0
                                    ROLLBACK
                        
                        raiserror(@ErrorMessage, @ErrorSeverity, @ErrorState);

            END CATCH



RETURN;

END TRY
BEGIN CATCH
    SELECT 
        @ErrorMessage = ERROR_MESSAGE(), 
        @ErrorSeverity = ERROR_SEVERITY(), 
        @ErrorState = ERROR_STATE();

            set @InfoLog = '@ErrorState = ' + convert(nvarchar, @ErrorState) + '/@ErrorSeverity = ' + convert(nvarchar, @ErrorSeverity) + '/@ErrorMessage = ' + @ErrorMessage
            execute dbo.bakgen_p_log       
                        @ModuleName = @ModuleName,
                        @ProcedureName = @ProcName,
                        @ExecuteMode = @Execute,
                        @LogType = 'ERROR',
                        @DatabaseName = null,
                        @Information = @InfoLog,
                        @Script = null

    raiserror(@ErrorMessage, @ErrorSeverity, @ErrorState);
END CATCH;

RETURN
END
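
To make the remote-collection mechanics easier to follow outside of the full procedure, here is a minimal stand-alone sketch of the pattern: a query is executed on a replica with Invoke-DbaQuery and the result is bulk-inserted locally with Write-DbaDbTableData, all triggered from T-SQL through xp_cmdshell. The names SRV-LOCAL, SRV-REPLICA, MyDb and dbo.t_result are placeholders, not objects from the procedure above, and the target table is assumed to exist:

--### minimal sketch of the remote collection pattern (hypothetical names)
declare @cmd varchar(4000)
set @cmd = 'PowerShell.exe -Command "'
         + '$r = Invoke-DbaQuery -SqlInstance SRV-REPLICA -Query \"select db_name() as n\"; '
         + 'if ($r -ne $null) { Write-DbaDbTableData -SqlInstance SRV-LOCAL -Database MyDb -Schema dbo -Table t_result -InputObject $r }'
         + '"'
--### keep everything on one line: xp_cmdshell hands the string to cmd.exe
exec master..xp_cmdshell @cmd
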
Other Objects needed

As mentioned above, I used the dbatools Write-DbaDbTableData function, so you need to install the dbatools PowerShell module before you can run the above stored procedure.
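
If the prerequisites are missing on the server, a minimal setup sketch could look like the following; note that enabling xp_cmdshell is a security-relevant decision, and the Install-Module options shown are just one possible choice (internet access and administrative rights are assumed):

--### enable xp_cmdshell, used by the procedure to trigger PowerShell
exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'xp_cmdshell', 1;
reconfigure;

--### install the dbatools module (requires internet access on the server)
exec master..xp_cmdshell 'PowerShell.exe -Command "Install-Module -Name dbatools -Scope AllUsers -Force"'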

I am also sharing the two other objects used in the above stored procedure; of course, you can adapt the code to your needs.

Creation of the log table:

--########################################################
--###Backup generator - logs
--########################################################

USE [<YourDatabaseName>]
GO
/*
if OBJECT_ID('[dbo].[bakgen_logs]') is not null
	drop table [dbo].[bakgen_logs]
*/
create table [dbo].[bakgen_logs] (
	id bigint identity(1,1) not null,
	LogDate datetime,
	SqlServerName sysname,
	ModuleName sysname,
	ProcedureName sysname,
	ExecuteMode char(1),
	LogType nvarchar(50),
	DatabaseName sysname null,
	Information nvarchar(max) null,
	Scripts nvarchar(max) null,
CONSTRAINT [PK_bakgen_logs] PRIMARY KEY CLUSTERED 
(
	[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
)
GO

Creation of the stored procedure to write the logs:

use [<YourDatabaseName>]
if OBJECT_ID('dbo.bakgen_p_log') is not null
	drop procedure dbo.bakgen_p_log 
go

CREATE PROCEDURE dbo.bakgen_p_log 
(
	@ModuleName sysname,
	@ProcedureName sysname,
	@ExecuteMode char(1),
	@LogType nvarchar(50),
	@DatabaseName sysname = null,
	@Information nvarchar(max) =  null,
	@Script nvarchar(max)  = null
)

AS
/************************************
*   dbi-services SA, Switzerland    *
*   http://www.dbi-services.com        *
*************************************
    Group/Privileges..: DBA
    Script Name......:	bakgen_p_log.sql
    Author...........:	Christophe Cosme
    Date.............:	2019-09-20
    Version..........:	SQL Server 2016 / 2017
    Description......:	write information to the log table to keep track of the steps executed

    Input parameters.: 

	Output parameter: 
				
************************************************************************************************
    Historical
    Date        Version    Who    Whats		Comments
    ----------  -------    ---    --------	-----------------------------------------------------
    2019-10-14  1.0        CHC    Creation
************************************************************************************************/ 
BEGIN 

BEGIN TRY
	
	--###variable to store error message
	declare @errmsg varchar(4000)

	if OBJECT_ID('[dbo].[bakgen_logs]') is null
	begin
		set @errmsg = 'bakgen_p_log : table not found - be sure the table exists'
		set @errmsg += '	table name = [dbo].[bakgen_logs]' 
		raiserror (@errmsg,11,1);
	end		

	insert into [dbo].[bakgen_logs] (
		LogDate,
		SqlServerName,
		ModuleName,
		ProcedureName,
		ExecuteMode,
		LogType,
		DatabaseName,
		Information,
		Scripts
		)
	values(
		getdate(),
		convert(sysname,SERVERPROPERTY('servername')),
		@ModuleName,
		@ProcedureName,
		@ExecuteMode,
		@LogType,
		@DatabaseName,
		@Information,
		@Script
		)


RETURN;

END TRY
BEGIN CATCH
	declare 
    @ErrorMessage  NVARCHAR(4000), 
    @ErrorSeverity INT, 
    @ErrorState    INT;
    SELECT 
        @ErrorMessage = ERROR_MESSAGE(), 
        @ErrorSeverity = ERROR_SEVERITY(), 
        @ErrorState = ERROR_STATE();
 
    -- return the error inside the CATCH block
    raiserror(@ErrorMessage, @ErrorSeverity, @ErrorState);
END CATCH;

RETURN
END
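
A call to the logging procedure is then straightforward; here is a sample invocation (the module name, procedure name and message values are purely illustrative):

--### sample call of the logging procedure, parameter values are illustrative
execute dbo.bakgen_p_log
	@ModuleName = 'BACKUP',
	@ProcedureName = 'bakgen_p_test',
	@ExecuteMode = 'R',
	@LogType = 'INFO',
	@DatabaseName = null,
	@Information = 'Test log entry',
	@Script = null
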
Conclusion

Triggering PowerShell from a stored procedure did the trick for my special case and is very practical. But finding the right syntax to make the script run through xp_cmdshell was not trivial, and I admit I spent some time figuring out what was causing the issues.
But I definitely enjoyed the solution for retrieving information from outside the local SQL Server instance.

The article SQL Server – Collecting last backup information in an AlwaysOn environment appeared first on Blog dbi services.

Design Thinking in IT Development life cycle

OracleApps Epicenter - Thu, 2019-12-12 04:07
Here is a question from a reader: does integrating design thinking into the day-to-day operations of IT and Service Management help? Are there any frameworks or mature models of design thinking embedded into the development cycle? Design thinking at its very core is about understanding your customer’s needs better than they can ever articulate them […]
Categories: APPS Blogs
