Feed aggregator

Is there a risk in continuing to develop using plsql web toolkit?

Tom Kyte - Mon, 2017-09-25 09:26
Hi Tom, I hope you can help or point me in the right direction. I have been asked to evaluate an in-house developed system that is written entirely in plsql/web toolkit using mod_plsql. This is a heavily used service, sometimes with very high conc...
Categories: DBA Blogs

String buffer into Oracle (utl, xml)

Tom Kyte - Mon, 2017-09-25 09:26
<code>I have procedure to create xml file and save it on disc directory: create or replace procedure test_write_xml_data_to_file (p_directory varchar2, p_file_name varchar2) as v_file UTL_FILE.FILE_TYPE; v_amount INTEGER := 32767; v_xml_dat...
Categories: DBA Blogs

Error while inserting data through SQL Loader

Tom Kyte - Mon, 2017-09-25 09:26
Hi All, We are using SQL Loader in our .NET application to insert data into an Oracle database. We are inserting through a bulk insert process. We have another stored procedure in Oracle which deletes data from the same table where insertion happe...
Categories: DBA Blogs

Using Interval Datatype

Tom Kyte - Mon, 2017-09-25 09:26
I have the following query where TS_END and TS_START are DATE columns. I used to run this query to give me the average response time in seconds: SELECT COUNT(*) system_process_count, TRUNC(AVG(lgrt.ts_end - lgrt.ts_start)*24*60*60, 1) FRO...
Categories: DBA Blogs

Chartis Names Oracle Category Leader for Financial Crime Risk Management

Oracle Press Releases - Mon, 2017-09-25 08:00
Press Release
Chartis Names Oracle Category Leader for Financial Crime Risk Management
Oracle is the only company identified by Chartis as a category leader across all five proficiencies, including anti-money laundering

Redwood Shores, Calif.—Sep 25, 2017

Oracle Financial Crime and Compliance Management (FCCM) has been named a category leader for solutions in Anti-Money Laundering (AML), Enterprise Fraud, Know Your Customer (KYC), Watchlist Monitoring and Trader Surveillance by Chartis Research. Oracle is the only company to be identified by Chartis as a category leader in all five financial crime risk management categories.
 
“Our placement on the Chartis Research report reaffirms our status as a best-in-class provider with an unmatched array of services, and reflects the meticulous attention we pay to our FCCM solutions,” said Oracle Financial Services Analytical Applications Group Vice President and General Manager Ambreesh Khanna. “We are at a time of renewed focus on anti-money laundering precautions and we take great pride in assuaging customers’ concerns through exemplary offerings. Oracle offers an unparalleled, comprehensive solution set for financial crime risk management systems and we look forward to further strengthening this toolset.”
 
“I’m extremely pleased that our intense focus on improving detection effectiveness and the efficiency of case investigations is being validated in the industry,” said Oracle Financial Crime and Compliance Management Vice President Sunil Mathew. “There is a great new interest in applying cutting-edge detection technologies such as Machine Learning and Graph Analytics across all of these risk areas and having the strength of Oracle’s technology R&D behind our applications team has helped differentiate our approach in the marketplace."
 
The Chartis Financial Crime Risk Management Systems Market Update 2017 analyzes an environment that faces a significant level of disruption from new vendors, many of which are exploiting new technologies. The Chartis methodology takes into account the full breadth of risk management tools truly needed to run an enterprise-wide solution, taking a holistic view of the technology, tools and techniques used.
 
The report also features the Chartis RiskTech Quadrant™ to better illustrate the vendor landscape. This quadrant utilizes a comprehensive methodology formed through in-depth independent research and a scoring system that explains which technology solutions best meet an organization’s needs. It provides an exhaustive and in-depth look at the best AML, fraud technology, KYC, watchlist monitoring and trader surveillance solutions available in today’s marketplace, with Oracle ranking as a category leader in every section.
 
“Oracle demonstrates strong, across-the-board leadership, emerging as a category leader in all five risk categories we investigated,” said Sid Dash, Research Director at Chartis Research. “They demonstrated a strong and well-focused set of product functionality and technological capability, along with impressive breadth and depth of services and delivery capacity, backed up by a strong support infrastructure.”
 
Download an executive summary of the Chartis Financial Crime Risk Management Systems Market Update 2017 Report here.
Contact Information
Alex Moriconi
Oracle
+1-650-607-6598
 
About Oracle
The Oracle Cloud delivers hundreds of SaaS applications and enterprise-class PaaS and IaaS services to customers in more than 195 countries and territories while processing 55 billion transactions a day. For more information about Oracle (NYSE:ORCL), please visit us at http://cloud.oracle.com.
 
Trademarks
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
 
About Chartis
Chartis is the leading provider of research and analysis on the global market for risk technology, and is part of InfoPro Digital. Chartis's goal is to support enterprises as they drive business performance through better risk management, corporate governance and compliance, and to help clients make informed technology and business decisions by providing in-depth analysis and actionable advice on virtually all aspects of risk technology.
 
RiskTech Quadrant®, RiskTech100® and FinTech Quadrant™ are registered trademarks of Chartis Research (http://www.chartis-research.com).

Solaris : How to Change/Rename Guest Domain Name (LDOM Name)

Online Apps DBA - Mon, 2017-09-25 02:46

Steps: 1. Save the current guest domain configuration into an XML file. 2. Shut down and unbind the guest LDOM. 3. Copy the XML file as the new LDOM XML file and edit the file to replace the old name with the new name. 4. Destroy the old guest LDOM. 5. Create the new guest LDOM using the edited XML file. 6. Bind and […]
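
The steps above correspond to the standard Logical Domains Manager commands; a rough sketch, assuming hypothetical domain names olddom and newdom:

# Save the current guest domain configuration into an XML file
ldm list-constraints -x olddom > /var/tmp/olddom.xml

# Shut down and unbind the guest LDOM
ldm stop-domain olddom
ldm unbind-domain olddom

# Copy the XML file as the new LDOM XML file, replacing the old name with the new one
sed 's/olddom/newdom/g' /var/tmp/olddom.xml > /var/tmp/newdom.xml

# Destroy the old guest LDOM and create the new one from the edited XML
ldm remove-domain olddom   # alias: ldm destroy
ldm add-domain -i /var/tmp/newdom.xml

# Bind and start the new guest LDOM
ldm bind-domain newdom
ldm start-domain newdom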

The post Solaris : How to Change/Rename Guest Domain Name (LDOM Name) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Oracle Cloud Announcement: More Automation. Lower Prices.

This week Larry Ellison, Executive Chairman and CTO of Oracle, announced two new programs designed to increase flexibility and automation in the way customers can purchase and consume their Oracle...

We share our skills to maximize your revenue!
Categories: DBA Blogs

How to create Teradata Server Dashboard

Nilesh Jethwa - Sun, 2017-09-24 15:10

Teradata delivers better business outcomes through technology-enabled solutions in the areas that matter most: from operational excellence and asset optimization, to customer experience and product innovation, to finance transformation and risk mitigation. It works with leading businesses in over 75 countries worldwide, including many of the top performers and best-known brands in telecom, transportation, consumer packaged goods, financial services and manufacturing.

Are you using Teradata Server for your data marts or data warehouse? If so, build your free Teradata Server dashboard with InfoCaptor dashboard software.

Read more at http://www.infocaptor.com/ice-database-connect-dashboard-to-teradata-sql

how the SQL query is processed

Tom Kyte - Sun, 2017-09-24 15:06
How the SQL query is processed in detail
Categories: DBA Blogs

Range constraint

Tom Kyte - Sun, 2017-09-24 15:06
Hi! First I want to thank you for the many responses that were useful to me. I need a constraint or trigger that checks whether the row inserted or updated contains a range of numbers that is already contained in another row of the same table. ...
Categories: DBA Blogs

oracle plsql data migration

Tom Kyte - Sun, 2017-09-24 15:06
Hi Tom, I need to write a PL/SQL program. Suppose there are 10 source tables with different columns in each table, and 1 target table whose columns combine all the source columns. I need to write a procedure which extracts the data from...
Categories: DBA Blogs

temp undo sizing

Tom Kyte - Sun, 2017-09-24 15:06
Hi team, I have one question: suppose I have a 1 TB database size, what is our recommendation in terms of sizing of database memory, the temporary tablespace and the undo tablespace? Thanks
Categories: DBA Blogs

Convert sys_guid to number

Tom Kyte - Sun, 2017-09-24 15:06
Hi Tom, I read some articles about to_number(sys_guid()), but you suggest not to use it. Why does APEX QuickSQL suggest to_number(sys_guid()) for a unique object ID? Example: create or replace trigger country_biu before insert or update ...
Categories: DBA Blogs

Am I a DBA 3.0 or just an SQL*DBA?

Yann Neuhaus - Sun, 2017-09-24 13:47

There are currently a lot of new buzz words and re-namings which suggest that our DBA role is changing, most of them escorted with a #cloud hashtag. Oracle Technology Network is now called Oracle Developer Community. Larry Ellison announced the database that does not need to be operated by humans. And people are talking about the death of the DBA, about the future of the DBA, about DBA 3.0, …

Those are all generalizations and universal statements, and I don’t like generalizations. There is not only one type of DBA role. The DBA role in big US companies (where most of those claims come from) is very different from the DBA role in European medium companies (where I do most of my job). The only thing I like about generalizations is that a simple counterexample is sufficient to prove that the universal statement is wrong. And I’ll take the example I know best: mine.

What do you call the DBA role? Is it only server management, or database management? Are you a Dev DBA or an Ops DBA? And when you go multitenant, will you become a CDB DBA or a PDB DBA? And on Engineered Systems, are you still a Database Administrator or a Database Machine Administrator?

So here is my experience. Let’s flashback to sysdate-8000.
My first job with an Oracle Database was in 1994. I was working at Alcatel in Paris. The IT was typical of big companies at that time: all IBM, databases were DB2 on mainframe, and applications were developed through a several-years waterfall cycle from specifications to production. But I was not working there. I was in the accounting department, which had its own little IT with some small Borland Paradox applications. This IT department was one senior person, from whom I learned everything, and me, a junior developer doing some Borland Paradox things. When the need for a new application came, the idea was to build a prototype in this ‘rebel’ IT service rather than wait for the whole development cycle managed by the official IT department.

We asked SUN to lend us a workstation. We asked Oracle to lend us an Oracle 7.1 database. That was not difficult. Both of them were happy to try to get into this all-IBM company by another way. And Borland had a new product, Delphi, so this is where I started to build the prototype. I had everything to learn there: I had never installed a Unix system, I had never installed a database, I had never even configured a TCP/IP network. But with the books (no internet then) and the help of my manager (did I say I owe him everything?) we got the environment ready within one month.

Today we are talking about Cloud PaaS as the only way to quickly get an environment to start a new development project. The marketing explains that you can get the environment with a few clicks and have it operational one hour later. But in real life, I know several projects where the environment is not ready after one month, for different reasons (time to choose which service, evaluate the cost, set it up). Remember that in my case, 23 years ago, it took one or two months, but I had a full server, enough storage, available 24/7, with the workstation on my desk. And all that for free.

So I started to develop the application. The business users were in the next room. A short meeting, a small doc, and the conception of the first prototype was ready. Development with Delphi was really quick (remember RAD – Rapid Application Development?) and as soon as I had a usable prototype, one user had access to it, giving me feedback for future evolutions. We built something very clever: easy to evolve, fit to business needs, with good performance, and easy to deploy. It was replaced years later by an application provided by the IT department, but our application was used as the specification.

So, what was my role here? I was clearly a developer and not a DBA. But I was already designing a system, installing a server, creating a database, modeling the data and analyzing performance. When interacting with the database, I was just a DBA doing some SQL. If I wanted to invent new words myself, I would call that an SQL*DBA, like the name of the tool that was replaced at that time by svrmgrl for the DBA stuff related to server management. All that was later consolidated into the same tools: sqlplus does the DBA and developer stuff, Enterprise Manager does both, SQL Developer does both…

Over the following 20 years, I evolved into a DBA. I learned new technologies each time, but I don’t think my DBA role has changed. I’ve done BI (called ‘infocenter’ at that time and ‘data warehouse’ later) with Business Objects and DB2 and Data Propagator (yes, logical replication, and that was in 1996). All this was designed in cooperation with the business end-users. In 2001 I managed terabyte databases full of numbers, doing what we call Big Data today. All maintenance tasks (capacity planning, upgrade, recovery) were done very close to the business utilization of this data. I administered telecom databases at a time when mobile phone providers came up with a new idea every month to be implemented. We would call that ‘agile’ today. I set up databases for quick cloning with transportable tablespaces. I set up continuous integration tests of databases based on flashback technologies and Workspace Manager. I configured parallel query and XMLDB to do analytics on unstructured data, with better results than a MapReduce PoC. Technology evolves, names are modernized, but my DBA role is the same and I still use SQL.

The latest I’ve read about this changing role is the Penny Avril interview, and it is a very good explanation of all those announced changes. I totally agree with all of it. Except that I see no change in my role there. Let’s take this: DBAs are being asked to understand what businesses do with data rather than just the mechanics of keeping the database healthy and running.
I have always known which business users and which business cases interact with the database. My resume has always mentioned the business area of each project. I have always interacted with end-users and developers for any database I administered, whether they are DEV, TEST or PROD databases. You cannot set up a backup/recovery plan without knowing the data and the business requirements. You cannot upgrade or migrate without interacting with users to test and validate the migration. You cannot address performance without interacting with users and developers. You cannot size the SGA without knowing how the data is used at the different times of the day or the week. You cannot keep a database healthy without knowing how it is used. You cannot build an infrastructure architecture without the support of the business for the software costs.

My DBA job is not mechanics to keep processes running on a server. That would be a database system administrator. But we are talking about a DataBase Administrator. The major part of my job, and the main reason why I like it, is the human interaction around the database. You talk to server/storage/network administrators, you talk to all kinds of business users, you talk to developers, you talk to managers, you talk to vendors… You have to understand OS schedulers, network security and mutexes, and you also have to understand banking, retail and hospital data workflows. So I don’t worry about the self-driving/no-human-labor part of the DBA role that may be moved to the cloud provider. Those are boring things that we automated a long time ago. For example, at dbi services we have included all this automation in the DMK. And this goes further for open source databases with the OpenDB Appliance. Do you think a consulting company would provide this for free to its customers if this automation took all the DBA work away? The boring and recurring things are automated to avoid errors, and all the intelligent stuff is provided by experienced human DBAs talking SQL with the system. As always.

 

The article Am I a DBA 3.0 or just an SQL*DBA? appeared first on Blog dbi services.

Oracle SOA Suite and WebLogic: Overview of key and keystore configuration

Amis Blog - Sun, 2017-09-24 09:31

Keystores and the keys within can be used for security on the transport layer and application layer in Oracle SOA Suite and WebLogic Server. Keystores hold private keys (identity) but also public certificates (trust). This is important when WebLogic / SOA Suite acts as the server but also when it acts as the client. In this blog post I’ll explain the purpose of keystores, the different keystore types available and which configuration is relevant for which keystore purpose.

Why use keys and keystores?

The image below (from here) illustrates the TCP/IP model and how the different layers map to the OSI model. When, in the elaboration below, I talk about the application and transport layers, I mean the TCP/IP model layers, and more specifically HTTP.

The two main reasons why you might want to employ keystores are that

  • you want to enable security measures on the transport layer
  • you want to enable security measures on the application layer

Almost all of the methods/techniques mentioned below require the use of keys, and you can imagine that the correct configuration of these keys within SOA Suite and WebLogic Server is very important. They determine which clients can be trusted, how services can be called and also how outgoing calls identify themselves.

You could think transport layer and application layer security are two completely separate things. Often they are not that separated, though. The combination of transport layer and application layer security has some limitations, and often the same products/components are used to configure both.

  • Double encryption is not allowed. See here. ‘U.S. government regulations prohibit double encryption’. Thus you are not allowed to do encryption on the transport layer and the application layer at the same time. This does not mean you cannot do it, but you might encounter some product restrictions since, you know, Oracle is a U.S. company.
  • Oracle Web Services Manager (OWSM) allows you to configure policies that check whether transport layer security (HTTPS in this case) is used, and it is also used to configure application level security. It is common for a single product to perform both transport layer and application layer security; API gateway products such as Oracle API Platform Cloud Service, for example, do the same.
Transport layer (TLS)

Cryptography is achieved by using keys from keystores. On the transport layer you can achieve authentication, integrity, confidentiality and reliability for the connection as a whole.

You can read more on TLS in SOA Suite here.

Application layer

On the application level you can achieve similar feats (authentication, integrity, confidentiality, reliability), however often more fine-grained, such as on user level or on a specific part of a message instead of on host level or for the entire connection. Performance is usually not as good as with transport layer security, because the checks which need to be performed can require actual parsing of messages instead of securing the transport (HTTP) connection as a whole regardless of what passes through. The implementation depends on the application technologies used and is thus quite variable.

  • Authentication by using security tokens, for example:
    • SAML. SAML tokens can be used in WS-Security headers for SOAP and in plain HTTP headers for REST.
    • JSON Web Tokens (JWT) and OAuth are also examples of security tokens
    • Certificate tokens in different flavors can be used which directly use a key in the request to authenticate.
    • Digest authentication can also be considered. Using digest authentication, a username-password token is created which is sent using WS-Security headers.
  • Security and reliability by using message protection. Message protection consists of measures to achieve message confidentiality and integrity. This can be achieved by
    • signing. XML Signature can be used for SOAP messages and is part of the WS-Security standard. Signing can be used to achieve message integrity.
    • encrypting. Encrypting can be used to achieve confidentiality.
Types of keystores

There are two types of keystores in use in WebLogic Server / OPSS: JKS keystores and KSS keystores. The main differences are summarized in the sections below.

JKS

JKS keystores are Java keystores which are saved on the filesystem. JKS keystores can be edited by using the keytool command, which is part of the JDK. There is no direct support for editing JKS keystores from WLST, the WebLogic Console or Fusion Middleware Control. You can use WLST, however, to configure which JKS file to use. For example, see here:

connect('weblogic','Welcome01','t3://localhost:7001') 
edit()
startEdit()
cd ('Servers/myserver/ServerMBean/myserver')

cmo.setKeyStores('CustomIdentityAndCustomTrust')
# identity keystore (holds the server's private key)
cmo.setCustomIdentityKeyStoreFileName('/path/keystores/Identity.jks')
cmo.setCustomIdentityKeyStorePassPhrase('passphrase')
cmo.setCustomIdentityKeyStoreType('JKS')
# trust keystore (holds trusted certificates)
cmo.setCustomTrustKeyStoreFileName('/path/keystores/Trust.jks')
cmo.setCustomTrustKeyStorePassPhrase('passphrase')
cmo.setCustomTrustKeyStoreType('JKS')

save()
activate()
disconnect()
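
The JKS files themselves are created and inspected with the JDK keytool; a minimal sketch, where the alias, paths, distinguished name and passwords are placeholder values:

# Create an identity keystore containing a (self-signed, for testing) key pair
keytool -genkeypair -alias myalias -keyalg RSA -keysize 2048 -validity 365 \
  -dname "CN=myhost.example.com,OU=IT,O=MyOrg,C=NL" \
  -keystore /path/keystores/Identity.jks -storepass passphrase -keypass keypassphrase

# Import a trusted CA certificate into the trust keystore
keytool -importcert -alias myca -file /path/ca.crt \
  -keystore /path/keystores/Trust.jks -storepass passphrase -noprompt

# Inspect the contents of a keystore
keytool -list -v -keystore /path/keystores/Identity.jks -storepass passphrase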

Keys in JKS keystores can have passwords, as can the keystores themselves. If you use JKS keystores in OWSM policies, you are required to configure the key passwords in the credential store framework (CSF). These can be put in the map oracle.wsm.security under the keys keystore-csf-key, enc-csf-key and sign-csf-key. Read more here. In a clustered environment you should make sure all the nodes can access the configured keystores/keys, for example by putting them on shared storage.
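
A sketch of creating those CSF entries with the OPSS createCred WLST command (connected to the Admin Server; the user of the sign/enc entries is the key alias in the keystore, and all passwords shown are placeholders):

connect('weblogic','Welcome01','t3://localhost:7001')
# keystore password, stored under the fixed CSF key name
createCred(map='oracle.wsm.security', key='keystore-csf-key', user='keystore', password='storepassword')
# passwords of the signing and encryption keys; user = key alias
createCred(map='oracle.wsm.security', key='sign-csf-key', user='myalias', password='keypassword')
createCred(map='oracle.wsm.security', key='enc-csf-key', user='myalias', password='keypassword')
disconnect()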

KSS

OPSS also offers KeyStoreService (KSS) keystores. These are saved in a database, in an OPSS schema which is created by running the RCU (Repository Creation Utility) during installation of the domain. KSS keystores are the default keystores since WebLogic Server 12.1.2 (and thus for SOA Suite since 12.1.3). KSS keystores can be configured to control access to keys either by policies or by passwords. OWSM does not support KSS keystores protected with a password (see here: ‘Password protected KSS keystores are not supported in this release’), thus for OWSM the KSS keystore should be configured to use policy-based access.

KSS keys cannot be configured to have a password, and using keys from a KSS keystore in OWSM policies thus does not require you to configure credential store framework (CSF) passwords to access them. KSS keystores can be edited from Fusion Middleware Control, by using WLST scripts or even by using a REST API (here). You can, for example, import JKS files quite easily into a KSS store with WLST using something like:

connect('weblogic','Welcome01','t3://localhost:7001')
svc = getOpssService(name='KeyStoreService')
svc.importKeyStore(appStripe='mystripe', name='keystore2', password='password',aliases='myOrakey', keypasswords='keypassword1', type='JKS', permission=true, filepath='/tmp/file.jks')
Where and how are keystores / keys configured

As mentioned above, keys within keystores are used to achieve transport security and application security for various purposes. Let us translate this to Oracle SOA Suite and WebLogic Server.

Transport layer

Incoming
  • Keys are used to achieve TLS connections between the different components of the SOA Suite, such as Admin Servers, Managed Servers and Node Managers. The keystore configuration for the servers can be done from the WebLogic Console, and manually for the NodeManager (see the sketch after this list). You configure identity and trust this way, and whether the client needs to present a certificate of its own so the server can verify its identity. See for example here on how to configure this.
  • Keys are used to allow clients to connect to servers via a secure connection (in general, so not specific to communication between WebLogic Server components). This configuration can be done in the same place as above, with the only difference that no manual editing of files on the filesystem is required (since no NodeManager is involved here).
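
For the NodeManager, a sketch of the corresponding nodemanager.properties entries (paths, alias and pass phrases are placeholder values; the NodeManager encrypts the pass phrases on startup):

KeyStores=CustomIdentityAndCustomTrust
CustomIdentityKeyStoreFileName=/path/keystores/Identity.jks
CustomIdentityKeyStoreType=JKS
CustomIdentityKeyStorePassPhrase=passphrase
CustomIdentityAlias=myalias
CustomIdentityPrivateKeyPassPhrase=keypassphrase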

Outgoing

Composites (BPEL, BPM)

Keys are used to achieve TLS connections from the SOA Suite to different systems. The SOA Suite acts as the client here. The configuration of the identity keystore can be done from Fusion Middleware Control by setting the KeystoreLocation MBean. See the image below. Credential store entries need to be added to store the identity keystore password and key password. Storing the key password is not required if it is the same as the keystore password. The credential keys to create for this are SOA/KeystorePassword and SOA/KeyPassword (with the user being the same as the key alias to use from the keystore). In addition, components need to be configured to use a key to establish identity: in the composite.xml, the property oracle.soa.two.way.ssl.enabled can be used to enable outgoing two-way SSL from a composite, as sketched below.
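
A sketch of how that property could look on the reference binding in composite.xml (port and location are placeholder values; check the attribute details against your version's documentation):

<binding.ws port="MyServicePort" location="https://remote-host:8443/MyService">
  <property name="oracle.soa.two.way.ssl.enabled" type="xs:string" many="false" override="may">true</property>
</binding.ws>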

Setting SOA client identity store for 2-way SSL

 

Specifying the SOA client identity keystore and key password in the credential store

Service Bus

The Service Bus configuration for outgoing SSL connections is quite different from the composite configuration. The blog here nicely describes the locations where to configure the keystores and keys. In the WebLogic Server console, you create a PKICredentialMapper which refers to the keystore and also contains the keystore password configuration. From the Service Bus project, a ServiceKeyProvider can be configured which uses the PKICredentialMapper and contains the configuration for the key and key password to use. The ServiceKeyProvider configuration needs to be done from the Service Bus console, since JDeveloper cannot resolve the credential mapper.


Overwriting keystore configuration with JVM parameters

You can override the keystores used with JVM system parameters such as javax.net.ssl.trustStore, javax.net.ssl.trustStoreType, javax.net.ssl.trustStorePassword, javax.net.ssl.keyStore, javax.net.ssl.keyStoreType and javax.net.ssl.keyStorePassword in, for example, the setDomainEnv script, as sketched below. These will override the WebLogic Server configuration but not the OWSM configuration (application layer security, described below). Thus if you specify, for example, an alternative truststore on the command line, this will not influence HTTP connections going from SOA Suite to other systems, even when message protection (using WS-Security) has been enabled, which uses keys and checks trust. It will influence HTTPS connections, though. For more detail on the above, see here.
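
A sketch of such an override in setDomainEnv.sh (paths and password are placeholders; as noted, this affects HTTPS connections but not the OWSM keystore):

# Override the default truststore for all JVM-level SSL connections
JAVA_OPTIONS="${JAVA_OPTIONS} -Djavax.net.ssl.trustStore=/path/keystores/Trust.jks"
JAVA_OPTIONS="${JAVA_OPTIONS} -Djavax.net.ssl.trustStoreType=JKS"
JAVA_OPTIONS="${JAVA_OPTIONS} -Djavax.net.ssl.trustStorePassword=passphrase"
export JAVA_OPTIONS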

Application layer

Keys can be used by OWSM policies to achieve, for example, message protection on the application layer. This configuration can be done from Fusion Middleware Control.

The OWSM runtime does not use the WebLogic Server keystore that is configured using the WebLogic Server Administration Console and used for SSL. Since 12.1.2, the keystore which OWSM uses by default is kss://owsm/keystore, and it can be configured from the OWSM Domain configuration. See above for the difference between KSS and JKS keystores.

OWSM keystore contents and management from FMW Control

OWSM keystore domain config

In order for OWSM to use JKS keystores/keys, credential store framework (CSF) entries need to be created which contain the keystore and key passwords. The OWSM policy configuration determines the key alias to use. For KSS keystores/keys, no CSF passwords are required, since OWSM does not support password-protected KSS keystores and KSS does not provide a feature to put a password on keys.

Identity for outgoing connections (application policy level, e.g. signing and encryption keys) is established by using OWSM policy configuration. Trust for SAML/JWT (secure token service and client) can be configured from the OWSM Domain configuration.

Finally

This is only the tip of the iceberg.

There is a lot to tell in the area of security. Zooming in on transport and application layer security, there is also a wide range of options and do’s and don’ts. I have not talked about the different choices you can make when configuring application or transport layer security. The focus of this blog post has been to provide an overview of keystore configuration/usage and thus I have not provided much detail. If you want to learn more on how to achieve good security on your transport layer, read here. To configure 2-way SSL using TLS 1.2 on WebLogic / SOA Suite, read here. Application level security is a different story altogether and can be split up in a wide range of possible implementation choices.

Different layers in the TCP/IP model

If you want to achieve solid security, you should look at all layers of the TCP/IP model and not just at the transport and application layers. It also helps if you use different security zones and divide your network so your development environment cannot accidentally access your production environment, or the other way around.

Final thoughts on keystore/key configuration in WebLogic/SOA Suite

When diving into the subject, I realized that using and configuring keys and keystores can be quite complex. The reason is that for every purpose of a key/keystore, configuration in a different location is required. It would be nice if that were all; however, sometimes configuration overlaps, such as the truststore used by WebLogic Server which is also used by SOA Suite. This feels inconsistent, since for outgoing calls composites and Service Bus use entirely different configuration. It would be nice if it could be made a bit more consistent and, as a result, simpler.

The post Oracle SOA Suite and WebLogic: Overview of key and keystore configuration appeared first on AMIS Oracle and Java Blog.

Schema On Read?

Dylan's BI Notes - Sun, 2017-09-24 07:48
I first saw “create external table” in Oracle Database 11g. It was created for the purpose of loading data. When Hive was introduced, a lot of data had already been created in HDFS. Hive was introduced to provide the SQL interface on this data. Using the external table concept is natural to the design.  […]
Categories: BI & Warehousing

Excellent reference for understanding technical terms related to today's IT….

OracleApps Epicenter - Sat, 2017-09-23 22:01
Here is the ultimate FinTech glossary, from a forwarded email (source: internet).
Categories: APPS Blogs

Grid Infrastructure 12.2.0.1 CRS_SWONLY Silent Install

Michael Dinh - Sat, 2017-09-23 18:00

Configuring Software Binaries for Oracle Grid Infrastructure for a Standalone Server

CREATE DIRECTORIES:

[root@arrow1 ~]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),994(vboxsf)

[root@arrow1 ~]# mkdir /u01
[root@arrow1 ~]# chmod 775 /u01/
[root@arrow1 ~]# chown oracle:oinstall /u01/

[root@arrow1 ~]# vi /etc/oraInst.loc
[root@arrow1 ~]# chown oracle:oinstall /etc/oraInst.loc

[root@arrow1 ~]# cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall

[root@arrow1 ~]# ll /etc/oraInst.loc
-rw-r--r-- 1 oracle oinstall 56 Sep 23 17:11 /etc/oraInst.loc
[root@arrow1 ~]# exit

[oracle@arrow1 ~]$ mkdir -p /u01/app/12.2.0.1/grid
[oracle@arrow1 ~]$ mkdir -p /u01/app/grid
[oracle@arrow1 ~]$ mkdir -p /u01/app/oraInventory

EXTRACT SOFTWARE DIRECTLY TO GRID HOME:

[oracle@arrow1 ~]$ ls -l /sf_OracleSoftware/12.2.0.1_Linux_64/
total 6297251
-rwxrwxrwx 1 vagrant vagrant 3453696911 Apr 12 09:44 linuxx64_12201_database.zip
-rwxrwxrwx 1 vagrant vagrant 2994687209 Apr 12 09:43 linuxx64_12201_grid_home.zip

[oracle@arrow1 ~]$ unzip -d /u01/app/12.2.0.1/grid -qo /sf_OracleSoftware/12.2.0.1_Linux_64/linuxx64_12201_grid_home.zip; echo $?
0

[oracle@arrow1 ~]$ ls /u01/app/12.2.0.1/grid
addnode     crs     deinstall    gpnp           inventory  lib      opmn     oui      qos       rootupgrade.sh  srvm      welcome.html
assistants  css     demo         gridSetup.sh   javavm     log      oracore  owm      racg      runcluvfy.sh    suptools  wlm
bin         cv      diagnostics  has            jdbc       md       ord      perl     rdbms     scheduler       tomcat    wwg
cdata       dbjava  dmu          hs             jdk        network  ordim    plsql    relnotes  slax            ucp       xag
cha         dbs     env.ora      install        jlib       nls      ords     precomp  rhp       sqlpatch        usm       xdk
clone       dc_ocm  evm          instantclient  ldap       OPatch   oss      QOpatch  root.sh   sqlplus         utl

[oracle@arrow1 ~]$ ls /u01/app/12.2.0.1/grid/install/response/
grid_2017-01-26_04-10-28PM.rsp  gridsetup.rsp  sample.ccf
[oracle@arrow1 ~]$

CREATE RESPONSE FILE FOR CRS_SWONLY – WITHOUT ASM:

oracle@arrow1::/u01/app/12.2.0.1/grid/install/response
$ cp -v gridsetup.rsp grid_crs_swonly.rsp
‘gridsetup.rsp’ -> ‘grid_crs_swonly.rsp’

oracle@arrow1::/u01/app/12.2.0.1/grid/install/response
$ diff -iwyB --suppress-common-lines -W 150 gridsetup.rsp grid_crs_swonly.rsp; echo
oracle.install.option=                                                    |     oracle.install.option=CRS_SWONLY
ORACLE_BASE=                                                              |     ORACLE_BASE=/u01/app/grid
oracle.install.asm.OSDBA=                                                 |     oracle.install.asm.OSDBA=dba
oracle.install.asm.OSOPER=                                                |     oracle.install.asm.OSOPER=dba
oracle.install.asm.OSASM=                                                 |     oracle.install.asm.OSASM=dba
oracle@arrow1::/u01/app/12.2.0.1/grid/install/response

RUNCLUVFY FOR HACFG:

oracle@arrow1::/u01/app/12.2.0.1/grid
$ ./runcluvfy.sh stage -pre hacfg

Verifying Physical Memory ...FAILED (PRVF-7530)
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...PASSED
Verifying Free Space: arrow1:/usr,arrow1:/var,arrow1:/etc,arrow1:/sbin,arrow1:/tmp ...PASSED
Verifying User Existence: oracle ...
  Verifying Users With Same UID: 54321 ...PASSED
Verifying User Existence: oracle ...PASSED
Verifying Group Existence: dba ...PASSED
Verifying Group Existence: oinstall ...PASSED
Verifying Group Membership: oinstall(Primary) ...PASSED
Verifying Group Membership: dba ...PASSED
Verifying Run Level ...PASSED
Verifying Hard Limit: maximum open file descriptors ...PASSED
Verifying Soft Limit: maximum open file descriptors ...PASSED
Verifying Hard Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum stack size ...PASSED
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: ip_local_port_range ...PASSED
Verifying OS Kernel Parameter: rmem_default ...PASSED
Verifying OS Kernel Parameter: rmem_max ...PASSED
Verifying OS Kernel Parameter: wmem_default ...PASSED
Verifying OS Kernel Parameter: wmem_max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: binutils-2.23.52.0.1 ...PASSED
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED
Verifying Package: sysstat-10.1.5 ...PASSED
Verifying Package: ksh ...PASSED
Verifying Package: make-3.82 ...PASSED
Verifying Package: glibc-2.17 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED
Verifying Package: libaio-0.3.109 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-6.2-4 ...PASSED
Verifying Package: net-tools-2.0-0.17 ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...PASSED

Pre-check for Oracle Restart configuration was unsuccessful.


Failures were encountered during execution of CVU verification request "stage -pre hacfg".

Verifying Physical Memory ...FAILED
arrow1: PRVF-7530 : Sufficient physical memory is not available on node
        "arrow1" [Required physical memory = 8GB (8388608.0KB)]


CVU operation performed:      stage -pre hacfg
Date:                         Sep 23, 2017 5:22:21 PM
CVU home:                     /u01/app/12.2.0.1/grid/
User:                         oracle
oracle@arrow1::/u01/app/12.2.0.1/grid
$

INSTALL GRID USING gridSetup.sh

oracle@arrow1::/u01/app/12.2.0.1/grid
$ ./gridSetup.sh -silent -waitforcompletion -skipPrereqs -responseFile /u01/app/12.2.0.1/grid/install/response/grid_crs_swonly.rsp
Launching Oracle Grid Infrastructure Setup Wizard...

[WARNING] [INS-41808] Possible invalid choice for OSASM Group.
   CAUSE: The name of the group you selected for the OSASM group is commonly used to grant other system privileges (For example: asmdba, asmoper, dba, oper).
   ACTION: Oracle recommends that you designate asmadmin as the OSASM group.
[WARNING] [INS-41809] Possible invalid choice for OSDBA Group.
   CAUSE: The group name you selected as the OSDBA for ASM group is commonly used for Oracle Database administrator privileges.
   ACTION: Oracle recommends that you designate asmdba as the OSDBA for ASM group, and that the group should not be the same group as an Oracle Database OSDBA group.
[WARNING] [INS-41810] Possible invalid choice for OSOPER Group.
   CAUSE: The group name you selected as the OSOPER for ASM group is commonly used for Oracle Database administrator privileges.
   ACTION: Oracle recommends that you designate asmoper as the OSOPER for ASM group, and that the group should not be the same group as an Oracle Database OSOPER group.
[WARNING] [INS-41813] OSDBA for ASM, OSOPER for ASM, and OSASM are the same OS group.
   CAUSE: The group you selected for granting the OSDBA for ASM group for database access, and the OSOPER for ASM group for startup and shutdown of Oracle ASM, is the same group as the OSASM group, whose members have SYSASM privileges on Oracle ASM.
   ACTION: Choose different groups as the OSASM, OSDBA for ASM, and OSOPER for ASM groups.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2017-09-23_05-52-14PM/gridSetupActions2017-09-23_05-52-14PM.log

As a root user, execute the following script(s):
        1. /u01/app/12.2.0.1/grid/root.sh

Execute /u01/app/12.2.0.1/grid/root.sh on the following nodes:
[arrow1]


Successfully Setup Software.
oracle@arrow1::/u01/app/12.2.0.1/grid
$

RUN root.sh

[root@arrow1 ~]# /u01/app/12.2.0.1/grid/root.sh
Check /u01/app/12.2.0.1/grid/install/root_arrow1_2017-09-23_17-54-24-226719765.log for the output of root script

[root@arrow1 ~]# cat /u01/app/12.2.0.1/grid/install/root_arrow1_2017-09-23_17-54-24-226719765.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.2.0.1/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

To configure Grid Infrastructure for a Cluster or Grid Infrastructure for a Stand-Alone Server execute the following command as oracle user:
/u01/app/12.2.0.1/grid/gridSetup.sh
This command launches the Grid Infrastructure Setup Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.

[root@arrow1 ~]#

This is where Oracle has taken a turn for the worse:
the instructions printed after running root.sh are incorrect, and the steps that are actually required are not provided!

AS ROOT RUN roothas.pl

[root@arrow1 ~]# /u01/app/12.2.0.1/grid/perl/bin/perl -I /u01/app/12.2.0.1/grid/perl/lib -I /u01/app/12.2.0.1/grid/crs/install /u01/app/12.2.0.1/grid/crs/install/roothas.pl
Using configuration parameter file: /u01/app/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/arrow1/crsconfig/roothas_2017-09-23_06-11-34PM.log
2017/09/23 18:11:35 CLSRSC-363: User ignored prerequisites during installation
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node arrow1 successfully pinned.
2017/09/23 18:11:44 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'arrow1'
CRS-2673: Attempting to stop 'ora.evmd' on 'arrow1'
CRS-2677: Stop of 'ora.evmd' on 'arrow1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'arrow1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.

arrow1     2017/09/23 18:12:54     /u01/app/12.2.0.1/grid/cdata/arrow1/backup_20170923_181254.olr     0
2017/09/23 18:12:54 CLSRSC-327: Successfully configured Oracle Restart for a standalone server
[root@arrow1 ~]#

AS ORACLE runInstaller -updateNodeList

oracle@arrow1::/u01/app/12.2.0.1/grid
$ cd oui/bin/

oracle@arrow1::/u01/app/12.2.0.1/grid/oui/bin
$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.2.0.1/grid -defaultHomeName CLUSTER_NODES= CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4080 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
oracle@arrow1::/u01/app/12.2.0.1/grid/oui/bin

CHECK PROCESSES

oracle@arrow1::/u01/app/12.2.0.1/grid/oui/bin
$ ps -ef|grep oracle
root      3796  1125  0 17:03 ?        00:00:00 sshd: oracle [priv]
oracle    3798  3796  0 17:03 ?        00:00:01 sshd: oracle@pts/2
oracle    3799  3798  0 17:03 pts/2    00:00:00 -bash
root     13572  1125  0 17:39 ?        00:00:00 sshd: oracle [priv]
oracle   13574 13572  0 17:39 ?        00:00:00 sshd: oracle@pts/0
oracle   13575 13574  0 17:39 pts/0    00:00:00 -bash
oracle   25084     1  0 18:12 ?        00:00:03 /u01/app/12.2.0.1/grid/bin/ohasd.bin reboot
oracle   25197     1  0 18:12 ?        00:00:02 /u01/app/12.2.0.1/grid/bin/oraagent.bin
oracle   25215     1  0 18:12 ?        00:00:01 /u01/app/12.2.0.1/grid/bin/evmd.bin
oracle   25269 25215  0 18:12 ?        00:00:01 /u01/app/12.2.0.1/grid/bin/evmlogger.bin -o /u01/app/12.2.0.1/grid/log/[HOSTNAME]/evmd/evmlogger.info -l /u01/app/12.2.0.1/grid/log/[HOSTNAME]/evmd/evmlogger.log
oracle   25404  3799  0 18:27 pts/2    00:00:00 ps -ef
oracle   25405  3799  0 18:27 pts/2    00:00:00 grep --color=auto oracle
oracle@arrow1::/u01/app/12.2.0.1/grid/oui/bin
$

CHECK INSTALL LOG

oracle@arrow1::/u01/app/12.2.0.1/grid/oui/bin
$ export LOG=/u01/app/oraInventory/logs/GridSetupActions2017-09-23_05-52-14PM/gridSetupActions2017-09-23_05-52-14PM.log

oracle@arrow1::/u01/app/12.2.0.1/grid/oui/bin
$ grep -e '[[:upper:]]: ' $LOG |cut -d ":" -f1|sort -u
   ACTION
   CAUSE
INFO
WARNING

oracle@arrow1::/u01/app/12.2.0.1/grid/oui/bin
$ grep -e '[[:upper:]]: ' $LOG|grep "^WARNING: "
WARNING:  [Sep 23, 2017 5:52:18 PM] Unable to find the namespace URI. Reason: Start of root element expected.
WARNING:  [Sep 23, 2017 5:52:18 PM] Unable to find the namespace URI. Reason: Start of root element expected.
WARNING:  [Sep 23, 2017 5:52:21 PM] Validation disabled for the state init
WARNING:  [Sep 23, 2017 5:52:26 PM] Validation disabled for the state flowDecider
WARNING:  [Sep 23, 2017 5:52:26 PM] Validation disabled for the state CRSNodeInformationUI
WARNING:  [Sep 23, 2017 5:52:26 PM] No Local Domain found, setting to whatever first domain is available
WARNING:  [Sep 23, 2017 5:52:26 PM] No Domain found, skipping Same Domain validation
WARNING:  [Sep 23, 2017 5:52:27 PM] [WARNING] [INS-41808] Possible invalid choice for OSASM Group.
WARNING:  [Sep 23, 2017 5:52:27 PM] [WARNING] [INS-41809] Possible invalid choice for OSDBA Group.
WARNING:  [Sep 23, 2017 5:52:27 PM] [WARNING] [INS-41810] Possible invalid choice for OSOPER Group.
WARNING:  [Sep 23, 2017 5:52:27 PM] [WARNING] [INS-41813] OSDBA for ASM, OSOPER for ASM, and OSASM are the same OS group.
WARNING:  [Sep 23, 2017 5:52:29 PM] Validation disabled for the state prereqExecutionDecider
WARNING:  [Sep 23, 2017 5:52:29 PM] Validation disabled for the state showSummary
WARNING:  [Sep 23, 2017 5:52:29 PM] Unable to find the namespace URI. Reason: /u01/app/12.2.0.1/grid/install/response/grid_2017-09-23_05-52-14PM.rsp (No such file or directory)
WARNING:  [Sep 23, 2017 5:53:50 PM] Validation disabled for the state finish
WARNING:  [Sep 23, 2017 5:53:50 PM] Could not create directory: /u01/app/oraInventory/logs/GridSetupActions2017-09-23_05-52-14PM
oracle@arrow1::/u01/app/12.2.0.1/grid/oui/bin
$

Wrong result with multitenant, dba_constraints and current_schema

Yann Neuhaus - Sat, 2017-09-23 15:03

Multitenant architecture is not such a big change, and this is why I recommend it when you start a project in 12c or if you upgrade to 12.2 – of course after thoroughly testing your application. However, there is a point where you may encounter problems with dictionary queries, because internally it is really a big change. The dictionary separation has several side effects. You should carefully test the queries you run on the dictionary views to get metadata. Here is an example of a bug I recently encountered.

This happened with a combination of things you should not do very often, and not in a critical use case: querying the dictionary for constraints owned by your current schema, when it is different from the user you connect with.

I create two users: USER1 and USER2
SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> grant dba to USER1 identified by USER1 container=current;
Grant succeeded.
SQL> grant dba to USER2 identified by USER2 container=current;
Grant succeeded.

USER1 owns a table which has a constraint:

SQL> connect USER1/USER1@//localhost/PDB1
Connected.
SQL> create table DEMO(dummy constraint pk primary key) as select * from dual;
Table DEMO created.

USER2 can access the table either by prefixing it with USER1 or by setting the current_schema to USER1:

SQL> connect USER2/USER2@//localhost/PDB1
Connected.
SQL> alter session set current_schema=USER1;
Session altered.

Bug

Ok, now imagine you want to read constraint metadata for the current schema you have set:

SQL> select sys_context('USERENV','CURRENT_SCHEMA'), a.*
2 from sys.dba_constraints a
3 where owner = sys_context('USERENV','CURRENT_SCHEMA')
4 /
 
no rows selected

No rows selected is a wrong result here because my current_schema is USER1 and USER1 has constraints:

SQL> select owner,constraint_name
2 from sys.dba_constraints a
3 where owner = 'USER1'
4 /
OWNER CONSTRAINT_NAME
----- ---------------
USER1 PK

So, where’s the problem? Let’s have a look at the execution plan:

SQL_ID 2fghqwz1cktyf, child number 0
-------------------------------------
select sys_context('USERENV','CURRENT_SCHEMA'), a.* from
sys.dba_constraints a where owner =
sys_context('USERENV','CURRENT_SCHEMA')
 
Plan hash value: 1258862619
 
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 0 |00:00:00.32 | 2656 |
| 1 | PARTITION LIST ALL | | 1 | 2 | 0 |00:00:00.32 | 2656 |
|* 2 | EXTENDED DATA LINK FULL| INT$INT$DBA_CONSTRAINTS | 2 | 2 | 0 |00:00:00.32 | 2656 |
--------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter((("INT$INT$DBA_CONSTRAINTS"."OBJECT_TYPE#"=4 OR
("INT$INT$DBA_CONSTRAINTS"."OBJECT_TYPE#"=2 AND "INT$INT$DBA_CONSTRAINTS"."ORIGIN_CON_ID"=TO_NUMBER(SY
S_CONTEXT('USERENV','CON_ID')))) AND "OWNER"=SYS_CONTEXT('USERENV','CURRENT_SCHEMA')))

I am in 12.2, and DBA_CONSTRAINTS reads from INT$DBA_CONSTRAINTS, which reads from INT$INT$DBA_CONSTRAINTS. In multitenant, this view, being an extended data view, will read from CDB$ROOT and from the current container. This is why we see EXTENDED DATA LINK FULL in the execution plan, and up to this point the predicates are correct: "OWNER"=SYS_CONTEXT('USERENV','CURRENT_SCHEMA')

The execution through the data link is run on each container with parallel processes: they switch to the container and run the underlying query on the view. But when I look at the SQL trace of the parallel process running the query on my PDB, I can see that in the predicate on OWNER the SYS_CONTEXT('USERENV','CURRENT_SCHEMA') has been replaced with a hardcoded value:

SELECT /*+ NO_STATEMENT_QUEUING RESULT_CACHE (SYSOBJ=TRUE) OPT_PARAM('_ENABLE_VIEW_PDB', 'FALSE') */ OWNER,CONSTRAINT_NAME,CONSTRAINT_TYPE,TABLE_NAME,OBJECT_TYPE#,SEARCH_CONDITION,SEARCH_CONDITION_VC,R_OWNER,R_CONSTRAINT_NAME,DELETE_RULE,STATUS,DEFERRABLE,DEFERRED,VALIDATED,GENERATED,BAD,RELY,LAST_CHANGE,INDEX_OWNER,INDEX_NAME,INVALID,VIEW_RELATED,ORIGIN_CON_ID FROM NO_COMMON_DATA(SYS."INT$INT$DBA_CONSTRAINTS") "INT$INT$DBA_CONSTRAINTS" WHERE ("INT$INT$DBA_CONSTRAINTS"."OBJECT_TYPE#"=4 OR "INT$INT$DBA_CONSTRAINTS"."OBJECT_TYPE#"=2 AND "INT$INT$DBA_CONSTRAINTS"."ORIGIN_CON_ID"=TO_NUMBER('3')) AND "INT$INT$DBA_CONSTRAINTS"."OWNER"=q'"USER2"'

And unfortunately, this value is not the right one: USER2 is my connected user, but not the CURRENT_SCHEMA that I have set. In the same trace, I can see where this value comes from:

select 'q''"' || SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA') || '"''' from sys.dual

but it seems that the current_schema was lost through the call to the parallel process and the PDB switch to my container.

Workaround

The problem is easy to work around. This works:

SQL> select owner,constraint_name
2 from sys.dba_constraints a
3 where owner = ( select sys_context('USERENV','CURRENT_SCHEMA') from dual )
4 /
 
OWNER CONSTRAINT_NAME
----- ---------------
USER1 PK

And anyway, it is better to get the current schema first and pass it as a bind variable. Bind variables are passed correctly through data link queries:


SQL> variable v varchar2(30)
SQL> exec select sys_context('USERENV','CURRENT_SCHEMA') into :v from dual;
 
PL/SQL procedure successfully completed.
 
SQL> select sys_context('USERENV','CURRENT_SCHEMA'), a.*
2 from sys.dba_constraints a
3 --where owner = sys_context('USERENV','CURRENT_SCHEMA')
4 where owner = :v
5 /

So what?

The multitenant architecture is a real challenge for dictionary views. The dictionary is separated: system metadata in CDB$ROOT and user metadata in the PDB. But, for compatibility with the non-CDB architecture, the dictionary views must show both of them, and this is where it becomes complex: what was separated on purpose now has to be merged. And complexity is subject to bugs. If you want to get an idea, have a look at dcore.sql in ORACLE_HOME/rdbms/admin and compare the 11g version with the 12c ones, with all the evolution in 12.1.0.1, 12.1.0.2 and 12.2.0.1.

 

The article Wrong result with multitenant, dba_constraints and current_schema appeared first on Blog dbi services.

Documentum – RCS 7.3 – Issue with projections

Yann Neuhaus - Sat, 2017-09-23 11:42

In the last two blogs I posted, I mentioned that I was working on silent installations with CS 7.3 (in HA). The issue I will describe in this blog is linked to the HA installation. I faced this issue using the silent installation, but I'm guessing it should also occur using the GUI installation.

 

So basically on the first Content Server, I installed a Global Registry and a few other docbases. For the creation of the docbases, there was obviously a docbroker installed. This is the global docbroker which is targeted by default by all docbases via the server.ini. In addition to that, I installed two other docbrokers for specific docbases. The purpose here is to have docbroker N°1 only for the GR + DocBase1 and docbroker N°2 only for the GR + DocBase2 + DocBase3. So I started to configure the projections to achieve that goal.

 

I will use the GR below. If you followed what I mentioned above, I should see – at the end – two projections in the docbase (+ the default one from the server.ini). So let's configure the projections (with only one Content Server installed):

[dmadmin@content-server-01 ~]$ iapi GR_DocBase
Please enter a user (dmadmin):
Please enter password for dmadmin:


        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2016
        All rights reserved.
        Client Library Release 7.3.0040.0025


Connecting to Server using docbase GR_DocBase
[DM_SESSION_I_SESSION_START]info:  "Session 010f123450003d07 started for user dmadmin."


Connected to Documentum Server running Release 7.3.0050.0039  Linux64.Oracle
Session id is s0
API> retrieve,c,dm_server_config
...
3d0f123450000102
API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : GR_DocBase
  title                           :
  ...
  projection_targets            []: <none>
  projection_ports              []: <none>
  projection_proxval            []: <none>
  projection_notes              []: <none>
  projection_enable             []: <none>
  ...
  i_vstamp                        : 28

API> ?,c,select projection_targets, projection_ports, projection_proxval, projection_notes, projection_enable from dm_server_config where r_object_id = '3d0f123450000102'
projection_targets         projection_ports  projection_proxval  projection_notes                   projection_enable
-------------------------  ----------------  ------------------  ---------------------------------  -----------------
                                                                                                    
                                                                                                    
(2 rows affected)

API> set,c,l,projection_targets[0]
SET> content-server-01
...
OK
API> set,c,l,projection_ports[0]
SET> 1493
...
OK
API> set,c,l,projection_proxval[0]
SET> 1
...
OK
API> set,c,l,projection_notes[0]
SET> Dedicated Docbroker N°2
...
OK
API> set,c,l,projection_enable[0]
SET> T
...
OK
API> save,c,l
...
OK
API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : GR_DocBase
  ...
  projection_targets           [0]: content-server-01
  projection_ports             [0]: 1493
  projection_proxval           [0]: 1
  projection_notes             [0]: Dedicated Docbroker N°2
  projection_enable            [0]: T
  ...
  i_vstamp                        : 29

API> reinit,c
...
OK
API> ?,c,select projection_targets, projection_ports, projection_proxval, projection_notes, projection_enable from dm_server_config where r_object_id = '3d0f123450000102'
projection_targets         projection_ports  projection_proxval  projection_notes                   projection_enable
-------------------------  ----------------  ------------------  ---------------------------------  -----------------
                                                                                                    
content-server-01          1493              1                   Dedicated Docbroker N°2            1
(2 rows affected)

API> set,c,l,projection_targets[1]
SET> content-server-01
...
OK
API> set,c,l,projection_ports[1]
SET> 1491
...
OK
API> set,c,l,projection_proxval[1]
SET> 1
...
OK
API> set,c,l,projection_notes[1]
SET> Dedicated Docbroker N°1
...
OK
API> set,c,l,projection_enable[1]
SET> T
...
OK
API> save,c,l
...
OK
API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : GR_DocBase
  ...
  projection_targets           [0]: content-server-01
                               [1]: content-server-01
  projection_ports             [0]: 1493
                               [1]: 1491
  projection_proxval           [0]: 1
                               [1]: 1
  projection_notes             [0]: Dedicated Docbroker N°2
                               [1]: Dedicated Docbroker N°1
  projection_enable            [0]: T
                               [1]: T
  ...
  i_vstamp                        : 30

API> reinit,c
...
OK
API> ?,c,select projection_targets, projection_ports, projection_proxval, projection_notes, projection_enable from dm_server_config where r_object_id = '3d0f123450000102'
projection_targets         projection_ports  projection_proxval  projection_notes                   projection_enable
-------------------------  ----------------  ------------------  ---------------------------------  -----------------
content-server-01          1491              1                   Dedicated Docbroker N°1            1
content-server-01          1493              1                   Dedicated Docbroker N°2            1
(2 rows affected)
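
As a quick cross-check outside of DA, you can also ask a dedicated docbroker for its docbase map with the dmqdocbroker utility (assuming it is available in the path of the installation owner); after the reinit above, GR_DocBase should now be listed on port 1493 as well:

[dmadmin@content-server-01 ~]$ dmqdocbroker -t content-server-01 -p 1493 -c getdocbasemap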

 

Up to this point, everything sounds good and everything is working. If you access DA, you will see the two projections for the Global Registry that we defined above:

[Screenshot: Projection1]

 

So this is working. Now, what is the goal of this blog? Well, it's not what is above… So let's talk about the issue now! The next step in this environment was to set it up in High Availability. As mentioned in my last blog, I faced some issues with the Remote Content Server (RCS) installation but I was finally able to install a Content-File Server (CFS). When I finished the installation of GR_DocBase on the RCS, I wanted to set up the projections between the CS1 and the CS2… For that, I opened DA again and went to the same screen as in the screenshot above.

 

What I was expecting to see was the exact same thing as above, meaning the two projections of GR_DocBase to the two docbrokers installed on the CS1. What I saw was a little bit different…:

[Screenshot: Projection2]

 

I can assure you that between the two screenshots above, the only thing I did was to install the CFS for GR_DocBase on the RCS… So why is this screen not working anymore, what's the issue? “An error has occurred. 1 >= 1”. Yeah sure, what else? ;)

 

I checked the Documentum Administrator logs and found the following stack trace:

05:45:07,980 ERROR [[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)'] com.documentum.web.common.Trace - 1 >= 1
java.lang.ArrayIndexOutOfBoundsException: 1 >= 1
        at java.util.Vector.elementAt(Vector.java:474)
        at com.documentum.fc.common.DfList.get(DfList.java:427)
        at com.documentum.fc.common.DfList.getString(DfList.java:561)
        at com.documentum.webcomponent.admin.server.ServerConnectionBrokerList.loadConnectionBrokerProjections(ServerConnectionBrokerList.java:78)
        at com.documentum.webcomponent.admin.server.ServerConnectionBrokerList.onInit(ServerConnectionBrokerList.java:51)
        at com.documentum.web.form.FormProcessor.invokeMethod(FormProcessor.java:1604)
        at com.documentum.web.form.FormProcessor.invokeMethod(FormProcessor.java:1489)
        at com.documentum.web.form.FormProcessor.fireOnInitEvent(FormProcessor.java:1154)
        at com.documentum.web.form.ControlTag.fireFormOnInitEvent(ControlTag.java:794)
        at com.documentum.web.formext.control.component.ComponentIncludeTag.renderEnd(ComponentIncludeTag.java:135)
        at com.documentum.web.form.ControlTag.doEndTag(ControlTag.java:928)
        at jsp_servlet._webcomponent._library._propertysheetwizardcontainer.__propertysheetwizardcontainer._jsp__tag25(__propertysheetwizardcontainer.java:1501)
        at jsp_servlet._webcomponent._library._propertysheetwizardcontainer.__propertysheetwizardcontainer._jspService(__propertysheetwizardcontainer.java:368)
        at weblogic.servlet.jsp.JspBase.service(JspBase.java:35)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
        at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.documentum.web.servlet.ResponseHeaderControlFilter.doFilter(ResponseHeaderControlFilter.java:351)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.documentum.web.servlet.CompressionFilter.doFilter(CompressionFilter.java:96)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.documentum.web.env.WDKController.processRequest(WDKController.java:144)
        at com.documentum.web.env.WDKController.doFilter(WDKController.java:131)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3683)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3649)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:326)
        at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
        at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
        at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
        at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2433)
        at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2281)
        at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2259)
        at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1691)
        at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1651)
        at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:272)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:348)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:333)
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:54)
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:640)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:406)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:346)

 

As mentioned in the stack trace (which is more useful than the error message), the problem might come from the docbroker projections… But we checked them just before installing the remote docbase and they were OK… With CS 7.3, the RCS installation automatically adds a projection on the Primary Content Server dm_server_config object to point to the Remote docbroker, and this is what causes the error…
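
If you want to quickly spot which dm_server_config objects received this automatic projection, a DQL along these lines should do it (a sketch following the same query style as above; the note added by the installer is always “Projecting primary to remote.”):

API> ?,c,select r_object_id, object_name, projection_targets, projection_notes from dm_server_config where any projection_notes = 'Projecting primary to remote.'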

 

So I checked the projections on the GR_DocBase Primary dm_server_config object to see what the issue was and I got this:

API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : GR_DocBase
  ...
  projection_targets           [0]: content-server-01
                               [1]: content-server-01
                               [2]: content-server-02
  projection_ports             [0]: 1493
                               [1]: 1491
                               [2]: 1489
  projection_proxval           [0]: 1
                               [1]: 1
                               [2]: 2
  projection_notes             [0]: Dedicated Docbroker N°2
                               [1]: Dedicated Docbroker N°1
                               [2]: Projecting primary to remote.
  projection_enable            [0]: T
                               [1]: T
                               [2]: T
  ...
  i_vstamp                        : 34

API> reinit,c
...
OK
API> ?,c,select projection_targets, projection_ports, projection_proxval, projection_notes, projection_enable from dm_server_config where r_object_id = '3d0f123450000102'
projection_targets         projection_ports  projection_proxval  projection_notes                   projection_enable
-------------------------  ----------------  ------------------  ---------------------------------  -----------------
content-server-02          1489              2                   Projecting primary to remote.      1
content-server-01          1491              1                   Dedicated Docbroker N°1            1
content-server-01          1493              1                   Dedicated Docbroker N°2            1
(3 rows affected)

 

So this looks good, doesn't it? And yet through DA, it shows this ArrayIndexOutOfBoundsException… I tried a lot of things but the only way I could solve this issue was to remove the projection that the RCS installer automatically adds and then add it again… To simplify the process, I included that in our silent scripts so it is done automatically after the creation of the CFS. Just below is a small extract of the code I added to our silent scripts (shell) to remove the projection added by the installer (some variables are obviously defined before this section of code…):

...
echo "  **  "
echo "  **  HA Docbase Configuration - projections"
echo "  **  "
dql_select_docbase="${script_folder}/rcs_select_${docbase}"
dql_update_docbase="${script_folder}/rcs_update_${docbase}"
# Find the repeating-attribute index of the projection added by the RCS installer.
# For repeating attributes, i_position is negative (-1 for index 0), hence i_position*-1-1.
idql ${docbase} -U${install_owner} -Pxxx <<EOF > ${dql_select_docbase}.log 2>&1
SELECT 'projection_id=', i_position*-1-1 as projection_id FROM dm_server_config WHERE LOWER(object_name) like LOWER('${docbase}') AND projection_notes='Projecting primary to remote.' AND LOWER(projection_targets) like LOWER('%`hostname -s`%') enable(ROW_BASED)
go
EOF
if [[ ${?} != 0 ]]; then
  echo "ERROR - There was a problem while gathering the index for ${docbase}. Please see the file '${dql_select_docbase}.log' for errors. Exiting."
  exit
fi
# Extract the numeric index from the idql output
index=`grep "^projection_id=[[:space:]]" ${dql_select_docbase}.log | sed 's,^.*=[[:space:]]*\([0-9]*\)[[:space:]]*$,\1,'`
if [[ ${index} == "" || ${index} == '""' ]]; then
  echo "INFO - There is no existing projection from the primary to the remote."
else
  echo -e "INFO - The index of 'dm_server_config.projection_targets' is:  \t${index}"
  # Remove all projection_* values at that index on the primary dm_server_config object
  idql ${docbase} -U${install_owner} -Pxxx <<EOF > ${dql_update_docbase}.log 2>&1
UPDATE dm_server_config object REMOVE projection_targets[${index}], REMOVE projection_ports[${index}], REMOVE projection_proxval[${index}], REMOVE projection_notes[${index}], REMOVE projection_enable[${index}] WHERE LOWER(object_name) like LOWER('${docbase}') enable(ROW_BASED)
go
EOF
  nb_error=`grep "_E_" ${dql_update_docbase}.log | wc -l`
  cat "${dql_update_docbase}.log"
  echo ""
  if [[ ${nb_error} != 0 ]]; then
    echo "ERROR - There was a problem while updating the object for ${docbase}. Please see above. Exiting."
    exit
  fi
fi
...

 

This small section of code just shows how this kind of thing can be automated. I'm first executing a DQL to gather the index of the projection added by the RCS installer. This is the correct index because the note added by the RCS is always “Projecting primary to remote.” and the target host is obviously the RCS… To confirm that DA is working again, you can reload the docbroker projection screen; for me it was indeed working.

 

So the next step is to add the projection again, but this time without the “.” at the end of the note (to differentiate them), because we still need this projection for the CS2 to be aware of the docbases on the CS1. You can either do it with DA directly or programmatically using iapi/idql; a sketch of the iapi way, reusing the same commands as at the beginning of this blog (index 2 this time, with the values of this environment), would look like this:
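
API> retrieve,c,dm_server_config
...
3d0f123450000102
API> set,c,l,projection_targets[2]
SET> content-server-02
...
OK
API> set,c,l,projection_ports[2]
SET> 1489
...
OK
API> set,c,l,projection_proxval[2]
SET> 2
...
OK
API> set,c,l,projection_notes[2]
SET> Projecting primary to remote
...
OK
API> set,c,l,projection_enable[2]
SET> T
...
OK
API> save,c,l
...
OK
API> reinit,c
...
OK

After adding this third projection again, I got the following in the docbase (notice the only difference is the missing “.” in the note):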

API> dump,c,l
...
USER ATTRIBUTES

  object_name                     : GR_DocBase
  ...
  projection_targets           [0]: content-server-01
                               [1]: content-server-01
                               [2]: content-server-02
  projection_ports             [0]: 1493
                               [1]: 1491
                               [2]: 1489
  projection_proxval           [0]: 1
                               [1]: 1
                               [2]: 2
  projection_notes             [0]: Dedicated Docbroker N°2
                               [1]: Dedicated Docbroker N°1
                               [2]: Projecting primary to remote
  projection_enable            [0]: T
                               [1]: T
                               [2]: T
  ...
  i_vstamp                        : 36

API> reinit,c
...
OK
API> ?,c,select projection_targets, projection_ports, projection_proxval, projection_notes, projection_enable from dm_server_config where r_object_id = '3d0f123450000102'
projection_targets         projection_ports  projection_proxval  projection_notes                   projection_enable
-------------------------  ----------------  ------------------  ---------------------------------  -----------------
content-server-02          1489              2                   Projecting primary to remote       1
content-server-01          1491              1                   Dedicated Docbroker N°1            1
content-server-01          1493              1                   Dedicated Docbroker N°2            1
(3 rows affected)

 

The result is the same but I was finally able to see the three projections through DA… I don't know exactly what caused this error, since I simply re-added the same thing with a slightly different note, but I'm sure it was caused by the RCS installation…

[Screenshot: Projection3]

 

Yet another interesting behaviour of Documentum… So many ways to have fun! :)

 

 

The article Documentum – RCS 7.3 – Issue with projections appeared first on Blog dbi services.
