Feed aggregator

#GoldenGate Microservices (3 of 5) … Distribution Service

DBASolved - Mon, 2017-09-25 09:44

This is post 3 of a 5-part series on Oracle GoldenGate 12.3 Microservices. This series of posts provides some details on the new graphical user interface (GUI) that has been built into Oracle GoldenGate 12.3.

The third part of setting up the replication environment with Microservices is understanding what the Distribution Server is. This component is the replacement for the Data Pump Extract in the classic architecture. The Distribution Server does all the same things as the Data Pump Extract except for transformations. If you perform transformations in the Data Pump, you really should reconsider where that work is being done.

Note: Transformations should be done at the source, target, or with Oracle Data Integrator.

The Distribution Server is your pathway for shipping trail files from source to target in the Microservices architecture. Let’s take a closer look at this service.

Upon accessing the Distribution Server page, you see a blank page (Figure 1). This is the overview page, where you will configure the distribution paths needed for replication.

Figure 1:

To begin adding a distribution path, click the plus ( + ) sign on the right-hand side of the page. This will take you to the Add Path page (Figure 2). On the Add Path page, you provide the details for the path, including the extract/trail file to read from, the protocol to use for transmission, log positioning, and many other options. Much of what you could do in the Data Pump Extract can be configured on this page as well.

Figure 2:
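For completeness (this is not covered in the original post), a distribution path can also be created programmatically through the Distribution Server's REST interface. The sketch below is a rough illustration in Python: the endpoint, payload layout, host names, ports and credentials are all assumptions, so check the GoldenGate Microservices REST API reference for your release before relying on it.

import requests

# Assumed values -- replace with your own environment details.
dist_server = "https://localhost:9003"   # Distribution Server URL (assumed port)
path_name = "dp_orders"                  # hypothetical distribution path name

payload = {
    "name": path_name,
    # read the local trail written by the extract ...
    "source": {"uri": "trail://localhost:9002/services/v2/sources?trail=aa"},
    # ... and ship it to the Receiver Server on the target host
    "target": {"uri": "wss://targethost:9103/services/v2/targets?trail=ab"},
}

# POST to the assumed path-creation endpoint, authenticating as an admin user.
resp = requests.post(
    dist_server + "/services/v2/sources/" + path_name,
    json=payload,
    auth=("oggadmin", "password"),   # hypothetical credentials
    verify=False,                    # lab only: skip TLS certificate verification
)
resp.raise_for_status()
print(resp.json())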

After adding a distribution path, you will be able to see clearly where the transactions are coming from and where they are being shipped to (Figure 3). You will also see what database, extract, and trail are being used for the transactions.

Figure 3:

If you click on the “Action” button, you are taken to a menu where you can perform operations on the Distribution Path. There is an option for “Details”. By looking at the details, you can quickly see, in more detail, what is happening with the Distribution Path. You can see the source and target, database, extract, trail name, source trail file, as well as many other items (Figure 4). Additionally, on the details page, you can see the statistics for what is running through the Distribution Path.

Figure 4:

Lastly, if you return to the context menu on the left-hand side, you will see an option for “Diagnosis & Reports”. Here is where you can take a look at the report file for the Distribution Server (Figure 5). This page provides you with a date-sorted view of the log file for easy reading. You will also be able to quickly spot errors and warnings thanks to the color indicators used with the log file.

Figure 5:


Filed under: Golden Gate
Categories: DBA Blogs

TNS could not resolve the service name

Tom Kyte - Mon, 2017-09-25 09:26
I have been trying to connect SQL Plus, Report Builder but getting error "TNS could not resolve the service name " In SQL developer i tried with two connections name it worked ,below is the detail : Connection name : MyConn user : system passs:...
Categories: DBA Blogs

Is there a risk in continuing to develop using plsql web toolkit?

Tom Kyte - Mon, 2017-09-25 09:26
Hi Tom, I hope you can help or point me in the right direction. I have been asked to evaluate an in-house developed system that is written entirely in plsql/web toolkit using mod_plsql. This is a heavily used service, sometimes with very high conc...
Categories: DBA Blogs

String buffer into Oracle (utl, xml)

Tom Kyte - Mon, 2017-09-25 09:26
<code>I have procedure to create xml file and save it on disc directory: create or replace procedure test_write_xml_data_to_file (p_directory varchar2, p_file_name varchar2) as v_file UTL_FILE.FILE_TYPE; v_amount INTEGER := 32767; v_xml_dat...
Categories: DBA Blogs

Error while inserting data through SQL Loader

Tom Kyte - Mon, 2017-09-25 09:26
Hi All, We are using SQL loader in our .Net application to insert data into oracle Data base. We are inserting through bulk insert process. We have another Stored procedure in Oracle which deletes data from the same table where insertion happe...
Categories: DBA Blogs

Using Interval Datatype

Tom Kyte - Mon, 2017-09-25 09:26
I have the following query where TS_END and TS_START are DATE columns. I used to run this query to give me the average response time in seconds: SELECT COUNT(*) system_process_count, TRUNC(AVG(lgrt.ts_end - lgrt.ts_start)*24*60*60, 1) FRO...
Categories: DBA Blogs

Chartis Names Oracle Category Leader for Financial Crime Risk Management

Oracle Press Releases - Mon, 2017-09-25 08:00
Press Release
Chartis Names Oracle Category Leader for Financial Crime Risk Management Oracle is the only company identified by Chartis as a category leader across all five proficiencies, including anti-money laundering

Redwood Shores, Calif.—Sep 25, 2017

Oracle Financial Crime and Compliance Management (FCCM) has been named a category leader for solutions in Anti-Money Laundering (AML), Enterprise Fraud, Know Your Customer (KYC), Watchlist Monitoring and Trader Surveillance by Chartis Research. Oracle is the only company to be identified by Chartis as a category leader in all five financial crime risk management categories.
“Our placement on the Chartis Research report re-affirms our status as a best-in-class provider with an unmatched array of services, and is reflective of the meticulous attention we place on our FCCM solutions,” said Oracle Financial Services Analytical Applications Group Vice President and General Manager Ambreesh Khanna. “We are at a time of renewed focus on anti-money laundering precautions and we take great pride in assuaging customers’ concerns through exemplary offerings. Oracle offers an un-paralleled, comprehensive solution set for financial crime risk management systems and we look forward to further strengthening this toolset.”
“I’m extremely pleased that our intense focus on improving detection effectiveness and the efficiency of case investigations is being validated in the industry,” said Oracle Financial Crime and Compliance Management Vice President Sunil Mathew. “There is a great new interest in applying cutting-edge detection technologies such as Machine Learning and Graph Analytics across all of these risk areas and having the strength of Oracle’s technology R&D behind our applications team has helped differentiate our approach in the marketplace."
The Chartis Financial Crime Risk Management Systems Market Update 2017 analyzes an environment that faces a significant level of disruption from new vendors, many of which are exploiting new technologies. The Chartis methodology takes into account the full breadth of risk management tools truly needed to run an enterprise-wide solution, taking a holistic view of the technology, tools and techniques used.
The report also features the Chartis RiskTech Quadrant™ to better illustrate the vendor landscape. This quadrant utilizes a comprehensive methodology formed through in-depth independent research and a scoring system that explains which technology solutions best meet an organization’s needs. It provides an exhaustive and in-depth look at the best AML, fraud technology, KYC, watchlist monitoring and trader surveillance solutions available in today’s marketplace, with Oracle ranking as a category leader in every section.
“Oracle demonstrates strong, across-the-board leadership, emerging as a category leader in all five risk categories we investigated,” said Sid Dash, Research Director at Chartis Research. “They demonstrated a strong and well-focused set of product functionality and technological capability, along with impressive breadth and depth of services and delivery capacity, backed up by a strong support infrastructure.”
Download an executive summary of the Chartis Financial Crime Risk Management Systems Market Update 2017 Report here.
Contact Information
Alex Moriconi
About Oracle
The Oracle Cloud delivers hundreds of SaaS applications and enterprise-class PaaS and IaaS services to customers in more than 195 countries and territories while processing 55 billion transactions a day. For more information about Oracle (NYSE:ORCL), please visit us at http://cloud.oracle.com.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
About Chartis
Chartis is the leading provider of research and analysis on the global market for risk technology, and is part of InfoPro Digital. Chartis's goal is to support enterprises as they drive business performance through better risk management, corporate governance and compliance, and to help clients make informed technology and business decisions by providing in-depth analysis and actionable advice on virtually all aspects of risk technology.
RiskTech Quadrant®, RiskTech100® and FinTech Quadrant™ are registered trademarks of Chartis Research (http://www.chartis-research.com).

Solaris : How to Change/Rename Guest Domain Name (LDOM Name)

Online Apps DBA - Mon, 2017-09-25 02:46

Steps: Save the current guest domain configuration into an XML file; shut down and unbind the guest LDOM; copy the XML file as the new LDOM XML file and edit it to replace the old name with the new name; destroy the old guest LDOM; create the new guest LDOM using the edited XML file; bind and […]

The post Solaris : How to Change/Rename Guest Domain Name (LDOM Name) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Oracle Cloud Announcement: More Automation. Lower Prices.

This week Larry Ellison, Executive Chairman and CTO of Oracle, announced two new programs designed to increase flexibility and automation in the way customers can purchase and consume their Oracle...

We share our skills to maximize your revenue!
Categories: DBA Blogs

How to create Teradata Server Dashboard

Nilesh Jethwa - Sun, 2017-09-24 15:10

Teradata delivers better business outcomes through technology-enabled solutions in the areas that matter most: from operational excellence and asset optimization, to customer experience and product innovation, to finance transformation and risk mitigation. It works with leading businesses in over 75 countries worldwide, including many of the top performers and best-known brands in telecom, transportation, consumer packaged goods, financial services and manufacturing.

Are you using Teradata Server for your data marts or data warehouse? If so, build your free Teradata Server dashboard with InfoCaptor dashboard software.

Read more at http://www.infocaptor.com/ice-database-connect-dashboard-to-teradata-sql

how the SQL query is processed

Tom Kyte - Sun, 2017-09-24 15:06
How the SQL query is processed in detail
Categories: DBA Blogs

Range constraint

Tom Kyte - Sun, 2017-09-24 15:06
Hi! First i want to thank you for the many Responses that where usefull for me. I need a constraint or trigger that controls if the row inserted or updated contains a range of numbers that is already contained in another row of the same table. ...
Categories: DBA Blogs

oracle plsql data migration

Tom Kyte - Sun, 2017-09-24 15:06
Hi Tom, i need to write a plsql program, Suppose there are 10 sup tables with different columns in each table and 1 target table which have columns same as combined all source columns. i need to write a procedure which will extracts the data from...
Categories: DBA Blogs

temp undo sizing

Tom Kyte - Sun, 2017-09-24 15:06
Hi team, I have One question - 1. Suppose i have 1TB DB Size so What our recommendation in term of -sizing of database memory. -Temporary tablespace -Undo tablespace Thanks
Categories: DBA Blogs

Convert sys_guid to number

Tom Kyte - Sun, 2017-09-24 15:06
Hi Tom I read some articles about to_number(sys_guid()), but You suggest not to use it. Why Apex QuickSQL suggests to_number(sys_guid()) for unique object-ID ? Example create or replace trigger country_biu before insert or update ...
Categories: DBA Blogs

Am I a DBA 3.0 or just an SQL*DBA?

Yann Neuhaus - Sun, 2017-09-24 13:47

There are currently a lot of new buzzwords and re-namings which suggest that our DBA role is changing, most of them accompanied by a #cloud hashtag. Oracle Technology Network is now called Oracle Developer Community. Larry Ellison announced the database that does not need to be operated by humans. And people are talking about the death of the DBA, about the future of the DBA, about DBA 3.0,…

Those are all generalizations and universal statements, and I don’t like generalizations. There is not only one type of DBA role. The DBA role in big US companies (where most of those claims come from) is very different from the DBA role in European medium-sized companies (where I’m doing most of my job). The only thing I like about generalization is that a simple counterexample is sufficient to prove that the universal statement is wrong. And I’ll take the example I know best: mine.

What do you call DBA role? Is it only server management, or database management? Are you a Dev DBA or an Ops DBA? And when you will go multitenant, will you become a CDB DBA or PDB DBA? And on Engineered Systems, are you still a Database administrator or a Database Machine administrator?

So here is my experience. Let’s flashback to sysdate-8000.
My first job with an Oracle Database was in 1994. I was working at Alcatel in Paris. The IT was typical of big companies at that time: all IBM, databases were DB2 on mainframe, and applications were developed through a multi-year waterfall cycle from specifications to production. But I was not working there. I was in the accounting department, which had its own little IT with some small Borland Paradox applications. This IT department was one senior person, from whom I learned everything, and me, a junior developer doing some Borland Paradox things. When the need for a new application came up, the idea was to build a prototype in this ‘rebel’ IT service rather than waiting for the whole development cycle managed by the official IT department.

We asked SUN to lend us a workstation. We asked Oracle to lend us an Oracle 7.1 database. That was not difficult. Both of them were happy to try to get into this all-IBM company by another way. And Borland had a new product, Delphi, so this is where I started to build the prototype. I had everything to learn there: I had never installed a Unix system, I had never installed a database, I had never even configured a TCP/IP network. But with the books (no internet then) and the help of my manager (did I say I owe him everything?) we got the environment ready within one month.

Today we are talking about Cloud PaaS as the only way to get an environment quickly to start a new development project. The marketing explains that you can get the environment with a few clicks and have it operational one hour later. But in real life, I know several projects where the environment is not ready after one month, for different reasons (time to choose which service, evaluate the cost, set it up). Remember that in my case, 23 years ago, it took one or two months, but I had a full server, enough storage, available 24/7, with the workstation on my desk. And all that for free.

So I started to develop the application. The business users were there in the next room. A short meeting, a small doc, and the conception of the first prototype was ready. Development with Delphi was really quick (remember RAD – Rapid Application Development?) and as soon as I had a usable prototype, one user had access to it, giving me feedback for future evolutions. We built something very clever: easy to evolve, fit to business needs, with good performance, and easy to deploy. It was replaced years later by an application provided by the IT department, but our application was used as the specification.

So, what was my role here? I was clearly a developer and not a DBA. But I was already designing a system, installing a server, creating a database, modeling the data and analyzing performance. When interacting with the database, I was just a DBA doing some SQL. If I wanted to invent a new word myself, I would call that an SQL*DBA, like the name of the tool that was replaced at that time by svrmgrl for the DBA stuff related to server management. All of that has been consolidated into the same tools since: sqlplus does the DBA and Developer stuff, Enterprise Manager does both, SQL Developer does both…

Over the 20 years that followed, I evolved into a DBA. I’ve learned new technologies each time, but I don’t think that my DBA role has changed. I’ve done BI (called ‘infocenter’ at that time and ‘datawarehouse’ later) with Business Objects and DB2 and Data Propagator (yes, logical replication, and that was in 1996). All this was designed in cooperation with the business end-users. In 2001 I managed terabyte databases full of numbers, doing what we call Big Data today. All maintenance tasks (capacity planning, upgrade, recovery) were done very closely to the business utilization of this data. I administered telecom databases at a time when mobile phone providers came up with a new idea every month to be implemented. We would call that ‘agile’ today. I’ve set up databases for quick cloning with transportable tablespaces. I’ve set up continuous integration tests of databases based on flashback technologies and workspace manager. I’ve configured parallel query and XMLDB to do analytics on unstructured data, with better results than a MapReduce PoC. Technology evolves, names are modernized, but my DBA role is the same and I still use SQL.

The latest I’ve read about this changing role is Penny Avril’s interview, and it is a very good explanation of all those announced changes. I totally agree with all of it. Except that I see no change in my role there. Let’s take this: DBAs are being asked to understand what businesses do with data rather than just the mechanics of keeping the database healthy and running.
I have always known which business users and which business cases are interacting with the database. My resume always mentioned the business area of each project. I’ve always interacted with end-users and developers for any database I administered, whether they are DEV, TEST or PROD databases. You cannot set up a backup/recovery plan without knowing the data and the business requirements. You cannot upgrade or migrate without interacting with users to test and validate the migration. You cannot address performance without interacting with users and developers. You cannot size the SGA without knowing how the data is used, at the different times of the day or the week. You cannot keep the database healthy without knowing how it is used. You cannot build an infrastructure architecture without the support of the business for the software costs.

My DBA job is not the mechanics of keeping processes running on a server. That would be a Database System administrator. But we are talking about a DataBase Administrator. The major part of my job, and the main reason why I like it, is the human interaction around the database. You talk to server/storage/network administrators, you talk to all kinds of business users, you talk to developers, you talk to managers, you talk to vendors,… You have to understand OS schedulers, network security, and mutexes, and also have to understand banking, retail and hospital data workflows. So I don’t worry about the self-driving/no-human-labor part of the DBA role that may be moved to the cloud provider. Those are boring things that we already automated a long time ago. For example, at dbi-services we have included all this automation in the DMK. And this goes further for open source databases with the OpenDB Appliance. Do you think a consulting company would provide this for free to their customers if this automation took the whole DBA job away? The boring and recurring things are automated to avoid errors, and all the intelligent stuff is provided by experienced human DBAs talking SQL with the system. As always.


The post Am I a DBA 3.0 or just an SQL*DBA? appeared first on Blog dbi services.

Oracle SOA Suite and WebLogic: Overview of key and keystore configuration

Amis Blog - Sun, 2017-09-24 09:31

Keystores and the keys within can be used for security on the transport layer and application layer in Oracle SOA Suite and WebLogic Server. Keystores hold private keys (identity) but also public certificates (trust). This is important when WebLogic / SOA Suite acts as the server but also when it acts as the client. In this blog post I’ll explain the purpose of keystores, the different keystore types available and which configuration is relevant for which keystore purpose.

Why use keys and keystores?

The image below (from here) illustrates the TCP/IP model and how the different layers map to the OSI model. When, in the elaboration below, I talk about the application and transport layers, I mean the TCP/IP model layers, and more specifically HTTP.

The two main reasons why you might want to employ keystores are that

  • you want to enable security measures on the transport layer
  • you want to enable security measures on the application layer

Almost all of the methods/techniques mentioned below require the use of keys, and you can imagine the correct configuration of these keys within SOA Suite and WebLogic Server is very important. They determine which clients can be trusted, how services can be called and also how outgoing calls identify themselves.

You might think transport layer and application layer security are two completely separate things. Often, though, they are not that separate. The combination of transport layer and application layer security has some limitations, and often the same products / components are used to configure both.

  • Double encryption is not allowed. See here. ‘U.S. government regulations prohibit double encryption’. Thus you are not allowed to do encryption on the transport layer and application layer at the same time. This does not mean you cannot do this though, but you might encounter some product restrictions since, you know, Oracle is a U.S. company.
  • Oracle Web Services Manager (OWSM) allows you to configure policies that check whether transport layer security is used (HTTPS in this case) and is also used to configure application level security. It is increasingly common for a single product to handle both transport layer and application layer security; API gateway products such as Oracle API Platform Cloud Service are another example.
Transport layer (TLS)

Cryptography is achieved by using keys from keystores. On the transport layer you can achieve authentication, integrity, security and reliability for the connection as a whole.

You can read more on TLS in SOA Suite here.

Application layer

On the application level you can achieve similar feats (authentication, integrity, security, reliability), though often more fine-grained, for example at the user level or for a specific part of a message instead of at the host level or for the entire connection. Performance is usually not as good as with transport layer security because the checks which need to be performed can require actual parsing of messages, instead of securing the transport (HTTP) connection as a whole regardless of what passes through. The implementation depends on the application technologies used and is thus quite variable.

  • Authentication by using security tokens such as for example
    • SAML. SAML tokens can be used in WS-Security headers for SOAP and in plain HTTP headers for REST.
    • JSON Web Tokens (JWT) and OAuth are also examples of security tokens
    • Certificate tokens in different flavors can be used which directly use a key in the request to authenticate.
    • Digest authentication can also be considered. Using digest authentication, a username-password token is created which is sent using WS-Security headers.
  • Security and reliability by using message protection. Message protection consists of measures to achieve message confidentiality and integrity. This can be achieved by
    • signing. XML Signature can be used for SOAP messages and is part of the WS Security standard. Signing can be used to achieve message integrity.
    • encrypting. Encrypting can be used to achieve confidentiality.
Types of keystores

There are two types of keystores in use in WebLogic Server / OPSS: JKS keystores and KSS keystores. To summarize the main differences, see the table below:


First there are JKS keystores. These are Java keystores which are saved on the filesystem. JKS keystores can be edited by using the keytool command, which is part of the JDK. There is no direct support for editing JKS keystores from WLST, WebLogic Console or Fusion Middleware Control. You can use WLST, however, to configure which JKS file to use. For example, see here:

cd ('Servers/myserver/ServerMBean/myserver')
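
As a rough illustration (this exact script is not from the referenced example), a connected WLST session could point a managed server at custom identity and trust JKS files roughly as follows; the server name, file locations, alias and passwords are placeholders:

# Hedged sketch: configure custom identity/trust JKS keystores for a managed server.
# Server name, file paths, alias and passwords below are placeholders.
connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit()
startEdit()

cd('/Servers/soa_server1')
cmo.setKeyStores('CustomIdentityAndCustomTrust')
cmo.setCustomIdentityKeyStoreFileName('/u01/keystores/identity.jks')
cmo.setCustomIdentityKeyStoreType('JKS')
set('CustomIdentityKeyStorePassPhrase', 'identity_store_password')
cmo.setCustomTrustKeyStoreFileName('/u01/keystores/trust.jks')
cmo.setCustomTrustKeyStoreType('JKS')
set('CustomTrustKeyStorePassPhrase', 'trust_store_password')

# The SSL configuration selects which key (alias) in the identity store is used.
cd('/Servers/soa_server1/SSL/soa_server1')
cmo.setServerPrivateKeyAlias('server_key')
set('ServerPrivateKeyPassPhrase', 'key_password')

save()
activate()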



Keys in JKS keystores can have passwords, as can the keystores themselves. If you use JKS keystores in OWSM policies, you are required to configure the key passwords in the credential store framework (CSF). These can be put in the map oracle.wsm.security and can be called keystore-csf-key, enc-csf-key and sign-csf-key. Read more here. In a clustered environment you should make sure all the nodes can access the configured keystores/keys by, for example, putting them on shared storage.
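
A hedged WLST sketch of creating those CSF entries with the createCred command (connected to the Admin Server) is shown below; the map and key names are the ones mentioned above, while the user and password values are placeholders:

# Sketch: OWSM keystore-related credential store entries in the oracle.wsm.security map.
# For enc-csf-key and sign-csf-key the user is typically the key alias; passwords are placeholders.
connect('weblogic', 'welcome1', 't3://adminhost:7001')
createCred(map='oracle.wsm.security', key='keystore-csf-key',
           user='owsm', password='keystore_password', desc='OWSM keystore password')
createCred(map='oracle.wsm.security', key='enc-csf-key',
           user='orakey', password='key_password', desc='encryption key password')
createCred(map='oracle.wsm.security', key='sign-csf-key',
           user='orakey', password='key_password', desc='signing key password')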


OPSS also offers KeyStoreService (KSS) keystores. These are saved in a database in an OPSS schema which is created by executing the RCU (Repository Creation Utility) during installation of the domain. KSS keystores are the default keystores since WebLogic Server 12.1.2 (and thus for SOA Suite since 12.1.3). KSS keystores can be configured to control access to keys either with policies or with passwords. OWSM does not support using a KSS keystore which is protected with a password (see here: ‘Password protected KSS keystores are not supported in this release’); thus for OWSM, the KSS keystore should be configured to use policy-based access.

KSS keys cannot be configured to have a password, and using keys from a KSS keystore in OWSM policies thus does not require you to configure credential store framework (CSF) passwords to access them. KSS keystores can be edited from Fusion Middleware Control, by using WLST scripts or even by using a REST API (here). You can, for example, import JKS files quite easily into a KSS store with WLST using something like:

svc = getOpssService(name='KeyStoreService')
svc.importKeyStore(appStripe='mystripe', name='keystore2', password='password',aliases='myOrakey', keypasswords='keypassword1', type='JKS', permission=true, filepath='/tmp/file.jks')
Where and how are keystores / keys configured

As mentioned above, keys within keystores are used to achieve transport security and application security for various purposes. Let’s translate this to Oracle SOA Suite and WebLogic Server.

Transport layer

Incoming
  • Keys are used to achieve TLS connections between different components of the SOA Suite, such as Admin Servers, Managed Servers and Node Managers. The keystore configuration for those can be done from the WebLogic Console for the servers and manually for the NodeManager. You can configure identity and trust this way, and whether the client needs to present a certificate of its own so the server can verify its identity. See for example here on how to configure this.
  • Keys are used to allow clients to connect to servers via a secure connection (in general, so not specific to communication between WebLogic Server components). This configuration can be done in the same place as above, with the only difference that no manual editing of files on the filesystem is required (since no NodeManager is involved here).

Outgoing

Composites (BPEL, BPM)

Keys are used to achieve TLS connections to different systems from the SOA Suite. The SOA Suite acts as the client here. The configuration of the identity keystore can be done from Fusion Middleware Control by setting the KeystoreLocation MBean. See the image below. Credential store entries need to be added to store the identity keystore password and key password. Storing the key password is not required if it is the same as the keystore password. The credential keys to create for this are SOA/KeystorePassword and SOA/KeyPassword (with the user being the same as the key alias from the keystore to use). In addition, components also need to be configured to use a key to establish identity. In the composite.xml, the property oracle.soa.two.way.ssl.enabled can be used to enable outgoing two-way SSL from a composite.

Setting SOA client identity store for 2-way SSL


Specifying the SOA client identity keystore and key password in the credential store
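
In the same spirit, a hedged WLST sketch for the SOA credential store entries mentioned above (the key alias and password values are placeholders):

# Sketch: credential store entries used for outgoing two-way SSL from composites.
# Map/key names SOA/KeystorePassword and SOA/KeyPassword come from the text above;
# the user is the key alias in the client identity keystore.
connect('weblogic', 'welcome1', 't3://adminhost:7001')
createCred(map='SOA', key='KeystorePassword',
           user='soa_client_key', password='keystore_password', desc='SOA client keystore password')
createCred(map='SOA', key='KeyPassword',
           user='soa_client_key', password='key_password', desc='SOA client key password')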

Service Bus

The Service Bus configuration for outgoing SSL connections is quite different from the composite configuration. The following blog here describes nicely where to configure the keystores and keys. In the WebLogic Server console, you create a PKICredentialMapper which refers to the keystore and also contains the keystore password configuration. From the Service Bus project, a ServiceKeyProvider can be configured which uses the PKICredentialMapper and contains the configuration for the key and key password to use. The ServiceKeyProvider configuration needs to be done from the Service Bus console, since JDeveloper cannot resolve the credential mapper.

To summarize the above:

Overwriting keystore configuration with JVM parameters

You can override the keystores used with JVM system parameters such as javax.net.ssl.trustStore, javax.net.ssl.trustStoreType, javax.net.ssl.trustStorePassword, javax.net.ssl.keyStore, javax.net.ssl.keyStoreType and javax.net.ssl.keyStorePassword, for example in the setDomainEnv script. These will override the WebLogic Server configuration, but not the OWSM configuration (application layer security, described below). Thus if you specify, for example, an alternative truststore on the command line, this will not influence HTTP connections going from SOA Suite to other systems, even when message protection (using WS-Security) has been enabled, which uses keys and checks trust. It will influence HTTPS connections though. For more detail on the above, see here.

Application layer

Keys can be used by OWSM policies to for example achieve message protection on the application layer. This configuration can be done from Fusion Middleware Control.

The OWSM runtime does not use the WebLogic Server keystore that is configured using the WebLogic Server Administration Console and used for SSL. The keystore which OWSM uses by default since 12.1.2 is kss://owsm/keystore, and it can be configured from the OWSM Domain configuration. See above for the difference between KSS and JKS keystores.

OWSM keystore contents and management from FMW Control

OWSM keystore domain config

In order for OWSM to use JKS keystores/keys, credential store framework (CSF) entries need to be created which contain the keystore and key passwords. The OWSM policy configuration determines the key alias to use. For KSS keystores/keys, no CSF passwords are required to access keystores/keys, since OWSM does not support password-protected KSS keystores and KSS does not provide a feature to put a password on keys.

Identity for outgoing connections (application policy level, e.g. signing and encryption keys) is established by using OWSM policy configuration. Trust for SAML/JWT (secure token service and client) can be configured from the OWSM Domain configuration.

Finally, this is only the tip of the iceberg

There is a lot to tell in the area of security. Zooming in on transport and application layer security, there is also a wide range of options and dos and don’ts. I have not talked about the different choices you can make when configuring application or transport layer security. The focus of this blog post has been to provide an overview of keystore configuration/usage, and thus I have not provided much detail. If you want to learn more on how to achieve good security on your transport layer, read here. To configure two-way SSL using TLS 1.2 on WebLogic / SOA Suite, read here. Application level security is a different story altogether and can be split up into a wide range of possible implementation choices.

Different layers in the TCP/IP model

If you want to achieve solid security, you should look at all layers of the TCP/IP model and not just at the transport and application layer. It also helps if you use different security zones and divide your network so that your development environment cannot accidentally access your production environment, or the other way around.

Final thoughts on keystore/key configuration in WebLogic/SOA Suite

When diving into the subject, I realized that using and configuring keys and keystores can be quite complex. The reason for this is that for every purpose of a key/keystore, configuration in a different location seems to be required. It would be nice if that were it; however, sometimes configuration overlaps, such as the truststore used by WebLogic Server which is also used by SOA Suite. This feels inconsistent, since for outgoing calls, composites and Service Bus use entirely different configurations. It would be nice if it could be made a bit more consistent and, as a result, simpler.

The post Oracle SOA Suite and WebLogic: Overview of key and keystore configuration appeared first on AMIS Oracle and Java Blog.

Schema On Read?

Dylan's BI Notes - Sun, 2017-09-24 07:48
I first saw “create external table” in Oracle DBMS 11G. It was created for the purpose of loading data. When Hive was introduced, a lot of data had already been created in HDFS. Hive was introduced to provide an SQL interface on that data. Using the external table concept is natural to the design. […]
Categories: BI & Warehousing

Excellent reference for understanding technical terms related to today’s IT…

OracleApps Epicenter - Sat, 2017-09-23 22:01
Here is the Ultimate FinTech Glossary, received in a forwarded email (source: internet).
Categories: APPS Blogs

Grid Infrastructure CRS_SWONLY Silent Install

Michael Dinh - Sat, 2017-09-23 18:00

Configuring Software Binaries for Oracle Grid Infrastructure for a Standalone Server


[root@arrow1 ~]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),994(vboxsf)

[root@arrow1 ~]# mkdir /u01
[root@arrow1 ~]# chmod 775 /u01/
[root@arrow1 ~]# chown oracle:oinstall /u01/

[root@arrow1 ~]# vi /etc/oraInst.loc
[root@arrow1 ~]# chown oracle:oinstall /etc/oraInst.loc

[root@arrow1 ~]# cat /etc/oraInst.loc

[root@arrow1 ~]# ll /etc/oraInst.loc
-rw-r--r-- 1 oracle oinstall 56 Sep 23 17:11 /etc/oraInst.loc
[root@arrow1 ~]# exit

[oracle@arrow1 ~]$ mkdir -p /u01/app/
[oracle@arrow1 ~]$ mkdir -p /u01/app/grid
[oracle@arrow1 ~]$ mkdir -p /u01/app/oraInventory


[oracle@arrow1 ~]$ ls -l /sf_OracleSoftware/
total 6297251
-rwxrwxrwx 1 vagrant vagrant 3453696911 Apr 12 09:44 linuxx64_12201_database.zip
-rwxrwxrwx 1 vagrant vagrant 2994687209 Apr 12 09:43 linuxx64_12201_grid_home.zip

[oracle@arrow1 ~]$ unzip -d /u01/app/ -qo /sf_OracleSoftware/; echo $?

[oracle@arrow1 ~]$ ls /u01/app/
addnode     crs     deinstall    gpnp           inventory  lib      opmn     oui      qos       rootupgrade.sh  srvm      welcome.html
assistants  css     demo         gridSetup.sh   javavm     log      oracore  owm      racg      runcluvfy.sh    suptools  wlm
bin         cv      diagnostics  has            jdbc       md       ord      perl     rdbms     scheduler       tomcat    wwg
cdata       dbjava  dmu          hs             jdk        network  ordim    plsql    relnotes  slax            ucp       xag
cha         dbs     env.ora      install        jlib       nls      ords     precomp  rhp       sqlpatch        usm       xdk
clone       dc_ocm  evm          instantclient  ldap       OPatch   oss      QOpatch  root.sh   sqlplus         utl

[oracle@arrow1 ~]$ ls /u01/app/
grid_2017-01-26_04-10-28PM.rsp  gridsetup.rsp  sample.ccf
[oracle@arrow1 ~]$


$ cp -v gridsetup.rsp grid_crs_swonly.rsp
‘gridsetup.rsp’ -> ‘grid_crs_swonly.rsp’

$ diff -iwyB --suppress-common-lines -W 150 gridsetup.rsp grid_crs_swonly.rsp; echo
oracle.install.option=                                                    |     oracle.install.option=CRS_SWONLY
ORACLE_BASE=                                                              |     ORACLE_BASE=/u01/app/grid
oracle.install.asm.OSDBA=                                                 |     oracle.install.asm.OSDBA=dba
oracle.install.asm.OSOPER=                                                |     oracle.install.asm.OSOPER=dba
oracle.install.asm.OSASM=                                                 |     oracle.install.asm.OSASM=dba


$ ./runcluvfy.sh stage -pre hacfg

Verifying Physical Memory ...FAILED (PRVF-7530)
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...PASSED
Verifying Free Space: arrow1:/usr,arrow1:/var,arrow1:/etc,arrow1:/sbin,arrow1:/tmp ...PASSED
Verifying User Existence: oracle ...
  Verifying Users With Same UID: 54321 ...PASSED
Verifying User Existence: oracle ...PASSED
Verifying Group Existence: dba ...PASSED
Verifying Group Existence: oinstall ...PASSED
Verifying Group Membership: oinstall(Primary) ...PASSED
Verifying Group Membership: dba ...PASSED
Verifying Run Level ...PASSED
Verifying Hard Limit: maximum open file descriptors ...PASSED
Verifying Soft Limit: maximum open file descriptors ...PASSED
Verifying Hard Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum stack size ...PASSED
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: ip_local_port_range ...PASSED
Verifying OS Kernel Parameter: rmem_default ...PASSED
Verifying OS Kernel Parameter: rmem_max ...PASSED
Verifying OS Kernel Parameter: wmem_default ...PASSED
Verifying OS Kernel Parameter: wmem_max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: binutils- ...PASSED
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED
Verifying Package: sysstat-10.1.5 ...PASSED
Verifying Package: ksh ...PASSED
Verifying Package: make-3.82 ...PASSED
Verifying Package: glibc-2.17 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED
Verifying Package: libaio-0.3.109 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-6.2-4 ...PASSED
Verifying Package: net-tools-2.0-0.17 ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...PASSED

Pre-check for Oracle Restart configuration was unsuccessful.

Failures were encountered during execution of CVU verification request "stage -pre hacfg".

Verifying Physical Memory ...FAILED
arrow1: PRVF-7530 : Sufficient physical memory is not available on node
        "arrow1" [Required physical memory = 8GB (8388608.0KB)]

CVU operation performed:      stage -pre hacfg
Date:                         Sep 23, 2017 5:22:21 PM
CVU home:                     /u01/app/
User:                         oracle


$ ./gridSetup.sh -silent -waitforcompletion -skipPrereqs -responseFile /u01/app/
Launching Oracle Grid Infrastructure Setup Wizard...

[WARNING] [INS-41808] Possible invalid choice for OSASM Group.
   CAUSE: The name of the group you selected for the OSASM group is commonly used to grant other system privileges (For example: asmdba, asmoper, dba, oper).
   ACTION: Oracle recommends that you designate asmadmin as the OSASM group.
[WARNING] [INS-41809] Possible invalid choice for OSDBA Group.
   CAUSE: The group name you selected as the OSDBA for ASM group is commonly used for Oracle Database administrator privileges.
   ACTION: Oracle recommends that you designate asmdba as the OSDBA for ASM group, and that the group should not be the same group as an Oracle Database OSDBA group.
[WARNING] [INS-41810] Possible invalid choice for OSOPER Group.
   CAUSE: The group name you selected as the OSOPER for ASM group is commonly used for Oracle Database administrator privileges.
   ACTION: Oracle recommends that you designate asmoper as the OSOPER for ASM group, and that the group should not be the same group as an Oracle Database OSOPER group.
[WARNING] [INS-41813] OSDBA for ASM, OSOPER for ASM, and OSASM are the same OS group.
   CAUSE: The group you selected for granting the OSDBA for ASM group for database access, and the OSOPER for ASM group for startup and shutdown of Oracle ASM, is the same group as the OSASM group, whose members have SYSASM privileges on Oracle ASM.
   ACTION: Choose different groups as the OSASM, OSDBA for ASM, and OSOPER for ASM groups.
You can find the log of this install session at:

As a root user, execute the following script(s):
        1. /u01/app/

Execute /u01/app/ on the following nodes:

Successfully Setup Software.

RUN root.sh

[root@arrow1 ~]# /u01/app/
Check /u01/app/ for the output of root script

[root@arrow1 ~]# cat /u01/app/
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

To configure Grid Infrastructure for a Cluster or Grid Infrastructure for a Stand-Alone Server execute the following command as oracle user:
This command launches the Grid Infrastructure Setup Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.

[root@arrow1 ~]#

This is where Oracle has taken a turn for the worse.
Proper instructions are not provided after running root.sh, and the ones shown are incorrect!

AS ROOT RUN roothas.pl

[root@arrow1 ~]# /u01/app/ -I /u01/app/ -I /u01/app/ /u01/app/
Using configuration parameter file: /u01/app/
The log of current session can be found at:
2017/09/23 18:11:35 CLSRSC-363: User ignored prerequisites during installation
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node arrow1 successfully pinned.
2017/09/23 18:11:44 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'arrow1'
CRS-2673: Attempting to stop 'ora.evmd' on 'arrow1'
CRS-2677: Stop of 'ora.evmd' on 'arrow1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'arrow1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.

arrow1     2017/09/23 18:12:54     /u01/app/     0
2017/09/23 18:12:54 CLSRSC-327: Successfully configured Oracle Restart for a standalone server
[root@arrow1 ~]#

AS ORACLE runInstaller -updateNodeList

$ cd oui/bin/

$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/ -defaultHomeName CLUSTER_NODES= CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4080 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.


$ ps -ef|grep oracle
root      3796  1125  0 17:03 ?        00:00:00 sshd: oracle [priv]
oracle    3798  3796  0 17:03 ?        00:00:01 sshd: oracle@pts/2
oracle    3799  3798  0 17:03 pts/2    00:00:00 -bash
root     13572  1125  0 17:39 ?        00:00:00 sshd: oracle [priv]
oracle   13574 13572  0 17:39 ?        00:00:00 sshd: oracle@pts/0
oracle   13575 13574  0 17:39 pts/0    00:00:00 -bash
oracle   25084     1  0 18:12 ?        00:00:03 /u01/app/ reboot
oracle   25197     1  0 18:12 ?        00:00:02 /u01/app/
oracle   25215     1  0 18:12 ?        00:00:01 /u01/app/
oracle   25269 25215  0 18:12 ?        00:00:01 /u01/app/ -o /u01/app/[HOSTNAME]/evmd/evmlogger.info -l /u01/app/[HOSTNAME]/evmd/evmlogger.log
oracle   25404  3799  0 18:27 pts/2    00:00:00 ps -ef
oracle   25405  3799  0 18:27 pts/2    00:00:00 grep --color=auto oracle


$ export LOG=/u01/app/oraInventory/logs/GridSetupActions2017-09-23_05-52-14PM/gridSetupActions2017-09-23_05-52-14PM.log

$ grep -e '[[:upper:]]: ' $LOG |cut -d ":" -f1|sort -u

$ grep -e '[[:upper:]]: ' $LOG|grep "^WARNING: "
WARNING:  [Sep 23, 2017 5:52:18 PM] Unable to find the namespace URI. Reason: Start of root element expected.
WARNING:  [Sep 23, 2017 5:52:18 PM] Unable to find the namespace URI. Reason: Start of root element expected.
WARNING:  [Sep 23, 2017 5:52:21 PM] Validation disabled for the state init
WARNING:  [Sep 23, 2017 5:52:26 PM] Validation disabled for the state flowDecider
WARNING:  [Sep 23, 2017 5:52:26 PM] Validation disabled for the state CRSNodeInformationUI
WARNING:  [Sep 23, 2017 5:52:26 PM] No Local Domain found, setting to whatever first domain is available
WARNING:  [Sep 23, 2017 5:52:26 PM] No Domain found, skipping Same Domain validation
WARNING:  [Sep 23, 2017 5:52:27 PM] [WARNING] [INS-41808] Possible invalid choice for OSASM Group.
WARNING:  [Sep 23, 2017 5:52:27 PM] [WARNING] [INS-41809] Possible invalid choice for OSDBA Group.
WARNING:  [Sep 23, 2017 5:52:27 PM] [WARNING] [INS-41810] Possible invalid choice for OSOPER Group.
WARNING:  [Sep 23, 2017 5:52:27 PM] [WARNING] [INS-41813] OSDBA for ASM, OSOPER for ASM, and OSASM are the same OS group.
WARNING:  [Sep 23, 2017 5:52:29 PM] Validation disabled for the state prereqExecutionDecider
WARNING:  [Sep 23, 2017 5:52:29 PM] Validation disabled for the state showSummary
WARNING:  [Sep 23, 2017 5:52:29 PM] Unable to find the namespace URI. Reason: /u01/app/ (No such file or directory)
WARNING:  [Sep 23, 2017 5:53:50 PM] Validation disabled for the state finish
WARNING:  [Sep 23, 2017 5:53:50 PM] Could not create directory: /u01/app/oraInventory/logs/GridSetupActions2017-09-23_05-52-14PM

