Feed aggregator

Oracle Linux kernel blogs

Wim Coekaerts - Mon, 2018-02-12 10:50

Don't forget to check the Linux kernel team's blog. We now have a regular cadence for writing up things that are hopefully interesting: projects the developers are working on or have worked on, and so on.

 

Confluent Partnership

Rittman Mead Consulting - Mon, 2018-02-12 09:14

Confluent

Here at Rittman Mead, we are continually broadening the scope and expertise of our services to help our customers keep pace with today's ever-changing technology landscape. One significant change we have seen over the last few years is the increased adoption of data streaming. These solutions can help solve a variety of problems, from real-time data analytics to forming the entire backbone of an organisation's data architecture. We have worked with a number of different technologies that can enable this; however, we often see that Kafka ticks the most boxes.

This is reflected by some of the recent blog posts you will have seen, like Tom Underhill hooking up his gaming console to Kafka and Paul Shilling’s piece on collating sailing data. Both of these posts use day-to-day, real-world examples to demonstrate some of the concepts behind Kafka.

In conjunction with these, we have been involved in more serious proofs of concept and projects with clients involving Kafka, which no doubt we will write about in time. To help us further our knowledge and also immerse ourselves in the developer community, we have decided to become Confluent partners. Confluent was founded by the people who initially developed Kafka at LinkedIn and provides a managed and supported version of Kafka through their platform.

We chose Confluent as we saw them as the driving force behind Kafka, plus the additions they are making to the platform such as the streaming API and KSQL are opening a lot of doors for how streamed data can be used.

We look forward to growing our knowledge and experience in this area and the possibilities that working with both Kafka and Confluent will bring us.

Categories: BI & Warehousing

Index seek operator and residual IO

Yann Neuhaus - Mon, 2018-02-12 09:00

This blog post follows on from the previous article about index seeks and their gotchas. I encountered another interesting case, and I decided to write this article not so much because of the nature of the problem itself, but rather because of the different ways that exist to troubleshoot it.

Firstly, let’s set the scene:

Here is a simple update query and its corresponding execution plan, which at first glance suggests the plan is efficient in terms of performance:

declare @P0 datetime2,
       @P1 int,
       @P2 nvarchar(4000)

set @P0 = GETDATE()
set @P1 = 1
set @P2 = '005245'

UPDATE
	[TABLE]
SET
	[_DATE] = @P0
WHERE
	[_ID] = @P1
    AND
    [_IDENTIFIER] = @P2
;

 

blog 129 - 1- query plan before optimization

An index seek – with a cardinality of 1 – followed by a clustered index update operation. A simple case, as you may notice here. But although the customer noticed a warning icon in the execution plan, he was confident enough about this plan not to worry about it.

But if we look closer at the warning icon, it concerns a CONVERT_IMPLICIT operation (a hidden conversion done by the optimizer) that may affect the seek plan, as stated by the pop-up:

blog 129 - 2- query plan warning

In fact, in the context of this customer, this query became an issue when it was executed a thousand times in a very short period, triggering lock timeout issues and consuming one entire vCPU of the server as well.

Let’s jump quickly to the root cause and the solution. Obviously the root cause concerned the @P2 parameter type here – NVARCHAR(4000) – which forced the optimizer to introduce a CONVERT_IMPLICIT operation, making the predicate non-sargable and the index seek operation inefficient. This is particularly true when the CONVERT_IMPLICIT operation concerns the column in the predicate rather than the parameter. Indeed, in this case the @P2 type is NVARCHAR(4000) whereas the _IDENTIFIER column type is VARCHAR(50), so the optimizer first has to convert the _IDENTIFIER column (the lower precedence type) to match the @P2 parameter (the higher precedence type). Let’s also say that the @P1 parameter is not selective enough to prevent a range scan operation in this case.
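The fix therefore consists in removing the implicit conversion on the column side. Here is a minimal sketch of what the batch looks like once the parameter is declared with the same type as the column (assuming the application can be changed to bind a VARCHAR(50) value):

declare @P0 datetime2,
       @P1 int,
       @P2 varchar(50)   -- instead of nvarchar(4000): same type as the _IDENTIFIER column

set @P0 = GETDATE()
set @P1 = 1
set @P2 = '005245'

UPDATE
	[TABLE]
SET
	[_DATE] = @P0
WHERE
	[_ID] = @P1
    AND
    [_IDENTIFIER] = @P2
;

With matching types, any remaining conversion happens on the parameter side and both comparisons can be used as seek predicates, which is consistent with the 6 logical reads shown further down.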

Although the actual execution plan was showing an index seek with a cardinality of 1, I was able to quickly show my customer that this figure completely masks the real amount of work done, simply by using the IO and TIME statistics.
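For reference, these figures are obtained simply by enabling the session statistics before running the statement (a quick sketch):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- ... run the UPDATE statement shown above ...

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;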

  CPU time = 0 ms,  elapsed time = 0 ms.
Table 'TABLE'. Scan count 1, logical reads 17152, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
   CPU time = 609 ms,  elapsed time = 621 ms.

 

Well, do you really think that a true index seek would read that many pages and consume this amount of CPU per execution? :) … I let you imagine how CPU-intensive this query may be when executed a lot of times in a very short period …

Another point that quickly puts us on the right track is that the comparison is done in the Predicate section of the index seek operator, meaning it is evaluated as a residual predicate after the seek predicate. Thus, the index seek operator behaves much like an index scan: it starts at the first page of the qualifying range and keeps reading, applying the residual predicate to every row it encounters.

blog 129 - 3- query plan index seek operator

Referring to the IO statistics output above, we may notice the operator read 17152 pages, and a quick look at the corresponding index physical stats allows us to safely assume that the storage engine read a big part of the index in this case.

SELECT 
	index_id,
	partition_number,
	index_level,
	page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('TABLE'), NULL, NULL, 'DETAILED')
WHERE index_id = 10

 

Ultimately, we may also benefit from the undocumented trace flag 9130, which highlights the issue in an obvious way by showing a Filter operator (the Predicate section of the index seek operator pop-up) after the index seek operator itself (reading from right to left). From a storage perspective, we are far from the picture the plan first exposed.

blog 129 - 5- TF9130

SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 0 ms.
Table 'table'. Scan count 1, logical reads 6, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 1 ms.

 

OK, my problem was fixed, but let's jump quickly to another interesting point. Previously, I used different tools and tricks to identify the “residual” part of the seek operation. But do I really have to go through all this stuff to identify this kind of issue? Is there an easier way? The good news is yes, if you run SQL Server 2012 SP3+, SQL Server 2014 SP2+ or SQL Server 2016+, because the SQL Server team added runtime information to the showplan.

In my context, it drastically simplifies the analysis because runtime statistics are directly exposed per operator. I got interesting information such as ActualRowsRead (3104706 rows) and ActualLogicalReads (17149 pages).
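A quick way to get these per-operator counters is to request the actual execution plan as XML and look at the RunTimeInformation element of the index seek operator. A sketch, reusing the statement from the beginning of the post:

SET STATISTICS XML ON;

declare @P0 datetime2 = GETDATE(),
       @P1 int = 1,
       @P2 nvarchar(4000) = '005245';

UPDATE
	[TABLE]
SET
	[_DATE] = @P0
WHERE
	[_ID] = @P1
    AND
    [_IDENTIFIER] = @P2
;

SET STATISTICS XML OFF;

The returned XML contains the same RunTimeCountersPerThread attributes as the extract below.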

Note that if we refer to the IO statistics from the first query where I got 17152 pages for the table TABLE, the missing 3 pages are in fact on the update index operator.

<RelOp AvgRowSize="36" 
						   EstimateCPU="3.28721" 
						   EstimateIO="12.1472" 
						   EstimateRebinds="0" 
						   EstimateRewinds="0" 
						   EstimatedExecutionMode="Row" 
						   EstimateRows="1" 
						   LogicalOp="Index Seek" 
						   NodeId="3" 
						   Parallel="false" 
						   PhysicalOp="Index Seek" 
						   EstimatedTotalSubtreeCost="15.4344" 
						   TableCardinality="5976470">
                      <OutputList>
                        <ColumnReference 
							Database="[DB]" 
							Schema="[dbo]" 
							Table="[TABLE]" 
							Column="_DELIVERED" />
                      </OutputList>
                      <RunTimeInformation>
                        <RunTimeCountersPerThread Thread="0" 
												  ActualRows="1" 
												  ActualRowsRead="3104706" 
												  ActualEndOfScans="1" 
												  ActualExecutions="1" 
												  ActualElapsedms="1209" 
												  ActualCPUms="747" 
												  ActualScans="1" 
												  ActualLogicalReads="17149" 
												  ActualPhysicalReads="1" 
												  ActualReadAheads="17141" 
												  ActualLobLogicalReads="0" 
												  ActualLobPhysicalReads="0" 
												  ActualLobReadAheads="0" />
                      </RunTimeInformation>

 

Finally, if, like me, you are an enthusiast of the famous Plan Explorer tool for troubleshooting execution plans, runtime statistics have also been added to this tool accordingly. The warning icons and corresponding pop-ups alert us very quickly to the residual IO issue, as shown below:

blog 129 - 6- PlanExplorer execution plan

blog 129 - 6- PlanExplorer

Happy troubleshooting!

 

The article Index seek operator and residual IO appeared first on Blog dbi services.

Virtual Workshop - Security Monitoring & Analysis (SMA) - Configuration & Compliance ...

Security Monitoring and Analysis (SMA) and Configuration Compliance Cloud Services @OMC Tuesday, February 20, 2018 ...

Categories: DBA Blogs

Oracle Demonstrates Advances in Autonomous Cloud, Extending Autonomous Capabilities Across Entire Cloud Platform

Oracle Press Releases - Mon, 2018-02-12 07:04
Press Release
Oracle Demonstrates Advances in Autonomous Cloud, Extending Autonomous Capabilities Across Entire Cloud Platform
Self-driving services use AI and machine learning to help organizations lower cost, reduce risk, accelerate innovation, and get predictive insights

Oracle CloudWorld, NEW YORK—Feb 12, 2018

Oracle President of Product Development Thomas Kurian demonstrated the latest advances in Oracle Cloud Platform, expanding its Oracle Cloud Platform Autonomous Services beyond the Oracle Autonomous Database, to make all Oracle Cloud Platform services self-driving, self-securing and self-repairing. With its enhanced suite of autonomous Cloud Platform services, Oracle is setting a new industry standard for autonomous cloud capabilities. Oracle is applying AI and machine learning to its entire next-generation Cloud Platform services to help customers lower cost, reduce risk, accelerate innovation, and get predictive insights.

As organizations focus on delivering innovation fast, they want a secure set of comprehensive, integrated cloud services to build new applications and run their most demanding enterprise workloads. Only Oracle’s cloud services can automate key operational functions like tuning, patching, backups and upgrades while running to deliver maximum performance, high availability, and secure enterprise IT systems. In addition, to accelerate innovation and smarter decision making, Oracle Cloud Platform is incorporating additional autonomous capabilities specific to application development, mobile and bots, app and data integration, analytics, security and management. 

“The future of tomorrow’s successful enterprise IT organization is in full end-to-end automation,” said Kurian. “At Oracle, we are making this a reality. We are weaving autonomous capabilities into the fabric of our cloud to help customers safeguard their systems, drive innovation faster, and deliver the ultimate competitive advantage with smarter real-time decisions.”

Oracle’s autonomous capabilities are integral to the entire Oracle Cloud Platform, including the world’s first autonomous database unveiled at Oracle OpenWorld. The Oracle Autonomous Database uses advanced AI and machine learning to eliminate human labor, human error and manual tuning delivering unprecedented availability, high performance and security at a much lower cost. Multiple autonomous database services, each tuned to a specific workload, will be available in 2018, including Oracle Autonomous Data Warehouse Cloud Service for analytics, Oracle Autonomous Database OLTP for transactional and mixed workloads, and Oracle Autonomous NoSQL Database for fast, massive-scale reads and writes.

In addition to the Oracle Autonomous Database, Oracle Cloud Platform autonomous capabilities for application development, mobile and bots, integration, analytics, security and system management, are scheduled to be available in the first half of calendar year 2018. Oracle Cloud Platform services all share foundational autonomous capabilities including:

  • Self-Driving to Lower Costs and Increase Productivity: Eliminate human labor to provision, secure, monitor, backup, recover and troubleshoot. Automatically upgrade and patch itself while running. Instantly grow and shrink compute or storage without downtime.
  • Self-Securing to Lower Risk: Protect from external attacks and malicious internal users. Automatically apply security updates while running to protect against cyberattacks, and automatically encrypt all data.
  • Self-Repairing for Higher Availability: Provide automated protection from all planned and unplanned downtime with up to 99.995 percent availability, resulting in less than 2.5 minutes of downtime per month including planned maintenance.
 

Examples of additional autonomous capabilities being added to functional areas across the rest of the Oracle Cloud Platform include: 

Application Development

  • Automated artifact discovery, dependency management, and policy-based dependency updates increasing code quality and developer productivity
  • Automated identification and remediation of security issues throughout the CI/CD pipeline significantly reducing security risks
  • Automated code generation with single button deployment enabling rapid application development even by line of business users
 

Mobile and Bots

  • Self-learning chatbots observing interaction patterns and preferences to automate frequently performed end-user actions freeing up time for higher productivity tasks
  • Unsupervised, smart bots using machine learning to learn from user conversations enabling fluid, contextual conversations
  • Automated caching of API calls to the nearest data center in real time for lowest latency responses based on end user location
 

Application and Data Integration

  • Self-defining integrations automate business processes across different SaaS and on-premises apps
  • Self-defining data flows with automated data lake and data prep pipeline creation for ingesting data (streaming and batch)
 

Analytics

  • Automated data discovery and preparation
  • Automated analysis for key findings along with visualization and narration delivering quicker real time insights
 

Security and Management

  • Machine learning-driven user and entity behavior analytics to automatically isolate and eliminate suspicious and malicious users
  • Preventative controls to intercept data leaks across structured and unstructured data repositories
  • Unified data repository across log, performance, user experience and configuration data with applied AI/ML, eliminating need to set and manage performance and security monitoring “metadata” such as thresholds, server topology, and configuration drift
 

Oracle today also demonstrated a single Oracle Digital Assistant for users to interact across Oracle’s SaaS and PaaS services including analytics. Oracle Digital Assistant provides centralized connection for the user to converse across the user’s CRM, ERP, HCM, custom applications and business intelligence data and uses AI to intelligently correlate data and automate user behavior. Oracle Digital Assistant capabilities include:

  • Integration to speech-based devices like Amazon Echo (Alexa), Apple Siri, Google Home and Speech, Harman Kardon (Cortana), and Microsoft Cortana
  • Deep neural net based machine learning algorithms to process the message from the voice based devices to understand end user input and take action
  • Intelligent routing to the Bot with the knowledge to process the end user input
  • Deep insights into user behavior, context, preferences and routines that is used by the Oracle Digital Assistant to self-learn to recommend and automate across all data sets on behalf of the user
 

“Platform as a Service has become a critical component of the cloud delivery model to help drive business agility and innovation for enterprises,” said Saurabh Sharma, Principal Analyst, Ovum. “With Oracle revealing its AI and machine learning-based Autonomous PaaS, Oracle is bringing productivity gains, cost reduction and human error reduction to the forefront of enterprises looking for possible ways to drive faster innovation. Oracle Cloud Platform’s autonomous PaaS capabilities will let enterprises scale, back-up, upgrade, diagnose, correct and secure enterprise PaaS cloud services.”

Contact Info
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
Greg Lunsford
Oracle
+1.650.506.6523
greg.lunsford@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

Greg Lunsford

  • +1.650.506.6523

Oracle Protects Customers with Industry’s Strongest and Most Comprehensive Cloud Guarantee

Oracle Press Releases - Mon, 2018-02-12 07:00
Press Release
Oracle Protects Customers with Industry’s Strongest and Most Comprehensive Cloud Guarantee
Only cloud provider offering SLAs for enterprise-grade cloud performance, manageability, and availability

Oracle CloudWorld, NEW YORK—Feb 12, 2018

Oracle today expanded the breadth and depth of its enterprise service level agreements (SLAs), delivering a superior experience for customers and helping enterprises that want to shift critical workloads to the cloud today.  Building on the strength of its recently announced, industry-leading 99.995 percent availability guarantee for Oracle Autonomous Database, Oracle today unveiled the industry’s first end-to-end financially backed cloud warranty for Infrastructure-as-a-Service (IaaS).  With these comprehensive SLAs, Oracle is now the only cloud infrastructure provider offering guaranteed service levels across performance, manageability, and availability: the three key characteristics defining how enterprises measure cloud infrastructure providers.

“No cloud provider in the world can match what Oracle guarantees,” said Thomas Kurian, President of Product Development, Oracle. “Our competitors offer narrow commitments and countless exclusions in fine print while Oracle’s SLAs deliver an industry first: guaranteed performance, manageability and availability in the cloud.”

Enterprise customers require critical business applications to perform at consistent and reliable levels.  For example, they need to know back office applications are not going to bog down as they are trying to close the quarter or doing strategic analysis. In addition, enterprises expect to have the ability to immediately add resources if they need extra compute power for urgent business intelligence or to handle spikes in demand.

Oracle’s new SLAs are an integral part of the company’s strategy to deliver the best cloud infrastructure for enterprise production workloads. These industry-first SLAs provide reassurance to customers that want to shift workloads to the cloud and require not only continual availability, but also consistent performance and the ongoing ability to manage and modify the cloud infrastructure that runs their mission-critical applications and databases. In addition, Oracle plans to extend these SLAs to all of its Oracle Cloud Platform Autonomous Services including those for database, application development, mobile and bots, application and data integration, analytics, security and management.

“Customers expect service level commitments for uptime to mean that their applications are not only available, but manageable and performing as expected, regardless of where that application may be located,” said Al Gillen, GVP, Software Development and Open Source, IDC. “Unfortunately, many cloud SLAs don’t make that broad commitment. Oracle’s revised SLAs provide customers the guarantees they need to allow them to run mission-critical enterprise applications in cloud environments with confidence.”

Under terms of the new performance SLA, Oracle Cloud Infrastructure guarantees it will deliver more than 90 percent of published performance every day in a given month.  If it falls below that level for even as few as 44 minutes a month, customers may claim service credits according to Oracle’s terms of service. By contrast, other vendors offer no guarantees around performance. Their cloud resources can be delivering 1 percent of published performance targets, and the customer must continue to pay full price for the degraded services.  As the only cloud infrastructure provider guaranteeing performance, manageability, and availability within its SLAs, Oracle provides the most protection for customers and meets the needs of enterprises who want to shift critical workloads to the cloud today.

N2N Services is onboarding production customers of Illuminate, their API management platform and cloud integration gateway, to Oracle Cloud Infrastructure. “With prior cloud providers, we had limited leverage when it came to dedicated infrastructure,” said Kiran Kodithala, chief executive officer, N2N Services. “With Oracle Cloud Infrastructure, we not only have the ability to have dedicated bare metal infrastructure, but we also have an end-to-end cloud service guarantee, ensuring we have full access and control over our cloud infrastructure services as well as consistent performance.”

Contact Info
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
Greg Lunsford
Oracle
+1.650.506.6523
greg.lunsford@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

Greg Lunsford

  • +1.650.506.6523

Oracle Cloud Growth Driving Aggressive Global Expansion

Oracle Press Releases - Mon, 2018-02-12 06:15
Press Release
Oracle Cloud Growth Driving Aggressive Global Expansion
Significantly expands its current regional cloud infrastructure footprint to deliver Oracle Cloud services to an increasing number of customers and partners

Oracle CloudWorld, NEW YORK—Feb 12, 2018

Driven by strong customer demand for its cloud services worldwide, Oracle today announced plans to significantly expand its modern cloud infrastructure footprint. The company’s rapid expansion plans include the opening of 12 new datacenter regions and an increase in the breadth and depth of Oracle Cloud services available across Asia, Europe, and the Americas. With this news, Oracle continues its industry leadership position, delivering a complete and integrated portfolio of cloud services (SaaS, PaaS, and IaaS) and new services in security, Blockchain, and Artificial Intelligence.

“The future of IT is autonomous. With our expanded, modern data centers, Oracle is uniquely suited to deliver the most autonomous technologies in the world,” said Oracle CEO Mark Hurd. “As we invest, our margins will continue to expand. And with our global datacenter expansion, we are able to help customers lower IT costs, mitigate risks and compete like they never have before.”

The regional expansion of Oracle’s cloud footprint will include locations in Asia including China, India, Japan, Saudi Arabia, Singapore, and South Korea; Europe including Amsterdam and Switzerland; and North America including two in Canada and two new US locations to support U.S. Department of Defense workloads.

Since launching Oracle Cloud, the company has seen an ever-growing set of customers, across the Global 2000, as well as small and medium businesses. Organizations continue to turn to Oracle at record-rates to build, deploy, and extend game-changing applications and run business-critical workloads in a low-latency, highly available, reliable and secure cloud environment. Today, customers in more than 195 countries are running their most demanding applications on Oracle Cloud Platform and Oracle Cloud Infrastructure.

Contact Info
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
Greg Lunsford
Oracle
+1.650.506.6523
greg.lunsford@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

Greg Lunsford

  • +1.650.506.6523

Oracle Cloud Infrastructure services and third-party backup tools (CloudBerry)

Syed Jaffar - Mon, 2018-02-12 02:46
As part of its Cloud Infrastructure services, Oracle offers the following persistent, I/O-intensive block storage and high-throughput storage options, which are manageable through the console and the CLI:

  • Oracle Cloud Infrastructure Block Volumes
  • Oracle Cloud Infrastructure Object Storage
  • Oracle Cloud Infrastructure Archive Storage
  • Oracle Cloud Infrastructure File Storage
  • Oracle Cloud Infrastructure Data Transfer Service
To learn more about the details, features, pricing, and so on, visit the URL below:
 https://cloud.oracle.com/en_US/infrastructure/storage

One of our customers was struggling to delete backup data from the Oracle Cloud to avoid the charges. Oracle Support suggested using the CloudBerry tools for easier management. Below is an excerpt from the CloudBerry website:

"With CloudBerry Backup you can use Oracle Storage Cloud Service as a cost-effective, remote backup solution for your enterprise data and applications. By backing up your data and applications to Oracle Storage Cloud Service, you can avoid large capital and operating expenditures in acquiring and maintaining storage hardware. By automating your backup routine to run at scheduled intervals, you can further reduce the operating cost of running your backup process. In the event of a disaster at your site, the data is safe in a remote location, and you can restore it quickly to your production systems"

CloudBerry offers the following tools for Oracle Cloud:
  • CloudBerry Explorer
  • CloudBerry Backup
  • CloudBerry Managed Backup
  • CloudBerry Drive
 Visit their website to explore more about these tools:

https://www.cloudberrylab.com/solutions/oracle-cloud

We then used the CloudBerry Backup tool (15-day free trial) to move the backup files out of the Oracle cloud storage and then removed them from the cloud.

You may download the 15-day free trial and give it a try.




Oracle Utilities Customer To Meter V2.6.0.1.0 is now available

Anthony Shorten - Sun, 2018-02-11 21:19

With the release of Oracle Utilities Customer Care and Billing V2.6.0.1.0, the Oracle Utilities Customer To Meter product has also been updated to V2.6.0.1.0. This release is now available from Oracle Software Delivery Cloud.

The release notes available from those download sites contain a full list of new, updated and deprecated functionality available for Oracle Utilities Customer To Meter V2.6.0.1.0 and Oracle Utilities Application Framework V4.3.0.5.0. Please refer to these documents for details.

The documentation also covers upgrading from previous versions of Oracle Utilities Customer Care And Billing as well as Oracle Utilities Customer To Meter V2.6.0.0.0.

public-yum.oracle.com / yum.oracle.com now support https

Wim Coekaerts - Sun, 2018-02-11 13:06

Might have taken us a while but you can now use https in your .repo files to connect to our yum repositories.

We will transition the repo files we ship over time but we don't want to break people that have customizations.

So in the meantime, if you have repo files in /etc/yum.repos.d that point to http://yum.oracle.com or http://public-yum.oracle.com you can just do a search/replace.

Something like:

sed 's/http:\/\/public-yum.oracle.com/https:\/\/public-yum.oracle.com/g; s/http:\/\/yum.oracle.com/https:\/\/yum.oracle.com/g' public-yum-ol7.repo > public-yum-ol7.repo.new

and then replace the repo file.
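If you want to double-check afterwards that nothing still points to plain http, something like this will do:

grep -rn "baseurl=http://" /etc/yum.repos.d/
yum clean all
yum repolist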

Goldengate REPORTING

Michael Dinh - Sun, 2018-02-11 08:52

There were performance issues due to a new table being introduced to replication, and I was asked to gather the number of DMLs on the table for one day.

Using DBA_TAB_MODIFICATIONS did not meet the requirement since statistics had been gathered about a week ago, so the counters were inflated well beyond a single day's activity.

The next thought was: why not use GoldenGate, since it captures all the changes, and report on that.

This may or may not provide the required data if reporting is not properly configured.

Some will tell you never to roll over the report, while others roll over the report on a weekly basis.

From my most recent experience, I would roll over the report on a daily basis, because the data can always be aggregated.
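For example, with one report per day, the per-table counters can simply be summed across the rolled-over reports. A quick sketch based on the report layout shown further down:

$ grep -A5 "From Table SCOTT.RATETEST" dirrpt/E_HAWK*.rpt | \
  awk '/inserts:/ {i+=$NF} /updates:/ {u+=$NF} /deletes:/ {d+=$NF} END {print "inserts="i, "updates="u, "deletes="d}'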

Here is an example of what I had to deal with.

$ grep -i report dirprm/e_hawk.prm

REPORTCOUNT EVERY 10 MINUTES, RATE

$ grep “activity since” dirrpt/E*.rpt|sort

E_HAWK0.rpt:Report at 2018-02-09 12:28:49 (activity since 2018-02-08 22:01:15)
--- How are you going to get daily activities accurately from aggregation over 2 months time frame?
E_HAWK1.rpt:Report at 2018-02-08 21:48:37 (activity since 2017-11-27 18:24:26)

Corresponding data from the report.
$ grep -A5 “From Table SCOTT.RATETEST” dirrpt/E_*rpt

E_HAWK0.rpt:Report at 2018-02-09 12:28:49 (activity since 2018-02-08 22:01:15)
dirrpt/E_HAWK0.rpt:From Table SCOTT.RATETEST:
dirrpt/E_HAWK0.rpt-       #                   inserts:       977
dirrpt/E_HAWK0.rpt-       #                   updates:  10439912
dirrpt/E_HAWK0.rpt-       #                   befores:  10439912
dirrpt/E_HAWK0.rpt-       #                   deletes:         0
dirrpt/E_HAWK0.rpt-       #                  discards:         0
--
E_HAWK1.rpt:Report at 2018-02-08 21:48:37 (activity since 2017-11-27 18:24:26)
dirrpt/E_HAWK1.rpt:From Table SCOTT.RATETEST:
dirrpt/E_HAWK1.rpt-       #                   inserts:     87063
dirrpt/E_HAWK1.rpt-       #                   updates: 821912582
dirrpt/E_HAWK1.rpt-       #                   befores: 821912582
dirrpt/E_HAWK1.rpt-       #                   deletes:         0
dirrpt/E_HAWK1.rpt-       #                  discards:         0

How did I end up getting the data?

From the GoldenGate reports above, which is better as they provide the exact data processed by GoldenGate.

I was lucky! The extract had been restarted when the table was added and removed, so the statistics in the latest report covered roughly the period I needed.

Example of Better Configuration:

global_ggenv.inc

STATOPTIONS RESETREPORTSTATS
REPORT AT 00:01
REPORTROLLOVER AT 00:01
REPORTCOUNT EVERY 60 MINUTES, RATE
DISCARDROLLOVER AT 00:01

e_hawk.prm

EXTRACT e_hawk
EXTTRAIL ./dirdat/bb
-- CHECKPARAMS
INCLUDE ./dirprm/global_macro.inc
INCLUDE ./dirprm/global_dbenv.inc
INCLUDE ./dirprm/global_ggenv.inc
Reference:

REPORTROLLOVER

Use the REPORTROLLOVER parameter to force report files to age on a regular schedule, instead of when a process starts. 
For long or continuous runs, setting an aging schedule controls the size of the active report file and provides a more predictable set of archives that can be included in your archiving routine.
Report statistics are carried over from one report to the other. To reset the statistics in the new report, use the STATOPTIONS parameter with the RESETREPORTSTATS option.

DBA_TAB_MODIFICATIONS

INSERTS/UPDATES/DELETES - Approximate number since the last time statistics were gathered.
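For reference, those counters can be read with a query along these lines (a sketch; flush the monitoring info first if you need up-to-date values):

EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

SELECT table_owner, table_name, inserts, updates, deletes, timestamp
FROM   dba_tab_modifications
WHERE  table_owner = 'SCOTT'
AND    table_name  = 'RATETEST';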

Docker-Machine: how to create a docker vm on a remote virtualbox server

Dietrich Schroff - Sun, 2018-02-11 01:37
After doing some first steps with Docker, I wanted to test docker-swarm. Because of the limited resources of my notebook, I was looking for a Linux with a minimal footprint. In the context of setting up VMs for docker-swarm, I found a lot of articles about doing that with the tool docker-machine.
It sounds like this tool can create VMs with just one command (here is the documentation).

So let's give it a try:
(You have to install docker-machine first, but you do not need to install docker itself)
~$ docker-machine create --driver virtualbox test
Creating CA: /home/schroff/.docker/machine/certs/ca.pem
Creating client certificate: /home/schroff/.docker/machine/certs/cert.pem
Running pre-create checks...
(test) Image cache directory does not exist, creating it at /home/schroff/.docker/machine/cache...
(test) No default Boot2Docker ISO found locally, downloading the latest release...
(test) Latest release for github.com/boot2docker/boot2docker is v17.11.0-ce
(test) Downloading /home/schroff/.docker/machine/cache/boot2docker.iso from https://github.com/boot2docker/boot2docker/releases/download/v17.11.0-ce/boot2docker.iso...
(test) 0%....10%....20%....30%....40%....50%....60%....70%....80%....90%....100%
Creating machine...
(test) Copying /home/schroff/.docker/machine/cache/boot2docker.iso to /home/schroff/.docker/machine/machines/test/boot2docker.iso...
(test) Creating VirtualBox VM...
(test) Creating SSH key...
(test) Starting the VM...
(test) Check network to re-create if needed...
(test) Found a new host-only adapter: "vboxnet0"
(test) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env test

Wow. After this command, a new machine shows up inside my VirtualBox with 1 GB RAM, a 20 GB HDD (dynamically allocated) and 2 network adapters (1x NAT, 1x host-only).




But it is not possible to create VMs on a remote VirtualBox server: the CLI does not provide a way to specify a remote server IP.

But for some other environments it is possible to deploy VMs on a remote site:

--vmwarevsphere-vcenter: IP/hostname for vCenter (or ESXi if connecting directly to a single host)

If your preferred virtualization engine supports remote servers, you can check here.
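As an illustration, creating a machine on a remote vSphere host would look roughly like this (a sketch with placeholder values, based on the vmwarevsphere driver options):

docker-machine create --driver vmwarevsphere \
  --vmwarevsphere-vcenter vcenter.example.com \
  --vmwarevsphere-username 'administrator@vsphere.local' \
  --vmwarevsphere-password 'secret' \
  remote-test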

Nevertheless docker-machine is an excellent tool. If you are interested in creating a swarm, read this tutorial.
The homepage of the OS boot2docker can be found here.

APEX Alpe Adria - A New APEX Conference That You Should Attend

Joel Kallman - Sat, 2018-02-10 13:47


Have you heard of APEX Alpe Adria?  It's the latest "All APEX" conference, joining APEX World and APEX Connect as high-quality conferences dedicated to Oracle APEX developers and related technologies.

APEX Alpe Adria is a very modestly-priced one-day conference occurring on April 20, 2018 in Graz, Austria.  The list of speakers that they have organized is unbelievably impressive - they are all very knowledgeable & experienced, as well as very polished presenters.  All presentations will be in English.  Also, the evening before the conference, there will be an Ask an ACE session, where you can ask any question to a panel of highly experienced Oracle ACEs.

APEX Alpe Adria is the creation of three different Oracle partners, Dario Bilić from BiLog d.o.o., Aljaž Mali from Abakus Plus d.o.o., and Peter Raganitsch from FOEX Gmbh.  As Peter told me, they are not organizing this conference for commercial purposes.  Instead, their primary goal is to bring a high-quality dedicated APEX conference to a region where there is significant demand.  And they've developed a 100% APEX Conference - everyone and everything is driven by Oracle APEX:  the conference Web site, abstract submission, abstract voting, the back-end administration apps, everything.  The conference is already a great testament of what is possible with the Oracle Database & Oracle APEX.

While the location primarily caters to Austria, Slovenia and Croatia, the appeal stretches far beyond the immediate region.  As you can see in the map below, Graz is a short 500 miles (1.5 hour flight) from many places in central Europe.  And even though it may be a bit longer to travel for those outside the circle, I'm hoping that this conference will attract many of our other customers and partners from Eastern Europe and the Middle East.  Tickets have already been sold to conference attendees from Austria, Czech Republic, Germany, United Kingdom, Croatia, Macedonia, Russia, Slovenia, and the USA.  If you want an easy way to get connected and immersed into the global APEX community, please attend APEX Alpe Adria.  It's that simple.

Patrick Wolf & Christian Neumueller from the Oracle APEX Development team will be joining me to kick off this inaugural conference and show our support for this growing APEX community.  We hope to see you there!


Active Schema objects

Tom Kyte - Sat, 2018-02-10 04:46
How to find a list of active views in an Oracle schema? The dba_objects, all_views and other sys tables include the dropped views as well, but we would like to get the active list.
Categories: DBA Blogs

Windows Server – Service not starting with ‘Error 1067: The process terminated unexpectedly’

Yann Neuhaus - Sat, 2018-02-10 02:30

Some time ago, we were installing Migration Center (from fmeAG) on a Windows Server, and at the end of the installation, the service named Migration Center Job Server is configured and finally started. Unfortunately, this didn’t go well and the start command wasn’t working at all. We were using a dedicated technical account (an AD account) to do the installation and to run this service. This is the error we got:

fmeAG_MC_Service_Start_Fail_Error

 

The error code 1067 means ‘ERROR_PROCESS_ABORTED’. This was the first time I saw this error, ever, so I wanted to know more about it. I performed a lot of tests on this environment to try to narrow down the issue and the following tests in particular were quite relevant:

  • Changing the Log On As user to something else
    • The service is able to start
    • It means that the issue is somehow linked to this particular technical user
  • Checking the ‘Log On As a service’ Local Policies
    • The technical user is listed and allowed to start the service so there is no issue there
  • Using the technical user to start a service on another Windows Server
    • The service is able to start
    • It means that the issue is not linked to the technical user globally

 

So with all this information, it appeared that the issue was linked to this particular technical user, but only locally on this Windows Server… When working with AD accounts, it is always possible to face some issues with the local profile vs the domain profile (thanks Stéphane Haby for pointing that out to me, I’m not a Windows expert ;)), so I tried to work on that and, after some time, I found a workaround to this issue. The workaround is simply to delete the local profile…

Since this is an AD account (which I will call <SYS_AD_ACCOUNT> below), it is possible to remove the local profile; it will just be recreated automatically when needed. You just need to remove it properly and, for that, there are particular steps:

  1. Login to the Windows Server with another administrator account
  2. Log out all <SYS_AD_ACCOUNT> sessions: stop all services running with this account, all processes (from the task manager, there should be nothing on the “users” tab), aso… Alternatively, you can also disable the services (or manual mode) and then reboot the Windows Server
  3. Delete the complete folder C:\Users\<SYS_AD_ACCOUNT>. If you already tried this but did not remove the registry properly, you might end up with a C:\Users\TEMP folder… If this is the case, remove it now too.
  4. Delete the registry key matching <SYS_AD_ACCOUNT> (check the “ProfileImagePath” parameter) under “HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList” (a command sketch to locate this key is shown after this list)
  5. Reboot the Windows Server
  6. Login to the Windows Server with the administrator account used for the step 1.
  7. Set <SYS_AD_ACCOUNT> for the “Log On As” of the target Service
  8. Start the target Service => It should now be working
  9. Login to the Windows Server with the <SYS_AD_ACCOUNT> account
  10. Open a cmd prompt => The current folder should be C:\Users\<SYS_AD_ACCOUNT> and not C:\Users\TEMP
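For reference, here is a quick sketch of the commands I would use from an elevated command prompt for steps 3 and 4 (replace <SYS_AD_ACCOUNT> accordingly):

REM find the ProfileList entry whose ProfileImagePath points to the account (step 4)
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s /f "<SYS_AD_ACCOUNT>" /d

REM once all sessions and services are stopped, remove the local profile folder (step 3)
rmdir /s /q "C:\Users\<SYS_AD_ACCOUNT>"

REM then delete the key found above, either with regedit or with reg delete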

 

I am not saying that this workaround will fix each and every issue linked to an error code 1067… Sometimes, this error will be linked to the fact that the service cannot find its executable, for example. But in my case, it fixed the issue, so if you are running out of options, maybe you can just give it a try ;).

 

 

The article Windows Server – Service not starting with ‘Error 1067: The process terminated unexpectedly’ appeared first on Blog dbi services.

Documentum – DA 7.3 showing java.lang.NullPointerException on every action

Yann Neuhaus - Sat, 2018-02-10 01:45

Last year, we started to upgrade some Documentum Administrator installations from 7.2 to 7.3 and, directly after, we started to see some NullPointerExceptions in the log files. We are using DA 7.3 on some recent WebLogic Server versions (12.1.3, 12.2.1.2). We usually deploy DA as a WAR file (so not exploded) with just the dfc.properties, keystores and logs outside of it. This is the kind of error we started to see in the startup log file (the nohup log file in our case) as soon as it was upgraded to 7.3:

java.lang.NullPointerException
        at java.io.FileInputStream.<init>(FileInputStream.java:130)
        at java.io.FileInputStream.<init>(FileInputStream.java:93)
        at com.documentum.web.form.WebformTag.fetchExtnNativeVersion(WebformTag.java:282)
        at com.documentum.web.form.WebformTag.renderExtnJavaScript(WebformTag.java:268)
        at com.documentum.web.form.WebformTag.doStartTag(WebformTag.java:159)
        at jsp_servlet._custom._jsp.__loginex._jsp__tag3(__loginex.java:1687)
        at jsp_servlet._custom._jsp.__loginex._jspService(__loginex.java:272)
        at weblogic.servlet.jsp.JspBase.service(JspBase.java:35)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
        at weblogic.servlet.internal.ServletStubImpl.onAddToMapException(ServletStubImpl.java:489)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:376)
        ...

 

Lines 7 and 8 of the stack changed, depending on the action that was being performed. For example, an access to the login page would print the following:

        at jsp_servlet._custom._jsp.__loginex._jsp__tag3(__loginex.java:1687)
        at jsp_servlet._custom._jsp.__loginex._jspService(__loginex.java:272)

 

Then once logged in, you would see each component printing the same kind of stack, with the following lines 7 and 8:

        at jsp_servlet._webtop._main.__mainex._jsp__tag0(__mainex.java:286)
        at jsp_servlet._webtop._main.__mainex._jspService(__mainex.java:116)

 

        at jsp_servlet._webtop._classic.__classic._jsp__tag0(__classic.java:408)
        at jsp_servlet._webtop._classic.__classic._jspService(__classic.java:112)

 

        at jsp_servlet._webtop._titlebar.__titlebar._jsp__tag0(__titlebar.java:436)
        at jsp_servlet._webtop._titlebar.__titlebar._jspService(__titlebar.java:175)

 

        at jsp_servlet._webtop._messagebar.__messagebar._jsp__tag0(__messagebar.java:145)
        at jsp_servlet._webtop._messagebar.__messagebar._jspService(__messagebar.java:107)

 

aso…

We were working with OpenText on this issue. As mentioned in the stack trace, this happens because DA is trying to fetch the “ExtnNativeVersion” property. This is a property file located at wdk/extension/client/EMC/ContentXfer/com.emc.wdk.native/1. Unfortunately, when DA 7.3 tries to locate this file, it fails even though the file is really present… It fails because DA is deployed as a WAR file (an archive) and therefore the path to the file is wrong. I suspect this is something that Documentum changed recently, using getRealPath(). To change the behavior of the getRealPath function, you have to set the property “Archived Real Path Enabled” to true so that it returns the canonical path of the file…

So to remove these exceptions, you have two options:

I. At the domain level:
  1. Login to the WebLogic Administration Console using your weblogic account
  2. Navigate to the correct page: DOMAIN > Configuration > Web Applications
  3. Click on the ‘Lock & Edit’ button
  4. Check the ‘Archived Real Path Enabled’ checkbox ( = set it to true)
  5. Click on the ‘Save’ and then ‘Activate Changes’ buttons
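For reference, these steps should end up adding the following kind of entry to the domain’s config.xml (a sketch; the exact surrounding elements depend on your domain configuration):

<domain>
   ...
   <web-app-container>
      <show-archived-real-path-enabled>true</show-archived-real-path-enabled>
   </web-app-container>
   ...
</domain>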

Since this is written to the domain-wide config.xml, the setting is enabled for the whole domain. As often, I would not recommend that, but rather configuring this at the application level, because you might have other applications that do NOT want this setting to be set to true… And that is why you have the option number two:

II. At the application level:

The high level steps to do that would be the following ones:

  1. Extract the weblogic.xml file from the application war file
  2. Add the ‘Archived Real Path Enabled’ property in it and set it to true
  3. Repackage the war file and redeploy it

This is pretty simple:

[weblogic@weblogic_server_01 ~]$ cd $APPS_HOME
[weblogic@weblogic_server_01 apps]$ jar -xvf da.war WEB-INF/weblogic.xml
 inflated: WEB-INF/weblogic.xml
[weblogic@weblogic_server_01 apps]$
[weblogic@weblogic_server_01 apps]$ vi WEB-INF/weblogic.xml
[weblogic@weblogic_server_01 apps]$
[weblogic@weblogic_server_01 apps]$ tail -6 WEB-INF/weblogic.xml

   <container-descriptor>
      <show-archived-real-path-enabled>true</show-archived-real-path-enabled>
   </container-descriptor>

</weblogic-web-app>
[weblogic@weblogic_server_01 apps]$
[weblogic@weblogic_server_01 apps]$ jar -uvf da.war WEB-INF/weblogic.xml
adding: WEB-INF/weblogic.xml(in = 989) (out= 398)(deflated 59%)
[weblogic@weblogic_server_01 apps]$

 

Then, you just need to update the deployment in the WebLogic Administration Console and that’s it: the exceptions should be gone. As far as I’m aware, these exceptions did not have any impact on the proper behavior of Documentum Administrator, but it is still very ugly to have hundreds of them in the log file…

 

 

The article Documentum – DA 7.3 showing java.lang.NullPointerException on every action appeared first on Blog dbi services.

WebLogic – SSO/Atn/Atz – Infinite loop

Yann Neuhaus - Sat, 2018-02-10 01:00

This blog will be the last of my series around WebLogic SSO (how to enable logs, 403 N°1, 403 N°2), which I started some weeks ago. Several months ago, on a newly built High Availability environment (2 WebLogic Servers behind a Load Balancer), an application team was deploying their D2 client as usual. After the deployment, it appeared to the tester that Single Sign-On was not working when using the SSO URL through the Load Balancer. For this user, the browser was stuck in an infinite loop between the LB URL and the SAML2 Partner URL. When I tried to replicate the issue, the SSO was working fine for me…

The only possible explanation for an infinite loop affecting some users but not all is an issue with the SSO setup/configuration on only one of the two WebLogic Servers… Indeed, the Load Balancer (with sticky sessions) probably redirected the tester to the WLS with the issue, while my session was redirected to the other one. Enabling the debug logs (on both WLS) quickly confirmed that all communications going through the first WebLogic Server were working properly (with SSO), while the second WebLogic Server had the infinite loop issue.

The logs from WebLogic Servers 1 and 2 were identical up to a certain point (basically all the SAML2 part), except for some elements such as the local hostname, the date/hour and some IDs (like java class instances, aso…). This is the content common to both WLS logs:

<Nov 21, 2017 9:25:39 AM UTC> <Debug> <SecuritySAML2Service> <SAML2Filter: Processing request on URI '/D2/X3_Portal.jsp'>
<Nov 21, 2017 9:25:39 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): request URI is '/D2/X3_Portal.jsp'>
<Nov 21, 2017 9:25:39 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): request URI is not a service URI>
<Nov 21, 2017 9:25:39 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): returning service type 'SPinitiator'>
<Nov 21, 2017 9:25:39 AM UTC> <Debug> <SecuritySAML2Service> <SP initiating authn request: processing>
<Nov 21, 2017 9:25:39 AM UTC> <Debug> <SecuritySAML2Service> <SP initiating authn request: partner id is null>
<Nov 21, 2017 9:25:39 AM UTC> <Debug> <SecuritySAML2Service> <SP initiating authn request: use partner binding HTTP/POST>
<Nov 21, 2017 9:25:39 AM UTC> <Debug> <SecuritySAML2Service> <post from template url: null>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <SAML2Servlet: Processing request on URI '/saml2/sp/acs/post'>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): request URI is '/saml2/sp/acs/post'>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): service URI is '/sp/acs/post'>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): returning service type 'ACS'>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <Assertion consumer service: processing>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <get SAMLResponse from http request:PBvbnNSJ1cXMlFtZzaXM6bmHhtbG5zm46NhwOlJlOnNhbWxHbWxIb2Fc3wP6dGM6
U0FiB4bWxuczp4NTAwPSJ1cm46b2FzNTDoyLjA6YXNzZXJ0aW9uIaXM6bmFtZXM6
U0FdG9jb2wiIHhtbG5zOmRzaWc9Imh0dHA6Ly93NTDoyLjA6cHJvd3cudzMub3Jn
aHR0cDoa5vcmcvMjAwMS9W5zdGFuY2vL3d3dy53MyYTUxTY2hlbWEtUiIERlc3Rp
MWNxM2FjNzI1ZDjYmIVhNDM1Zlzc3VlSW5zdGFudD0ijhlNjc3OTkiIEMjAxNy0x
LzINpZyMiIHhtwMDAvMDkveG1sZHbG5DovL3d3dy53My5vczOmVuYz0iaHR0cmcv
...
YWRpb24+P1sOkF1ZGllbHJpY3mPHNhNlUmVzdC9zYW1sOkNvbmRpdGlvbnM+bWw6
YXNzUzpjbYW1YXNpczpuIuMlmVmPnVybjpvDphYxzp0YzpTQU1MOjGFzc2VzOlBh
OjAXh0Pj0OjUyWiI+PHNsOkF1dGhuQhxzYW129udGV4bWw6QXV0aG5Db250ZdENs
UmVnRlepBdXRobkNvbzYWmPjwvc2FtbDF1dGhuU1sOkHQ+PC93RhdGVtZW50Pjwv
c2F9zY25zZtbDpBcW1scDpSZXNwb3NlcnRpb24+PCT4=
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <BASE64 decoded saml message:<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:enc="http://www.w3.org/2001/04/xmlenc#" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:x500="urn:oasis:names:tc:SAML:2.0:profiles:attribute:X500" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" Destination="https://weblogic_server_01/saml2/sp/acs/post" ID="id-HpDtn7r5XxxAQFYnwSLXZmkuVgIHTExrlreEDZ4-" InResponseTo="_0x7258edc52961ccbd5a435fb13ac67799" IssueInstant="2017-11-21T9:25:55Z" Version="2.0"><saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://idp_partner_01/fed/idp</saml:Issuer><dsig:Signature><dsig:SignedInfo><dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><dsig:Reference URI="#id-HpDtn7r5XxxAQFYnwSLXZmkuVgIHTExrlreEDZ4-"><dsig:Transforms><dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/></dsig:Transforms><dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><dsig:DigestValue>YGtUZvsfo3z51AsBo7UDhbd6Ts=</dsig:DigestValue></dsig:Reference></dsig:SignedInfo><dsig:SignatureValue>al8sJwbqzjh1qgM3Sj0QrX1aZjwyI...JB6l4jmj91BdQrYQ7VxFzvNLczZ2brJSdLLig==</dsig:SignatureValue><dsig:KeyInfo><dsig:X509Data><dsig:X509Certificate>MIwDQUg+nhYqGZ7pCgBQAwTTELMAkGA1UEBhMCQk1ZhQ...aATPRCd113tVqsvCkUwpfQ5zyUHaKw4FkXmiT2nzxxHA==</dsig:X509Certificate></dsig:X509Data></dsig:KeyInfo></dsig:Signature><samlp:Status><samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/></samlp:Status><saml:Assertion ID="id-0WrMNbOz6wsuZdFPhfjnw7WIXXQ6k89-1AgHZ9Oi" IssueInstant="2017-11-21T9:25:55Z" Version="2.0"><saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://idp_partner_01/fed/idp</saml:Issuer><dsig:Signature><dsig:SignedInfo><dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><dsig:Reference URI="#id-0WrMNbOz6wsuZdFPhfjnw7WIXXQ6k89-1AgHZ9Oi"><dsig:Transforms><dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/></dsig:Transforms><dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><dsig:DigestValue>7+jZtq8SpY3BKVaFjIFeEJm51cA=</dsig:DigestValue></dsig:Reference></dsig:SignedInfo><dsig:SignatureValue>GIlXt4B4rVFoDJRxidpZO73gXB68Dd+mcpoV9DKrjBBjLRz...zGTDcEYY2MG8FgtarZhVQGc4zxkkSg8GRT6Wng3NEuTUuA==</dsig:SignatureValue><dsig:KeyInfo><dsig:X509Data><dsig:X509Certificate>MIwDQUg+nhYqGZ7pCgBQAwTTELMAkGA1UEBhMCQk1ZhQ...aATPRCd113tVqsvCkUwpfQ5zyUHaKw4FkXmiT2nzxxHA==</dsig:X509Certificate></dsig:X509Data></dsig:KeyInfo></dsig:Signature><saml:Subject><saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">PATOU_MORGAN</saml:NameID><saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"><saml:SubjectConfirmationData InResponseTo="_0x7258edc52961ccbd5a435fb13ac67799" NotOnOrAfter="2017-11-21T9:30:55Z" Recipient="https://weblogic_server_01/saml2/sp/acs/post"/></saml:SubjectConfirmation></saml:Subject><saml:Conditions NotBefore="2017-11-21T9:25:55Z" 
NotOnOrAfter="2017-11-21T9:30:55Z"><saml:AudienceRestriction><saml:Audience>SAML2_Entity_ID_01</saml:Audience></saml:AudienceRestriction></saml:Conditions><saml:AuthnStatement AuthnInstant="2017-11-21T9:25:55Z" SessionIndex="id-oX9wXdpGmt9GQlVffvY4hEIRULEd25nrxDzE8D7w" SessionNotOnOrAfter="2017-11-21T9:40:55Z"><saml:AuthnContext><saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport</saml:AuthnContextClassRef></saml:AuthnContext></saml:AuthnStatement></saml:Assertion></samlp:Response>>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <<samlp:Response> is signed.>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionServiceImpl.assertIdentity(SAML2.Assertion.DOM)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionTokenServiceImpl.assertIdentity(SAML2.Assertion.DOM)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2IdentityAsserterProvider: start assert SAML2 token>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2IdentityAsserterProvider: SAML2IdentityAsserter: tokenType is 'SAML2.Assertion.DOM'>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion signature>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: The assertion is signed.>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: End verify assertion signature>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion attributes>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: End verify assertion attributes>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion issuer>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: End verify assertion issuer>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion conditions>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: End verify assertion conditions>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion subject>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: End verify assertion subject>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2NameMapperCache.getNameMapper: Found name mapper in the cache>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert.processAttributes - processAttrs: false, processGrpAttrs: true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <DefaultSAML2NameMapperImpl: mapName: Mapped name: qualifier: null, name: PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAMLIACallbackHandler: SAMLIACallbackHandler(true, PATOU_MORGAN, null)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionCallbackServiceImpl.assertIdentity>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionCallbackServiceImpl.assertIdentity>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAMLIACallbackHandler: callback[0]: NameCallback: setName(PATOU_MORGAN)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionCallbackServiceImpl.assertIdentity returning PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityCacheServiceImpl.getCachedIdentity(idm:_def_Idmusr:PATOU_MORGAN)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityCacheServiceImpl.getCachedIdentity(idm:_def_Idmusr:PATOU_MORGAN) returning null>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionCallbackServiceImpl.assertIdentity did not find a cached identity.>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.CallbackHandlerWrapper.constructor>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASIdentityAssertionConfigurationServiceImpl.getJAASIdentityAssertionConfigurationName()>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <weblogic.security.service.internal.WLSJAASLoginServiceImpl$ServiceImpl.authenticate>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASLoginServiceImpl.login ClassLoader=sun.misc.Launcher$AppClassLoader@7d4991ad>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASLoginServiceImpl.login ThreadContext ClassLoader Original=com.bea.common.security.utils.SAML2ClassLoader@625ecb90>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASIdentityAssertionConfigurationServiceImpl.getAppConfigurationEntry(com.bea.common.security.internal.service.JAASIdentityAssertionConfigurationServiceImpl0)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASIdentityAssertionConfigurationServiceImpl.getAppConfigurationEntry>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.legacy.service.JAASIdentityAssertionProviderImpl$V1Wrapper.getAssertionModuleConfiguration>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASIdentityAssertionConfigurationServiceImpl$JAASProviderImpl.getProviderAppConfigurationEntry returning LoginModuleClassName=weblogic.security.providers.authentication.LDAPAtnLoginModuleImpl, ControlFlag=LoginModuleControlFlag: sufficient>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.legacy.service.JAASIdentityAssertionProviderImpl$V1Wrapper.getClassLoader>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.wrap LoginModuleClassName=weblogic.security.providers.authentication.LDAPAtnLoginModuleImpl>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.wrap ClassLoader=java.net.URLClassLoader@108c7fe5>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.wrap ControlFlag=LoginModuleControlFlag: sufficient>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.legacy.service.JAASIdentityAssertionProviderImpl$V1Wrapper.getAssertionModuleConfiguration>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASIdentityAssertionConfigurationServiceImpl$JAASProviderImpl.getProviderAppConfigurationEntry returning LoginModuleClassName=weblogic.security.providers.authentication.LDAPAtnLoginModuleImpl, ControlFlag=LoginModuleControlFlag: required>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.legacy.service.JAASIdentityAssertionProviderImpl$V1Wrapper.getClassLoader>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.wrap LoginModuleClassName=weblogic.security.providers.authentication.LDAPAtnLoginModuleImpl>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.wrap ClassLoader=java.net.URLClassLoader@108c7fe5>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.wrap ControlFlag=LoginModuleControlFlag: required>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASLoginServiceImpl.login created LoginContext>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASLoginServiceImpl.login ThreadContext ClassLoader Current=com.bea.common.security.utils.SAML2ClassLoader@625ecb90>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.initialize LoginModuleClassName=weblogic.security.providers.authentication.LDAPAtnLoginModuleImpl>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.initialize ClassLoader=java.net.URLClassLoader@108c7fe5>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.initialize created delegate login module>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <LDAP ATN LoginModule initialized>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.initialize delegated>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.login>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <LDAP Atn Login>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.CallbackHandlerWrapper.handle>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.CallbackHandlerWrapper.handle callbcacks[0] will be delegated>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.CallbackHandlerWrapper.handle callbcacks[0] will use NameCallback to retrieve name>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.CallbackHandlerWrapper.handle will delegate all callbacks>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Atn> <SAMLIACallbackHandler: callback[0]: NameCallback: setName(PATOU_MORGAN)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.CallbackHandlerWrapper.handle delegated callbacks>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.CallbackHandlerWrapper.handle got username from callbacks[0], UserName=PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <LDAP Atn Login username: PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <getUserDNName? user:PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <getDNForUser search("ou=people,ou=internal,dc=company,dc=com", "(&(uid=PATOU_MORGAN)(objectclass=companyperson))", base DN & below)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <getConnection return conn:LDAPConnection {ldaps://ldap.company.com:3636 ldapVersion:3 bindDN:"ou=bind_user,ou=applications,ou=internal,dc=company,dc=com"}>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <ConnSetupMgr><Connecting to host=ldap.company.com, ssl port=3636>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <ConnSetupMgr><Successfully connected to host=ldap.company.com, ssl port=3636>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <Retrieved guid:c6e6f021-2b29d761-8284c0aa-df200d0f>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <DN for user PATOU_MORGAN: uid=PATOU_MORGAN,ou=people,ou=internal,dc=company,dc=com>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <found user from ldap, user:PATOU_MORGAN, userDN=uid=PATOU_MORGAN,ou=people,ou=internal,dc=company,dc=com>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <Retrieved username from LDAP :PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <returnConnection conn:LDAPConnection {ldaps://ldap.company.com:3636 ldapVersion:3 bindDN:"ou=bind_user,ou=applications,ou=internal,dc=company,dc=com"}>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <LDAP Atn Asserted Identity for PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <List groups that member: PATOU_MORGAN belongs to>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <DN for user PATOU_MORGAN: uid=PATOU_MORGAN,ou=people,ou=internal,dc=company,dc=com>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <Retrieved dn:uid=PATOU_MORGAN,ou=people,ou=internal,dc=company,dc=com for user:PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <search("ou=bind_user,ou=applications,ou=internal,dc=company,dc=com", "(&(uniquemember=uid=PATOU_MORGAN,ou=people,ou=internal,dc=company,dc=com)(objectclass=groupofuniquenames))", base DN & below)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <getConnection return conn:LDAPConnection {ldaps://ldap.company.com:3636 ldapVersion:3 bindDN:"ou=bind_user,ou=applications,ou=internal,dc=company,dc=com"}>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <Result has more elements: false>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <search("ou=bind_user,ou=applications,ou=internal,dc=company,dc=com", "(objectclass=groupofURLs)", base DN & below)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <returnConnection conn:LDAPConnection {ldaps://ldap.company.com:3636 ldapVersion:3 bindDN:"ou=bind_user,ou=applications,ou=internal,dc=company,dc=com"}>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <DN for user PATOU_MORGAN: uid=PATOU_MORGAN,ou=people,ou=internal,dc=company,dc=com>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <Retrieved dn:uid=PATOU_MORGAN,ou=people,ou=internal,dc=company,dc=com for user:PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <login succeeded for username PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.login delegated, returning true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.commit>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <LDAP Atn Commit>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <LDAP Atn Principals Added>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.LoginModuleWrapper.commit delegated, returning true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASLoginServiceImpl.login logged in>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASLoginServiceImpl.login subject=Subject:
        Principal: PATOU_MORGAN
        Private Credential: PATOU_MORGAN
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <weblogic.security.service.internal.WLSIdentityServiceImpl.getIdentityFromSubject Subject: 1
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.sign(Principals)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.sign(Principal) Principal=PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.sign(Principal) PrincipalClassName=weblogic.security.principal.WLSUserImpl>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.sign(Principal) trying PrincipalValidator for interface weblogic.security.principal.WLSPrincipal>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.sign(Principal) PrincipalValidator handles this PrincipalClass>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <Generated signature and signed WLS principal PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.sign(Principal) PrincipalValidator signed the principal>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.sign(Principal) All required PrincipalValidators signed this PrincipalClass, returning true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.JAASLoginServiceImpl.login identity=Subject: 1
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <weblogic.security.service.internal.WLSJAASLoginServiceImpl$ServiceImpl.authenticate authenticate succeeded for user PATOU_MORGAN, Identity=Subject: 1
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <weblogic.security.service.internal.WLSJAASLoginServiceImpl$ServiceImpl.authenticate login succeeded and PATOU_MORGAN was not previously locked out>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityCacheServiceImpl.cachedIdentity(Subject: 1
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
),cacheKey= idm:_def_Idmusr:PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <Using redirect URL from request cache: 'https://d2.company.com:443/D2/X3_Portal.jsp'>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <Redirecting to URL: https://d2.company.com:443/D2/X3_Portal.jsp>

 

At this point, the SAML2 exchange is successful and the user has been found in the LDAP. The only things remaining are the validation of the user on the WebLogic side and the access to the application (D2 in this case). This is where the logs of the two WebLogic Servers start to diverge.

On WebLogic Server 1 (no issue), we can see WebLogic validating the user and finally granting access to the application:

<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityCacheServiceImpl.cachedIdentity(Subject: 1
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
),cacheKey= idm:_def_Idmusr:PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <Using redirect URL from request cache: 'https://d2.company.com:443/D2/X3_Portal.jsp'>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecuritySAML2Service> <Redirecting to URL: https://d2.company.com:443/D2/X3_Portal.jsp>

<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <PrincipalAuthenticator.validateIdentity>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <PrincipalAuthenticator.validateIdentity will use common security service>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.validate(Principals)>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.validate(Principal) Principal=PATOU_MORGAN>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.validate(Principal) PrincipalClassName=weblogic.security.principal.WLSUserImpl>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.validate(Principal) trying PrincipalValidator for interface weblogic.security.principal.WLSPrincipal>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.validate(Principal) PrincipalValidator handles this PrincipalClass>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <Validate WLS principal PATOU_MORGAN returns true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.validate(Principal) PrincipalValidator said the principal is valid>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.validate(Principal) One or more PrincipalValidators handled this PrincipalClass, returning true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.PrincipalValidationServiceImpl.validate(Principals) validated all principals>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <AuthorizationManager will use common security for ATZ>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <weblogic.security.service.WLSAuthorizationServiceWrapper.isAccessAllowed>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Identity=Subject: 1
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Roles=[ "Anonymous" "consumer" ]>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Resource=type=<url>, application=D2, contextPath=/D2, uri=/X3_Portal.jsp, httpMethod=GET>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Direction=ONCE>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): input arguments:>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> < Subject: 1
        Principal = weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> < Roles:Anonymous, consumer>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> < Resource: type=<url>, application=D2, contextPath=/D2, uri=/X3_Portal.jsp, httpMethod=GET>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> < Direction: ONCE>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> < Context Handler: >
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <Accessed Subject: Id=urn:oasis:names:tc:xacml:2.0:subject:role, Value=[Anonymous,consumer]>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <Evaluate urn:oasis:names:tc:xacml:1.0:function:string-at-least-one-member-of([consumer,consumer],[Anonymous,consumer]) -> true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <primary-rule evaluates to Permit>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <urn:bea:xacml:2.0:entitlement:resource:type@E@Furl@G@M@Oapplication@ED2@M@OcontextPath@E@UD2@M@Ouri@E@U@K@M@OhttpMethod@EGET, 1.0 evaluates to Permit>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): returning PERMIT>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed AccessDecision returned PERMIT>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AuthorizationServiceImpl.isAccessAllowed returning adjudicated: true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <AuthorizationManager will use common security for ATZ>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <weblogic.security.service.WLSAuthorizationServiceWrapper.isAccessAllowed>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Identity=Subject: 1
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Roles=[ "Anonymous" "consumer" ]>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Resource=type=<url>, application=D2, contextPath=/D2, uri=/x3_portal/x3_portal.nocache.js, httpMethod=GET>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Direction=ONCE>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): input arguments:>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <        Subject: 1
        Principal = weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <        Roles:Anonymous, consumer>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <        Resource: type=<url>, application=D2, contextPath=/D2, uri=/x3_portal/x3_portal.nocache.js, httpMethod=GET>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <        Direction: ONCE>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <        Context Handler: >
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <Accessed Subject: Id=urn:oasis:names:tc:xacml:2.0:subject:role, Value=[Anonymous,consumer]>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <Evaluate urn:oasis:names:tc:xacml:1.0:function:string-at-least-one-member-of([consumer,consumer],[Anonymous,consumer]) -> true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <primary-rule evaluates to Permit>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <urn:bea:xacml:2.0:entitlement:resource:type@E@Furl@G@M@Oapplication@ED2@M@OcontextPath@E@UD2@M@Ouri@E@U@K@M@OhttpMethod@EGET, 1.0 evaluates to Permit>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): returning PERMIT>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed AccessDecision returned PERMIT>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AuthorizationServiceImpl.isAccessAllowed returning adjudicated: true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <AuthorizationManager will use common security for ATZ>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <weblogic.security.service.WLSAuthorizationServiceWrapper.isAccessAllowed>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Identity=Subject: 1
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Roles=[ "Anonymous" "consumer" ]>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Resource=type=<url>, application=D2, contextPath=/D2, uri=/x3_portal/79E87564DA9D026D7ADA9326E414D4FC.cache.js, httpMethod=GET>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Direction=ONCE>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): input arguments:>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <        Subject: 1
        Principal = weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <        Roles:Anonymous, consumer>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <        Resource: type=<url>, application=D2, contextPath=/D2, uri=/x3_portal/79E87564DA9D026D7ADA9326E414D4FC.cache.js, httpMethod=GET>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <        Direction: ONCE>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <        Context Handler: >
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <Accessed Subject: Id=urn:oasis:names:tc:xacml:2.0:subject:role, Value=[Anonymous,consumer]>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <Evaluate urn:oasis:names:tc:xacml:1.0:function:string-at-least-one-member-of([consumer,consumer],[Anonymous,consumer]) -> true>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <primary-rule evaluates to Permit>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <urn:bea:xacml:2.0:entitlement:resource:type@E@Furl@G@M@Oapplication@ED2@M@OcontextPath@E@UD2@M@Ouri@E@U@K@M@OhttpMethod@EGET, 1.0 evaluates to Permit>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): returning PERMIT>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed AccessDecision returned PERMIT>
<Nov 21, 2017 9:25:55 AM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AuthorizationServiceImpl.isAccessAllowed returning adjudicated: true>

 

On the other hand, WebLogic Server 2 (with the issue) didn't even start to validate the user; it just started the full SAML2 process all over again:

<Nov 21, 2017 10:34:53 AM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityCacheServiceImpl.cachedIdentity(Subject: 1
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
),cacheKey= idm:_def_Idmusr:PATOU_MORGAN>
<Nov 21, 2017 10:34:53 AM UTC> <Debug> <SecuritySAML2Service> <Using redirect URL from request cache: 'https://d2.company.com:443/D2/X3_Portal.jsp'>
<Nov 21, 2017 10:34:53 AM UTC> <Debug> <SecuritySAML2Service> <Redirecting to URL: https://d2.company.com:443/D2/X3_Portal.jsp>

<Nov 21, 2017 10:34:53 AM UTC> <Debug> <SecuritySAML2Service> <SAML2Filter: Processing request on URI '/D2/X3_Portal.jsp'>
<Nov 21, 2017 10:34:53 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): request URI is '/D2/X3_Portal.jsp'>
<Nov 21, 2017 10:34:53 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): request URI is not a service URI>
<Nov 21, 2017 10:34:53 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): returning service type 'SPinitiator'>
<Nov 21, 2017 10:34:53 AM UTC> <Debug> <SecuritySAML2Service> <SP initiating authn request: processing>
<Nov 21, 2017 10:34:53 AM UTC> <Debug> <SecuritySAML2Service> <SP initiating authn request: partner id is null>
<Nov 21, 2017 10:34:54 AM UTC> <Debug> <SecuritySAML2Service> <SP initiating authn request: use partner binding HTTP/POST>
<Nov 21, 2017 10:34:54 AM UTC> <Debug> <SecuritySAML2Service> <post from template url: null>
<Nov 21, 2017 10:34:54 AM UTC> <Debug> <SecuritySAML2Service> <SAML2Servlet: Processing request on URI '/saml2/sp/acs/post'>
<Nov 21, 2017 10:34:54 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): request URI is '/saml2/sp/acs/post'>
<Nov 21, 2017 10:34:54 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): service URI is '/sp/acs/post'>
<Nov 21, 2017 10:34:54 AM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): returning service type 'ACS'>
<Nov 21, 2017 10:34:54 AM UTC> <Debug> <SecuritySAML2Service> <Assertion consumer service: processing>

 

WebLogic Server 2 is acting as if this were a completely new request and not an existing one… That was exactly my reaction when I saw this log, so I checked the session identifier!

Whenever you configure an application to support SAML2 SSO, the session identifier must be the default one. If you set anything else, you end up with this infinite authentication loop because WebLogic considers a successful authentication as a non-authenticated one. This is described in the Oracle documentation.
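
For illustration, a custom session-descriptor like the following one in weblogic.xml is enough to trigger the loop (the cookie name D2SESSIONID below is purely a hypothetical example, not the value that was used in this environment):

  <session-descriptor>
    <cookie-name>D2SESSIONID</cookie-name> <!-- hypothetical custom value: anything other than the default triggers the loop -->
    <cookie-http-only>false</cookie-http-only>
  </session-descriptor>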

Solving this issue is pretty simple… Either comment out the session identifier or set it back to its default value (below I'm simply resetting it to the default):

[weblogic@weblogic_server_01 ~]$ cd $APPS_HOME
[weblogic@weblogic_server_01 apps]$ jar -xvf D2.war WEB-INF/weblogic.xml
 inflated: WEB-INF/weblogic.xml
[weblogic@weblogic_server_01 apps]$
[weblogic@weblogic_server_01 apps]$ sed -i 's,<cookie-name>[^<]*<,<cookie-name>JSESSIONID<,' WEB-INF/weblogic.xml
[weblogic@weblogic_server_01 apps]$
[weblogic@weblogic_server_01 apps]$ grep -A3 -B2 cookie-name WEB-INF/weblogic.xml

  <session-descriptor>
    <cookie-name>JSESSIONID</cookie-name>
    <cookie-http-only>false</cookie-http-only>
  </session-descriptor>

[weblogic@weblogic_server_01 apps]$
[weblogic@weblogic_server_01 apps]$ jar -uvf D2.war WEB-INF/weblogic.xml
adding: WEB-INF/weblogic.xml(in = 901) (out= 432)(deflated 52%)
[weblogic@weblogic_server_01 apps]$

 

Then you just need to redeploy your application and it will work again. :)
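
For reference, the redeployment can be done from the Administration Console or on the command line with weblogic.Deployer, once setWLSEnv.sh has been sourced. This is only a sketch: the admin URL, credentials and deployment name are placeholders to adapt to your domain:

[weblogic@weblogic_server_01 apps]$ java weblogic.Deployer -adminurl t3://weblogic_server_01:7001 -username weblogic -password '********' -redeploy -name D2 -source D2.war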

 

 

The article WebLogic – SSO/Atn/Atz – Infinite loop appeared first on Blog dbi services.

Server process name in Postgres and Oracle

Yann Neuhaus - Fri, 2018-02-09 16:01

Every database analysis should start with system load analysis. If the host is starved of CPU, then looking at other statistics can be pointless. With ‘top’ on Linux, or an equivalent such as Process Explorer on Windows, you see the processes (and threads). If the name of the process is meaningful, you already have a clue about the active sessions. Postgres goes further by showing the operation (which SQL command), the state (running or waiting), and the identification of the client.

Postgres

By default, ‘top’ displays the program name (like ‘comm’ in /proc or in ‘ps’ format), which will be ‘postgres’ for all PostgreSQL processes. But you can also display the command line with ‘c’ in interactive mode, or by starting directly with ‘top -c’, which is the same as /proc/$pid/cmdline or ‘cmd’ or ‘args’ in ‘ps’ format.


top -c
 
Tasks: 263 total, 13 running, 250 sleeping, 0 stopped, 0 zombie
%Cpu(s): 24.4 us, 5.0 sy, 0.0 ni, 68.5 id, 0.9 wa, 0.0 hi, 1.2 si, 0.0 st
KiB Mem : 4044424 total, 558000 free, 2731380 used, 755044 buff/cache
KiB Swap: 421884 total, 418904 free, 2980 used. 2107088 avail Mem
 
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20347 postgres 20 0 394760 11660 8696 S 7.6 0.3 0:00.49 postgres: demo demo 192.168.56.125(37664) DELETE
20365 postgres 20 0 393816 11448 8736 S 6.9 0.3 0:00.37 postgres: demo demo 192.168.56.125(37669) idle
20346 postgres 20 0 393800 11440 8736 S 6.6 0.3 0:00.37 postgres: demo demo 192.168.56.125(37663) UPDATE
20356 postgres 20 0 396056 12480 8736 S 6.6 0.3 0:00.42 postgres: demo demo 192.168.56.125(37667) INSERT
20357 postgres 20 0 393768 11396 8736 S 6.6 0.3 0:00.40 postgres: demo demo 192.168.56.125(37668) DELETE waiting
20366 postgres 20 0 394728 11652 8736 S 6.6 0.3 0:00.35 postgres: demo demo 192.168.56.125(37670) UPDATE
20387 postgres 20 0 394088 11420 8720 S 6.6 0.3 0:00.41 postgres: demo demo 192.168.56.125(37676) UPDATE
20336 postgres 20 0 395032 12436 8736 S 6.3 0.3 0:00.37 postgres: demo demo 192.168.56.125(37661) UPDATE
20320 postgres 20 0 395032 12468 8736 R 5.9 0.3 0:00.33 postgres: demo demo 192.168.56.125(37658) DROP TABLE
20348 postgres 20 0 395016 12360 8736 R 5.9 0.3 0:00.33 postgres: demo demo 192.168.56.125(37665) VACUUM
20371 postgres 20 0 396008 12708 8736 R 5.9 0.3 0:00.40 postgres: demo demo 192.168.56.125(37673) INSERT
20321 postgres 20 0 396040 12516 8736 D 5.6 0.3 0:00.31 postgres: demo demo 192.168.56.125(37659) INSERT
20333 postgres 20 0 395016 11920 8700 R 5.6 0.3 0:00.36 postgres: demo demo 192.168.56.125(37660) UPDATE
20368 postgres 20 0 393768 11396 8736 R 5.6 0.3 0:00.43 postgres: demo demo 192.168.56.125(37671) UPDATE
20372 postgres 20 0 393768 11396 8736 R 5.6 0.3 0:00.36 postgres: demo demo 192.168.56.125(37674) INSERT
20340 postgres 20 0 394728 11700 8736 S 5.3 0.3 0:00.40 postgres: demo demo 192.168.56.125(37662) idle
20355 postgres 20 0 394120 11628 8672 S 5.3 0.3 0:00.32 postgres: demo demo 192.168.56.125(37666) DELETE waiting
20389 postgres 20 0 395016 12196 8724 R 5.3 0.3 0:00.37 postgres: demo demo 192.168.56.125(37677) UPDATE
20370 postgres 20 0 393768 11392 8736 S 4.6 0.3 0:00.34 postgres: demo demo 192.168.56.125(37672) DELETE
20376 postgres 20 0 393816 11436 8736 S 4.6 0.3 0:00.37 postgres: demo demo 192.168.56.125(37675) DELETE waiting
20243 postgres 20 0 392364 5124 3696 S 1.0 0.1 0:00.06 postgres: wal writer process

This is very useful information. Postgres changes the process title when it executes a statement. In this example:

  • ‘postgres:’ is the name of the process
  • ‘demo demo’ are the database name and the user name
  • ‘192.168.56.125(37664)’ are the IP address and port of the client.
  • DELETE, UPDATE… are the commands. They are more or less the command name displayed in the feedback after the command completes
  • ‘idle’ is for sessions not currently running a statement
  • ‘waiting’ is added when the session is waiting on a blocking session (enqueued on a lock, for example)
  • ‘wal writer process’ is a background process

This is especially useful because we get, in the same sample, the Postgres session state (idle, waiting or running an operation) together with the Linux process state (S when sleeping, R when runnable or running, D when in I/O,…).
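
The same information is also visible from inside the database through pg_stat_activity, which is handy when you have no shell access to the host. A minimal sketch (the wait_event_type column exists in PostgreSQL 9.6 and later):

psql -d demo -c "select pid, datname, usename, client_addr, client_port, state, wait_event_type, left(query,40) as query from pg_stat_activity;"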

Oracle

With Oracle, you have ASH to sample the session state, but being able to see it at the OS level would be great. It would also be a safeguard if we ever need to kill a process.
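
For reference, this sampled state can be queried from ASH like this (a minimal sketch; note that V$ACTIVE_SESSION_HISTORY requires the Diagnostics Pack option):

sqlplus -s system/oracle <<< 'select sample_time, session_id, session_state, event, sql_id from v$active_session_history where sample_time > sysdate - 5/24/60;'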

Unfortunately, the Oracle process names do not change while the session is running. They are set at connection time.

The background processes mention the Oracle process name and the Instance name:

[oracle@VM122 ~]$ ps -u oracle -o pid,comm,cmd,args | head
 
PID COMMAND CMD COMMAND
1873 ora_pmon_cdb2 ora_pmon_CDB2 ora_pmon_CDB2
1875 ora_clmn_cdb2 ora_clmn_CDB2 ora_clmn_CDB2
1877 ora_psp0_cdb2 ora_psp0_CDB2 ora_psp0_CDB2
1880 ora_vktm_cdb2 ora_vktm_CDB2 ora_vktm_CDB2
1884 ora_gen0_cdb2 ora_gen0_CDB2 ora_gen0_CDB2

The foreground processes mention the instance and the connection type, LOCAL=YES for bequeath, LOCAL=NO for remote via listener.


[oracle@VM122 ~]$ ps -u oracle -o pid,comm,cmd,args | grep -E "[ ]oracle_|[ ]PID"
 
PID COMMAND CMD COMMAND
21429 oracle_21429_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21431 oracle_21431_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21451 oracle_21451_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21517 oracle_21517_cd oracleCDB1 (LOCAL=NO) oracleCDB1 (LOCAL=NO)

You need to join V$PROCESS with V$SESSION on (V$PROCESS.ADDR=V$SESSION.PADDR) to find the state, operation, and client information.
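
The join itself is straightforward. Here is a simple sketch of it (adapt the credentials and the column list to your needs):

sqlplus -s system/oracle <<< 'select p.spid, s.sid, s.serial#, s.username, s.status, s.machine, s.program, s.sql_id from v$process p join v$session s on s.paddr = p.addr;'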

For fun, you can change the program name (ARGV0) and arguments (ARGS).

The local connections can change the name in the BEQueath connection string:


sqlplus -s system/oracle@"(ADDRESS=(PROTOCOL=BEQ)(PROGRAM=$ORACLE_HOME/bin/oracle)(ARGV0=postgres)(ARGS='(DESCRIPTION=(LOCAL=MAYBE)(ADDRESS=(PROTOCOL=BEQ)))')(ENVS='ORACLE_HOME=$ORACLE_HOME,ORACLE_SID=CDB1'))" <<< "host ps -u oracle -o pid,comm,cmd,args | grep -E '[ ]oracle_|[ ]PID'"
 
PID COMMAND CMD COMMAND
21155 oracle_21155_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21176 oracle_21176_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21429 oracle_21429_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21431 oracle_21431_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21451 oracle_21451_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21517 oracle_21517_cd oracleCDB1 (LOCAL=NO) oracleCDB1 (LOCAL=NO)
22593 oracle_22593_cd postgres (DESCRIPTION=(LOCA postgres (DESCRIPTION=(LOCAL=MAYBE)(ADDRESS=(PROTOCOL=BEQ)))

The remote connection can have the name changed from the static registration, adding an ARGV0 value on the listener side:


LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))
SID_LIST_LISTENER=(SID_LIST=
(SID_DESC=(GLOBAL_DBNAME=MYAPP)(ARGV0=myapp)(SID_NAME=CDB1)(ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1))
(SID_DESC=(GLOBAL_DBNAME=CDB1_DGMGRL)(SID_NAME=CDB1)(ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1))
(SID_DESC=(GLOBAL_DBNAME=CDB2_DGMGRL)(SID_NAME=CDB2)(ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1))
)
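
The new static registration can be taken into account without restarting the listener, for example with a reload (a sketch, assuming the listener name used above):

[oracle@VM122 ~]$ lsnrctl reload LISTENER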

Once the listener has been reloaded with this (ARGV0=myapp) to identify connections coming through the MYAPP service, we can check the process names again:

[oracle@VM122 ~]$ sqlplus -s system/oracle@//localhost/MYAPP <<< "host ps -u oracle -o pid,comm,cmd,args | grep -E '[ ]oracle_|[ ]PID'"
PID COMMAND CMD COMMAND
21155 oracle_21155_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21176 oracle_21176_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21429 oracle_21429_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21431 oracle_21431_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21451 oracle_21451_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21517 oracle_21517_cd oracleCDB1 (LOCAL=NO) oracleCDB1 (LOCAL=NO)
24261 oracle_24261_cd myapp (LOCAL=NO) myapp (LOCAL=NO)

However, I would not recommend changing the defaults. This can be very confusing for people expecting the ora_xxxx_SID and oracleSID process names.

 

The article Server process name in Postgres and Oracle appeared first on Blog dbi services.

The Transition (or: How I Learned to Stop Worrying and Love PostgreSQL)

Don Seiler - Fri, 2018-02-09 14:31
Note: Yes I re-(ab)used the title.



As some of you may know (if you follow me on twitter), after 16 years as an Oracle DBA, I made a career shift last summer. I hung up the Oracle spurs (and that sweet Oracle Ace vest) and threw on a pair of PostgreSQL chaps. I had always been a fan and very casual user of PostgreSQL for many years, even gently lobbying the folks at Pythian to add PostgreSQL DBA services to their offering (to no avail).

I'm taking this fresh start to also jump-start my blogging, hoping to write about interesting notes as I dive deeper in the PostgreSQL world (and even some Hadoop as I get into that). I'm just over 6 months into my new adventure and loving it. I'm excited to be dealing a lot more with an open source community, and interacting daily with some contributors via Slack.

Over the past 6 months, I've compiled a list of topics to write about. If I can remember the details or find my notes, I'll hopefully be writing regularly on these topics!

So, I hope to see more of you regularly here. Stay tuned!
Categories: DBA Blogs

Collaborate 2018 Workshops

Jim Marion - Fri, 2018-02-09 13:16

I will be delivering two workshops at Collaborate 2018:

  • Creating Fluid Pages // Sunday, April 22, 2018 // 12:30 PM - 4:00 PM // Through hands-on activities, students will gain experience and confidence building Fluid pages. Activities include building a Fluid page using the standard and two-column layouts, using delivered class names for layout, using PeopleCode to initialize the layout, and much more. Because this is a hands-on training class, bring your laptop to participate in exercises. This course is designed for Functional Business Analysts as well as Developers.
  • Using Fluid to Build Better-than-breadcrumb Navigation // Thursday, April 26, 2018 // 8:30 AM - 12:00 PM // The most common reason organizations retain Classic instead of migrating to Fluid is navigation. Users love their Classic breadcrumbs and do not appreciate the Navbar. Through hands-on activities, students will learn how to build next-generation navigation that rivals breadcrumbs. This is a hands-on training session, so bring your laptop to participate in exercises. This course is for subject matter experts, functional business analysts, developers, and administrators.

These are hands-on workshops. Please be sure to add these workshops to your registration when you register for Collaborate. If you are already registered, you may want to revisit your registration to add these workshops. A full list of workshops and workshop details is available on the Quest PeopleSoft Workshop page.

I look forward to seeing you in April!
