Feed aggregator

oracle listener config

Tom Kyte - Wed, 2016-07-13 11:46
I am trying to understand how the oracle listener gets its config info. I see this when I run lsnrctl status: Listener Parameter File /home/oracle/u01/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora Listener Log File /home/oracle/u0...
Categories: DBA Blogs

Archiving log buffer directly to archive logs

Tom Kyte - Wed, 2016-07-13 11:46
Is it possible to archive the contents of the log buffer directly to archive logs, instead of writing into redo logs? Will this decrease the load on system I/O? Please explain the reason if it is not possible.
Categories: DBA Blogs

Using SparkSQL and Pandas to Import Data into Hive and Big Data Discovery

Rittman Mead Consulting - Wed, 2016-07-13 09:02
Using SparkSQL and Pandas to Import Data into Hive and Big Data Discovery

Big Data Discovery (BDD) is a great tool for exploring, transforming, and visualising data stored in your organisation’s Data Reservoir. I presented a workshop on it at a recent conference, and got an interesting question from the audience that I thought I’d explore further here. Currently the primary route for getting data into BDD requires that it be (i) in HDFS and (ii) have a Hive table defined on top of it. From there, BDD automagically ingests the Hive table, or the dataprocessingCLI is manually called which prompts the BDD DGraph engine to go and sample (or read in full) the Hive dataset.

This is great, and works well where the dataset is vast (this is Big Data, after all) and needs the sampling that DGraph provides. It’s also simple enough for Hive tables that have already been defined, perhaps by another team. But - and this was the gist of the question that I got - what about where the Hive table doesn’t exist already? Because if it doesn’t, we now need to declare all the columns as well as choose the all-important SerDe in order to read the data.

SerDes are brilliant, in that they enable the application of a schema-on-read to data in many forms, but at the very early stages of a data project there are probably going to be lots of formats of data (such as TSV, CSV, JSON, as well as log files and so on) from varying sources. Choosing the relevant SerDe for each one, and making sure that BDD is also configured with the necessary jar, as well as manually listing each column to be defined in the table, adds overhead to the project. Wouldn’t it be nice if we could side-step this step somehow? In this article we’ll see how!

Importing Datasets through BDD Studio

Before we get into more fancy options, don’t forget that BDD itself offers the facility to upload CSV, TSV, and XLSX files, as well as connect to JDBC datasources. Data imported this way will be stored by BDD in a Hive table and ingested to DGraph.

Using SparkSQL and Pandas to Import Data into Hive and Big Data Discovery

This is great for smaller files held locally. But what about files on your BDD cluster that are too large to upload from your local machine, or in other formats, such as JSON?

Loading a CSV file

As we’ve just seen, CSV files can be imported to Hive/BDD directly through the GUI. But perhaps you’ve got a large CSV file sat local to BDD that you want to import? Or a folder full of varying CSV files that would be too time-consuming to upload through the GUI one-by-one?

For this we can use BDD Shell with the Python Pandas library, and I’m going to do so here through the excellent Jupyter Notebooks interface. You can read more about these here and details of how to configure them on BigDataLite 4.5 here. The great thing about notebooks, whether Jupyter or Zeppelin, is that I don’t need to write any more blog text here - I can simply embed the notebook inline and it is self-documenting:

https://gist.github.com/76b477f69303dd8a9d8ee460a341c445

(gist link)

Note that at the end of this we call dataprocessingCLI to automatically bring the new table into BDD’s DGraph engine for use in BDD Studio. If you’ve got BDD configured to automagically add new Hive tables, or you don’t want to run this step, you can just comment it out.
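If the embedded notebook doesn’t render for you, here’s a minimal sketch of the pattern it follows (the file path and table name are illustrative assumptions, not the exact gist contents, and sqlContext is the HiveContext provided by the BDD Shell / pyspark session):

import pandas as pd

csv_path = '/home/oracle/incoming/ratings.csv'   # hypothetical local CSV file
tablename = 'default.ratings'                    # hypothetical target Hive table

# Pandas reads the CSV and infers the column names and types for us
pdf = pd.read_csv(csv_path)

# Hand the Pandas dataframe to SparkSQL and persist it as a Hive table
sdf = sqlContext.createDataFrame(pdf)
sdf.write.mode('Overwrite').saveAsTable(tablename)

# dataprocessingCLI would then be run against the new table so that BDD's
# DGraph engine ingests it (exact invocation omitted here -- see the gist).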

Loading simple JSON data

Whilst CSV files are tabular by definition, JSON records can contain nested objects (recursively), as well as arrays. Let’s look at an example of using SparkSQL to import a simple flat JSON file, before then considering how we handle nested and array formats. Note that SparkSQL can read datasets from both local (file://) storage as well as HDFS (hdfs://):

https://gist.github.com/8b7118c230f34f7d57bd9b0aa4e0c34c

(gist link)
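Again, in case the gist doesn’t render, the flat JSON case boils down to something like this (path and table name are assumptions; sqlContext is the notebook’s HiveContext):

# Read a flat JSON file -- SparkSQL infers the schema automatically
df = sqlContext.read.json('file:///home/oracle/incoming/simple_data.json')
df.printSchema()

# Persist it straight to Hive, ready for BDD to ingest
df.write.mode('Overwrite').saveAsTable('default.simple_data')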

Once loaded into Hive, it can be viewed in Hue:

Using SparkSQL and Pandas to Import Data into Hive and Big Data Discovery

Loading nested JSON data

What’s been great so far, whether loading CSV, XLS, or simple JSON, is that we’ve not had to list out column names. All that needs modifying in the scripts above to import a different file with a different set of columns is to change the filename and the target tablename. Now we’re going to look at an example of a JSON file with nested objects - which is very common in JSON - and we’re going to have to roll our sleeves up a tad and start hardcoding some schema details.

First up, we import the JSON to a SparkSQL dataframe as before (although this time I’m loading it from HDFS, but local works too):

df = sqlContext.read.json('hdfs:///user/oracle/incoming/twitter/2016/07/12/')  

Then I declare this as a temporary table, which enables me to subsequently run queries with SQL against it:

df.registerTempTable("twitter")  

A very simple example of a SQL query would be to look at the record count:

result_df = sqlContext.sql("select count(*) from twitter")  
result_df.show()

+----+  
| _c0|  
+----+  
|3011|  
+----+

The result of a sqlContext.sql invocation is a dataframe, which above I’m assigning to a new variable, but I could as easily run:

sqlContext.sql("select count(*) from twitter").show()  

for the same result.

The sqlContext has inferred the JSON schema automagically, and we can inspect it using

df.printSchema()  

The twitter schema is huge, so I’m just quoting a few choice sections of it here to illustrate subsequent points:

root  
|-- created_at: string (nullable = true)  
|-- entities: struct (nullable = true)  
|    |-- hashtags: array (nullable = true)  
|    |    |-- element: struct (containsNull = true)  
|    |    |    |-- indices: array (nullable = true)  
|    |    |    |    |-- element: long (containsNull = true)  
|    |    |    |-- text: string (nullable = true)  
|    |-- user_mentions: array (nullable = true)  
|    |    |-- element: struct (containsNull = true)  
|    |    |    |-- id: long (nullable = true)  
|    |    |    |-- id_str: string (nullable = true)  
|    |    |    |-- indices: array (nullable = true)  
|    |    |    |    |-- element: long (containsNull = true)  
|    |    |    |-- name: string (nullable = true)  
|    |    |    |-- screen_name: string (nullable = true)  
|-- source: string (nullable = true)  
|-- text: string (nullable = true)  
|-- timestamp_ms: string (nullable = true)  
|-- truncated: boolean (nullable = true)  
|-- user: struct (nullable = true)  
|    |-- followers_count: long (nullable = true)  
|    |-- following: string (nullable = true)  
|    |-- friends_count: long (nullable = true)  
|    |-- name: string (nullable = true)  
|    |-- screen_name: string (nullable = true)

Points to note about the schema:

  • In the root of the schema we have attributes such as text and created_at
  • There are nested elements (“struct”) such as user, and within it screen_name, followers_count, etc.
  • There are also array objects, where an attribute can occur more than once, such as hashtags and user_mentions.

Accessing root and nested attributes is easy - we just use dot notation:

sqlContext.sql("SELECT created_at, user.screen_name, text FROM twitter").show()

+--------------------+--------------+--------------------+  
|          created_at|   screen_name|                text|  
+--------------------+--------------+--------------------+  
|Tue Jul 12 16:13:...|  Snehalstocks|"Students need to...|  
|Tue Jul 12 16:13:...|   KingMarkT93|Ga caya :( https:...|

We can save this as a dataframe that’s then persisted to Hive, for ingest into BDD:

subset02 = sqlContext.sql("SELECT created_at, user.screen_name, text FROM twitter")  
tablename = 'twitter_user_text'  
qualified_tablename='default.' + tablename  
subset02.write.mode('Overwrite').saveAsTable(qualified_tablename)  

Which in Hue looks like this:

Using SparkSQL and Pandas to Import Data into Hive and Big Data Discovery

Attributes in an array are a bit more tricky. Here’s an example tweet with multiple user_mentions and a hashtag too:

https://twitter.com/flederbine/status/752940179569115136

Here we use the LATERAL VIEW syntax, with the optional OUTER operator since not all tweets have these additional entities, and we want to make sure we show all tweets including those that don’t have these entities. Here’s the SQL formatted for reading:

SELECT id,  
created_at,  
user.screen_name,  
text as tweet_text,  
hashtag.text as hashtag,  
user_mentions.screen_name as mentioned_user  
from twitter  
LATERAL VIEW OUTER explode(entities.user_mentions) user_mentionsTable as user_mentions  
LATERAL VIEW OUTER explode(entities.hashtags) hashtagsTable AS hashtag  

Which, when run from sqlContext.sql(), gives us:

+------------------+--------------------+---------------+--------------------+-------+---------------+  
|                id|          created_at|    screen_name|          tweet_text|hashtag|    screen_name|  
+------------------+--------------------+---------------+--------------------+-------+---------------+  
|752940179569115136|Tue Jul 12 18:58:...|     flederbine|@johnnyq72 @orcld...|ImALLin|      johnnyq72|  
|752940179569115136|Tue Jul 12 18:58:...|     flederbine|@johnnyq72 @orcld...|ImALLin|       orcldoug|  
|752940179569115136|Tue Jul 12 18:58:...|     flederbine|@johnnyq72 @orcld...|ImALLin|          rmoff|  
|752940179569115136|Tue Jul 12 18:58:...|     flederbine|@johnnyq72 @orcld...|ImALLin|    markrittman|  
|752940179569115136|Tue Jul 12 18:58:...|     flederbine|@johnnyq72 @orcld...|ImALLin|     mikedurran|  
+------------------+--------------------+---------------+--------------------+-------+---------------+

and written back to Hive for ingest to BDD:

Using SparkSQL and Pandas to Import Data into Hive and Big Data Discovery
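The write-back itself follows the same saveAsTable pattern as before, roughly as below, with the target table name being an assumption:

# Run the LATERAL VIEW query above and persist the flattened result to Hive
flattened = sqlContext.sql("""
SELECT id,
       created_at,
       user.screen_name,
       text as tweet_text,
       hashtag.text as hashtag,
       user_mentions.screen_name as mentioned_user
FROM twitter
LATERAL VIEW OUTER explode(entities.user_mentions) user_mentionsTable as user_mentions
LATERAL VIEW OUTER explode(entities.hashtags) hashtagsTable AS hashtag
""")
flattened.write.mode('Overwrite').saveAsTable('default.twitter_entities')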

You can use these SQL queries both for simply flattening JSON, as above, or for building summary tables, such as this one showing the most common hashtags in the dataset:

sqlContext.sql("SELECT hashtag.text,count(*) as inst_count from twitter  LATERAL VIEW OUTER explode(entities.hashtags) hashtagsTable AS hashtag GROUP BY hashtag.text order by inst_count desc").show(4)

+-----------+----------+  
|       text|inst_count|  
+-----------+----------+  
|     Hadoop|       165|  
|     Oracle|       151|  
|        job|       128|  
|    BigData|       112|

You can find the full Jupyter Notebook with all these nested/array JSON examples here:

https://gist.github.com/a38e853d3a7dcb48a9df99ce1e3505ff

(gist link)

You may decide after looking at this that you’d rather just go back to Hive and SerDes, and as is frequently the case in ‘data wrangling’ there are multiple ways to achieve the same end. The route you take comes down to personal preference and familiarity with the toolsets. In this particular case I'd still go for SparkSQL for the initial exploration as it's quicker to 'poke around' the dataset than defining and re-defining Hive tables -- YMMV. A final point to consider is that SparkSQL importing JSON and saving back to HDFS/Hive is a static process, and if your underlying data is changing (e.g. streaming to HDFS from Flume) then you would probably want a Hive table over the HDFS file so that it is live when queried.
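One hedged option for that streaming case is to define an external Hive table with a JSON SerDe over the landing directory, so new files are picked up at query time. The sketch below assumes the hive-hcatalog-core jar (which provides org.apache.hive.hcatalog.data.JsonSerDe) is available to Hive, and the table, column names, and HDFS path are purely illustrative:

# Sketch: a 'live' external Hive table over the HDFS landing directory.
# Assumes the HCatalog JSON SerDe jar is on Hive's classpath; names and
# path below are illustrative only.
sqlContext.sql("""
CREATE EXTERNAL TABLE IF NOT EXISTS twitter_live (
  created_at   STRING,
  text         STRING,
  timestamp_ms STRING
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION 'hdfs:///user/oracle/incoming/twitter/'
""")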

Loading an Excel workbook with many sheets

This was the use-case that led me to researching programmatic import of datasets in the first place. I was doing some work with a dataset of road traffic accident data, which included a single XLS file with over 30 sheets, each a lookup table for a separate set of dimension attributes. Importing each sheet one by one through the BDD GUI was tedious, and being a lazy geek, I looked to automate it.

Using the Pandas read_excel function and a smidge of Python to loop through each sheet, it was easily done. You can see the full notebook here:

https://gist.github.com/rmoff/3fa5d857df8ca5895356c22e420f3b22

(gist link)
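The core of it is just a loop over the workbook’s sheets, something along these lines, where the file path and table-naming scheme are assumptions rather than the exact gist contents:

import re
import pandas as pd

xls_path = '/home/oracle/accident_lookups.xls'   # hypothetical workbook
xl = pd.ExcelFile(xls_path)

for sheet in xl.sheet_names:
    # One Pandas dataframe per sheet
    pdf = xl.parse(sheet)
    # Derive a Hive-friendly table name from the sheet name
    tablename = 'default.lookup_' + re.sub('[^0-9a-zA-Z]+', '_', sheet).lower()
    # Persist each sheet as its own Hive table, ready for BDD ingest
    sqlContext.createDataFrame(pdf).write.mode('Overwrite').saveAsTable(tablename)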

Categories: BI & Warehousing

Consuming MCS Custom API using SDK for Android

Recently experimenting with the latest Oracle Mobile Cloud Service SDK for Android, I was pleasantly surprised how easily you can call an MCS Custom API using the SDK with just a few lines of code. In common...

Categories: DBA Blogs

VirtualBox 5.1

Tim Hall - Wed, 2016-07-13 07:16

VirtualBox 5.1 has been released!

Downloads and changelog in the usual places.

I’ve only done the installation on my Windows 7 desktop so far. It seems to be working fine at the moment. I’ll do the Mac and Linux ones when I get home tonight.

Cheers

Tim…

Update: Worked fine on “El Crapitan” too. Still haven’t tried a Linux host.


GoldenGate 12.2 DDLOPTIONS MAPSESSIONSCHEMA

Michael Dinh - Tue, 2016-07-12 23:36

This is probably not a 12.2-specific feature, and since I only tested on 12.2, I am not going to vouch for other versions.

Be careful when qualifying schema with DDL replication especially when the source and target schemas are not the same.

Example: Source schema is DEMO and Target schema is DEMO_REP

What happens to DDL replication when the following is used?

alter table demo.t add constraint t_pk primary key(id) using index demo.t_pk_ix;

oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$ sqlplus demo/demo @t.sql

SQL*Plus: Release 11.2.0.4.0 Production on Tue Jul 12 20:36:55 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Release 11.2.0.4.0 - 64bit Production

ARROW:(DEMO@hawk):PRIMARY> drop table demo.t purge;

Table dropped.

ARROW:(DEMO@hawk):PRIMARY> create table demo.t(id int);

Table created.

ARROW:(DEMO@hawk):PRIMARY> create unique index demo.t_pk_ix on demo.t(id);

Index created.

ARROW:(DEMO@hawk):PRIMARY> alter table demo.t add constraint t_pk primary key(id) using index demo.t_pk_ix;

Table altered.

ARROW:(DEMO@hawk):PRIMARY>
Target: ggserr.log
2016-07-12 20:37:00  INFO    OGG-00482  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL found, operation [create table demo.t(id int) (size 27)].
2016-07-12 20:37:00  INFO    OGG-00489  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL is of mapped scope, after mapping new operation [create table "DEMO_REP"."T"(id int) (size 35)].

2016-07-12 20:37:00  INFO    OGG-00482  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL found, operation [create unique index demo.t_pk_ix on demo.t(id) (size 46)].
2016-07-12 20:37:00  INFO    OGG-00489  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL is of mapped scope, after mapping new operation [create unique index "DEMO_REP"."T_PK_IX" on "DEMO_REP"."T"(id) (size 62)].

2016-07-12 20:37:00  INFO    OGG-00482  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL found, operation [alter table demo.t add constraint t_pk primary key(id) using index demo.t_pk_ix (size 79)].
2016-07-12 20:37:00  INFO    OGG-00489  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL is of mapped scope, after mapping new operation 
[alter table "DEMO_REP"."T" add constraint t_pk primary key(id) using index demo.t_pk_ix (size 87)].

2016-07-12 20:37:00  ERROR   OGG-00516  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  Fatal error executing DDL replication: error 
[Error code [1418], ORA-01418: specified index does not exist SQL alter table "DEMO_REP"."T" add constraint t_pk primary key(id) using index demo.t_pk_ix 
/* GOLDENGATE_DDL_REPLICATION */], due to explicit ABEND error handling and filter [include all (default)].
2016-07-12 20:37:04  ERROR   OGG-01668  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  PROCESS ABENDING.

Notice that using index demo.t_pk_ix did not get mapped accordingly.

How do we fix this? The simple solution is not to prefix the schema name.
2016-07-12 20:53:39  INFO    OGG-00482  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL found, operation [create table y(id int) (size 22)].
2016-07-12 20:53:39  INFO    OGG-00489  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL is of mapped scope, after mapping new operation [create table "DEMO_REP"."Y"(id int) (size 35)].

2016-07-12 20:53:39  INFO    OGG-00482  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL found, operation [create unique index y_pk_ix on y(id) (size 36)].
2016-07-12 20:53:39  INFO    OGG-00489  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL is of mapped scope, after mapping new operation [create unique index "DEMO_REP"."Y_PK_IX" on "DEMO_REP"."Y"(id) (size 62)].

2016-07-12 20:53:40  INFO    OGG-00482  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL found, operation [alter table y add constraint y_pk primary key(id) using index y_pk_ix (size 69)].
2016-07-12 20:53:40  INFO    OGG-00489  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL is of mapped scope, after mapping new operation 
[alter table "DEMO_REP"."Y" add constraint y_pk primary key(id) using index y_pk_ix (size 82)].
I know what you are probably thinking: what if logging in to the schema is not allowed?

DDLOPTIONS MAPSESSIONSCHEMA DEMO, TARGET DEMO_REP, MAPSCHEMAS

2016-07-12 21:27:10  INFO    OGG-00482  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL found, operation [alter table demo.t add constraint t_pk primary key(id) using index demo.t_pk_ix (size 79)].
2016-07-12 21:27:10  INFO    OGG-00489  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL is of mapped scope, after mapping new operation 
[alter table "DEMO_REP"."T" add constraint t_pk primary key(id) using index demo.t_pk_ix (size 87)].
2016-07-12 21:27:10  INFO    OGG-01369  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL operation mapped to target database [T], new DDL operation 
[alter table "DEMO_REP"."T" add constraint t_pk primary key(id) using index DEMO_REP.t_pk_ix (size 91)].
2016-07-12 21:27:10  INFO    OGG-00487  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL operation included [INCLUDE MAPPED], optype [ALTER], objtype [TABLE], objowner [DEMO_REP], objname [T].
2016-07-12 21:27:10  INFO    OGG-01562  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  Source schema DEMO is mapped to target schema DEMO_REP to set the current schema for DDL execution.
2016-07-12 21:27:10  INFO    OGG-01407  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  Setting current schema for DDL operation to [DEMO_REP].
2016-07-12 21:27:10  INFO    OGG-00484  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  Executing DDL operation.
2016-07-12 21:27:10  INFO    OGG-00483  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL operation successful.
2016-07-12 21:27:10  INFO    OGG-01408  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  Restoring current schema for DDL operation to [GGS_ADMIN].
Happy Replication!

Reference: https://docs.oracle.com/goldengate/1212/gg-winux/GWURF/gg_parameters037.htm#GWURF451


Oracle JET and RequireJS

What is RequireJS and why is it important?

RequireJS is a JavaScript file and module loader. Oracle JET uses Require to load only the libraries and modules/components that are needed for a particular part of an Oracle JET application.

As the JavaScript world has taken off, web applications have grown large, and monolithic client.js files have become the norm. This type of code “organization” is difficult to maintain, read and test. In addition, more and more libraries, frameworks, plugins, etc. are being included in applications, making the loading of those resources complicated and slow. Truly, it is a waste to load every script file for each page of an application if it is not needed to run that particular page.

Require was born out of the need to reduce this code complexity. As such, it improves the speed and quality of our code. At its heart, RequireJS was designed to encourage and support modular development.

What is modular development?

Modular development separates out code into distinct functional units. This kind of organization is easy to maintain, easy to read (when coming into an existing project, for example), easy to test, and increases code re-usability. RequireJS supports the Asynchronous Module Definition (AMD) API for JavaScript modules. AMD has a particular way of encapsulating a module and embraces asynchronous loading of a module and its dependencies:

Factory Function

In this module, we call define with an array of the dependencies needed. The dependencies are passed into the factory function as arguments. Importantly, the function is only executed once the required dependencies are loaded.

What does Require look like in Oracle JET?

In an Oracle JET application, RequireJS is set up in the main.js (aka “bootstrap”) file. First we need to configure the paths to the various scripts/libraries needed for the app. Here is an example of the RequireJS configuration in the main.js file of the Oracle JET QuickStart template. It establishes the names and paths to all of the various libraries necessary to run the application:

RequireJS configuration

 

Next we have the top-level “require” call which “starts” our application. It follows the AMD API method of encapsulating the module with the require, and passing in dependencies as an array of string values, then executing the callback function once the dependencies have loaded.

Top Level Require

Here we are requiring any scripts and modules needed to load the application, and subsequently calling the function that creates the initial view. Any other code which is used in the initial view of the application is also written here (routing, for example). Note, we only pass in the dependencies that we need to load the initial application, saving valuable resources.

Using RequireJS in other modules/viewModels

RequireJS is also used in the other JavaScript files of a JET application to define viewModels. The syntax used, however, is slightly different, and can be confusing. Let’s take a look:

View Model RequireJS Syntax

Here we are passing in an array of dependencies, but we’re using “define”, and not “require.” In short, “define” is used to facilitate module definition, while “require” is used to handle dependency loading. In a module definition, for example, we can utilize “require” WITHIN a module to fetch other dependencies dynamically. “Require” is typically used to load code in the top-level JavaScript file, and “define” is used to define a module, or distinct functional portion of the application.

Oracle JET makes use of RequireJS to support modular development. Require manages the many JavaScript files and module dependencies needed in an Oracle JET application. It simplifies and organizes the development process, and makes reading, writing and testing code much more straightforward.

The post Oracle JET and RequireJS appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

List all caches in Oracle

Tom Kyte - Tue, 2016-07-12 17:26
Hi Tom, I would like to list all caches in Oracle. Are there any useful SQL queries which can help me to see some useful performance data from Oracle's caches? BR, Peter
Categories: DBA Blogs

CASE statement

Tom Kyte - Tue, 2016-07-12 17:26
Hello Gurus, Please correct me if am framing wrong CASE statement using multiple columns (CASE WHEN (ENA_PRE1 IS NULL AND ENA_RT1 IS NULL) THEN RT_UNE IS NULL ELSE RT_UNE END) Thank you in Advance.
Categories: DBA Blogs

How to allow 100% CPU?

Tom Kyte - Tue, 2016-07-12 17:26
Hi I am running Oracle Database 11g Release 11.2.0.1.0 (Standard Edition) - 64bit Production on Windows 2008 R2, on a physical machine with a 4 core CPU. Whatever I do, the maximum CPU utilization of oracle.exe process is 25%. This is annoying b...
Categories: DBA Blogs

Column view definition in data dictionary

Tom Kyte - Tue, 2016-07-12 17:26
Hello there, Is it possible to get view column definition from data dictionary? I mean something similar as for tables - user_tab_columns. Thanks, Dusan
Categories: DBA Blogs

SELECT column from TABLE3, UPDATE column in TABLE2, INSERT all column in TABLE1

Tom Kyte - Tue, 2016-07-12 17:26
Hi I need to solve some difficult logic process. create table aa ( id int, name_child varchar2(25) ); create table bb ( id int, name_master varchar2(25) ); insert into bb values('-1', 'DUMMY'); bb is the master table, aa is the child...
Categories: DBA Blogs

difference b/w row database and column database, how the data will compressed in database

Tom Kyte - Tue, 2016-07-12 17:26
Hi Tom, I have to questions i.e; 1.what is difference b/w row database and column database, how can we create indexes on column database? 2.how the data will be compressed in database(on what basis data will be compressed in database)?
Categories: DBA Blogs

Using BULK COLLECT

Tom Kyte - Tue, 2016-07-12 17:26
Hi Tom, I am running into an issue while using BULK COLLECT INTO in a stored procedure. I use a cursor to get data (42 million rows, 70 columns) from a remote database (a table in Teradata), then insert those records into a table in Oracle. After ...
Categories: DBA Blogs

SQL Server 2016 AlwaysOn: Direct seeding and backup considerations

Yann Neuhaus - Tue, 2016-07-12 15:27

As promised in my prior blog post, we’ll talk now about the direct seeding feature and backups. We found an interesting LIMIT_CONCURRENT_BACKUPS value for the current state of the hadr_automatic_seeding_state_transition event. As noted last time, this value is meaningful by itself.

So let’s try to perform a backup while the database is involved in an active seeding session. In fact, my backup was blocked (as I expected) and here is what I found from the sys.dm_tran_locks and sys.dm_exec_requests DMVs.

SELECT	
	r.command,
	r.wait_type,
	r.wait_resource,
	DB_NAME(tl.resource_database_id) as [database_name],
	tl.resource_type,
	tl.resource_subtype,
	tl.request_mode,
	tl.request_type,
	tl.request_status,
	r.session_id as blocked_session_id,
	r.blocking_session_id
FROM sys.dm_tran_locks as tl
join sys.dm_exec_requests as r
	on tl.request_session_id = r.session_id
WHERE tl.request_session_id = <concerned session>

 

  • The blocked session (my backup)

blog 100 - AG direct seeding - 1 - blocked session

  • Blocking session (my direct seeding session)

blog 100 - AG direct seeding - 2 - blocking session

In short, my backup session is waiting on the database object with a LCK_M_U lock type that is already held by my direct seeding session. But the interesting part is that adding a database with direct seeding mode seems to be similar to performing a backup operation from a locking perspective. We also recognize characteristic wait types that occur during a backup operation (ASYNC_IO_COMPLETION, BACKUPBUFFER). So direct seeding as a streaming backup? Well, that sounds good!

Well, let’s continue with this other query:

SELECT 
	r.command,
	r.wait_type,
	r.wait_resource,
	r.scheduler_id
from sys.dm_exec_requests as r
join sys.dm_os_schedulers as s
	on s.scheduler_id = r.scheduler_id
where r.session_id = 82
	AND s.status = 'VISIBLE ONLINE'

 

blog 100 - AG direct seeding - 3 - direct seeding internal

Do you notice the command text related to the direct seeding session? VDI_CLIENT_WORKER seems to indicate that SQL Server is using the virtual device interface internally during the seeding operation (the VDI prefix standing for Virtual Device Interface). If SQL Server uses backup streams and the VDI interface internally during the seeding operation, it may explain why it has to limit concurrent backups in our case.

How can we check whether SQL Server uses VDI during a direct seeding operation? Well, we know that using VDI implies using sqlvdi.dll. So my first idea consisted of checking whether the corresponding dll is loaded as a module used by the sqlservr.exe process, and the tasklist tool is a good candidate for that.

blog 100 - AG direct seeding - 4 - direct seeding sqlvdi dll

The above picture seems to confirm my assumption, but this test is not perfect. Indeed, we might also argue that something else could have triggered the load of the sqlvdi.dll. So my second idea was to capture the stack dump with the windbg tool while running the seeding session, and here is what I was able to see:

blog 100 - AG direct seeding - 5 - direct seeding and windbg

We may recognize CHadrBuildSourceStateLimitConcurrentBackups (similar to the LIMIT_CONCURRENT_BACKUPS value from the hadr_automatic_seeding_state_transition event), which is a parameter to the function CallFunctAndResumeBuildTask(). Note also the use of the sqlmin!CHadrBuildSourceStateSeeding::CreateVdiSession function, which seems to confirm that SQL Server is using a VDI session during the seeding process.

Ok, after this fun test, let’s get back to something more important to keep in mind: we know that direct seeding will limit concurrent backups. This implies that our existing RPO may be impacted, especially if the seeding operation takes a very long time.

Happy database seeding!!

The article SQL Server 2016 AlwaysOn: Direct seeding and backup considerations appeared first on Blog dbi services.

In-Database Archiving in Oracle Database - Data Archiving in 12C

Learn oracle 12c database management - Tue, 2016-07-12 14:29

In-Database Archiving in Oracle Database 12c - Data Archiving in 12C

Rather than deleting data physically, some applications have a concept of "mark for delete" (logical delete), so the data remains present in the table but is not visible to the application. This can be achieved by doing the following.

Add an extra column to the relevant tables that holds a flag to indicate the data is deleted.
Add an extra predicate to every statement that checks the deleted status, like "WHERE deleted = 'N'", to exclude the deleted rows from the SQL. The predicate can be hard coded into the SQL, or applied dynamically using a security policy, like in Virtual Private Database (VPD).

In-Database Archiving is a feature added to Oracle Database 12c to allow this type of "mark for delete" functionality out-of-the-box, with fewer changes to the existing application code.

Enable In-Database Archiving
Archiving (Deleting) Rows
Displaying Archived Rows


Enable In-Database Archiving

The ROW ARCHIVAL clause is used to enable in-database archiving. It can be used during table creation as part of the CREATE TABLE command, or after table creation using the ALTER TABLE command.

DROP TABLE ARCH_TEST PURGE;

-- Create the table with in-database with archiving of data enabled.
CREATE TABLE ARCH_TEST (
  id          NUMBER,
  description VARCHAR2(50),
  CONSTRAINT ARCH_TEST_pk PRIMARY KEY (id)
)
ROW ARCHIVAL;


-- Insert test data into table ARCH_TEST.
BEGIN
  FOR n IN 1..750 LOOP
    INSERT INTO arch_test (id, description)
    VALUES (n, 'VALUE OF N IS ' || n);
  END LOOP;
  COMMIT;
END;
/

-- Check the contents of the table.

SELECT COUNT(*) FROM ARCH_TEST;

  COUNT(*)
----------
       750

 
COLUMN column_name FORMAT A20
COLUMN data_type FORMAT A20

SELECT column_id,
       column_name,
       data_type,
       data_length,
       hidden_column
FROM   user_tab_cols
WHERE  table_name = 'ARCH_TEST'
ORDER BY column_id;


 COLUMN_ID COLUMN_NAME          DATA_TYPE            DATA_LENGTH HID
---------- -------------------- -------------------- ----------- ---
         1 ID                   NUMBER                        22 NO
         2 DESCRIPTION          VARCHAR2                      50 NO
           ORA_ARCHIVE_STATE    VARCHAR2                    4000 YES

  

By default, this column is populated with the value '0' for each row.

COLUMN ora_archive_state FORMAT A20

SELECT ora_archive_state, COUNT(*)
FROM   ARCH_TEST
GROUP BY ora_archive_state
ORDER BY ora_archive_state;   
  
ORA_ARCHIVE_STATE      COUNT(*)
-------------------- ----------
0                           750  
  
-- Disable, the re-enable in-database archiving.
ALTER TABLE ARCH_TEST NO ROW ARCHIVAL;
ALTER TABLE ARCH_TEST ROW ARCHIVAL;

************************************************************
Archiving (Deleting) Rows
************************************************************
Rather than deleting unneeded rows, update the ORA_ARCHIVE_STATE system generated hidden column with the value '1'. This will make the rows invisible to your applications.

UPDATE ARCH_TEST
SET    ora_archive_state = '1'
WHERE  id BETWEEN 550 and 750;
COMMIT;

SELECT COUNT(*) FROM ARCH_TEST;

  COUNT(*)
----------
       549

  
We can actually set ORA_ARCHIVE_STATE column to any string value other than '0' to archive the data, but the DBMS_ILM package uses the following constants.

ARCHIVE_STATE_ACTIVE='0'
ARCHIVE_STATE_ARCHIVED='1'

SQL>

************************************************************
Displaying Archived Rows
************************************************************

The hidden rows can be made visible to a session by setting ROW ARCHIVAL VISIBILITY to the value ALL. Setting it back to ACTIVE makes the rows invisible again.

-- Make archived rows visible.
ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;

SELECT COUNT(*) FROM ARCH_TEST;

  COUNT(*)
----------
       750

SQL>


COLUMN ora_archive_state FORMAT A20

SELECT ora_archive_state, COUNT(*)
FROM   ARCH_TEST
GROUP BY ora_archive_state
ORDER BY ora_archive_state;

ORA_ARCHIVE_STATE      COUNT(*)
-------------------- ----------
0                           549
1                           201

2 rows selected.

SQL>


-- Make archived rows invisible again.
ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ACTIVE;

SELECT COUNT(*) FROM ARCH_TEST;

 COUNT(*)
----------
       549

SQL>
Categories: DBA Blogs


Enrich Your Fluid Applications with Third-Party Integrations

PeopleSoft Technology Blog - Tue, 2016-07-12 14:04

Content provided by Balaji Pattabhiraman

PeopleSoft applications teams are delivering lots of great content in their images for the Fluid User experience. In addition, though, there are many things customers can do to enhance their applications to meet unique requirements. This post is the next in a series of capabilities that PeopleTools provides.

In this post we'll cover how to invoke javascript APIs to integrate third party elements within PeopleSoft Fluid pages. In this example, we're calling a google maps API for Resource location. PeopleTools 8.54 (and higher) provides the ability to create responsive Fluid components. In addition to making the layout responsive, PeopleTools provides developers with mechanisms to run any javascript API (your own or third party) within your PeopleSoft application.

We'll cover the following in this example:

  • Build a sample fluid application with two panel layout. The left panel loads a set of projects along with details such as the number of resources. The right panel loads the map.
  • Set a default so the user sees the map centered to the user's current location using HTML5 Geolocation. Resources nearby their current location are plotted on the map.
  • Selecting a Project on the left panel updates the map and indicates where resources are located.
  • Searching for a location (using the google places lookup API) centers the map to the selected location and shows the resources near the location in the map.

Note that this blog does not get into the details of the Google Maps or Places JavaScript API. Neither does Oracle PeopleSoft certify usage of Google maps. This is a technology demonstration on how to integrate PeopleSoft with third party JavaScript APIs. Please read the following on Custom JavaScript Guidelines.

First let’s look at the completed application and how it behaves for users, then we’ll show how you build a page to work like this.

Below is a PeopleSoft page.  When the page loads, the HTML5 geo location pops up asking for permission to use the user’s current location.  (If this is not granted, the map will center on the resource of the first project.)

Select Allow so that the map is centered based on the user's current location and the resources nearest that location.

Click on the caret tab to expand the left panel.  This shows us a list of projects.

In this scenario, we’ve selected the Implementation Project.  Notice that this project has 21 resources. The map now shows the resources by locations (using markers).

Clicking a marker (flag) shows the resource details.  In this case, we click on the Flag located on San Jose, CA, which shows us the 13 resources located at that location.


You can also search for resources at a different location using the Location box. This will use the Place Finder to suggest locations as you type. Here we’ve entered Tokyo, Japan. This loads the map centered on that location and the resources available nearest to the location.

In this example, the resource isn’t exactly geographically close!

So how did we achieve this?  The following are the high-level steps used to build this.

In App Designer, create a two panel layout using the delivered layout page PSL_TWOPANEL.  (Note that we provide many Fluid page templates in App Designer.)

This creates a default two panel layout page, where you can place your page fields.  Remember, in Fluid, designing a page no longer uses a WYSIWYG page design.  Instead, you have a series of group boxes into which you place your page elements.  The style sheets will take care of placement on the displayed screen.

Next we place the required fields on the page.  There are three main sections to consider:   1) A group box that will be the map field (named Map Canvas in the screen below)  2) A list grid that will hold the Project list, which is inside the subpage PC_MAP_PRJ_SBF, and 3) The section on top of the map (named PC_MAP_PNLHDR_SBF) that contains fields like the location.  We include them on the page illustrated below.  This example is available in a delivered Financials application (although it uses Oracle maps instead of Google maps).

Next we add the PeopleCode to load the list of projects/resources and update the left panel with those details (list grid rows). The APIs to populate the grid rowset are the same as in Classic PeopleSoft, so we will skip those details in this blog.  In addition, we can specify style classes for the page fields using the Fluid tab (Default Style Name) section on the form fields. These style classes can be delivered out of the box, or you can create your own style sheet and include it for your page using the AddStyleSheet API in component post build. 

Now we get to the interesting part of populating the map.  First we define the JavaScript in an HTML file that will load the map initially or upon user action. You can write any JavaScript function routines.  You can look at examples in the Google Maps JavaScript API documentation here. These methods are defined in the HTML object PC_MAP_JS.  Here the method pc_loadMap is responsible for doing geolocation, populating the map, enabling the Places API, and also calling the routine to set the markers.

The following lines enable the page's Location edit box (with page field name PACINPUT) as control for google places API input:

var input = document.getElementById('PACINPUT');
var searchBox = new google.maps.places.SearchBox(input);

The following lines associate the page's map canvas as the section for map. Initially, the map is centered by current location:

navigator.geolocation.getCurrentPosition(function(position) { posGeo = { lat: position.coords.latitude, lng: position.coords.longitude };
var map = new google.maps.Map(document.getElementById('%formnamedivPC_MAP_NUI_WRK2_MAP_CANVAS'), {
center: posGeo,
zoom: 10,
mapTypeId: google.maps.MapTypeId.ROADMAP
});

The following lines of code add the markers by looping through the resource details for a given project. The details are read from the database and stored in a page field in the component PeopleCode. It is read here in JavaScript to populate the markers:

for (var i = 0; i < gdata_PC_MAP_DATA_DVW$0.length; i++) {
var data = window.gdata_PC_MAP_DATA_DVW$0[i];
var marker = new google.maps.Marker({ position: {lat: parseFloat(resource[1]), lng: parseFloat(resource[2])}, map: map, icon: image, shape: shape, title: resource[5] });

The lines below wire up the click event on a map marker to open the PeopleSoft modal page that displays the data. The openResourceInfoWindow will do a form submit (submitAction_%formname) to invoke the PeopleCode back from JavaScript to open the modal:

marker.addListener('click', function() {
     openResourceInfoWindow(marker, data[3],data[4], "M");
});
//Update the map and set the zoom level after adding the markers
map.setCenter(lasPos);
map.setZoom(5);

Now include the Style Sheet (if you have custom style classes applied on the page fields) and the JavaScript (we just defined) to be used for the component.

Now call the JavaScript function using AddOnLoadScript when the user selects a project on the left panel as part of the field change PeopleCode. (You can call JavaScript from PeopleCode events in fluid!).  This will reload the map.

You're off and running!

TNS-00583: Valid node checking: unable to parse configuration parameters

Learn oracle 12c database management - Tue, 2016-07-12 11:23
TNS-12560: TNS:protocol adapter error
TNS-00583: Valid node checking: unable to parse configuration parameters

I received the following errors when trying to start up my listener. I verified the listener.ora and sqlnet.ora files and everything seemed to look normal.


[oracle@Linux03 encryption_keystore]$ lsnrctl start

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 12-JUL-2016 10:05:32

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Starting /u01/app/oracle/product/12.1.0.2/db_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.1.0.2.0 - Production
System parameter file is /u01/app/oracle/product/12.1.0.2/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/Linux03/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=Linux03)(PORT=1521)))
TNS-12560: TNS:protocol adapter error
 TNS-00583: Valid node checking: unable to parse configuration parameters



Listener failed to start. See the error message(s) above...

[oracle@Linux03 encryption_keystore]$ lsnrctl start

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 12-JUL-2016 10:07:41

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Starting /u01/app/oracle/product/12.1.0.2/db_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.1.0.2.0 - Production
System parameter file is /u01/app/oracle/product/12.1.0.2/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/Linux03/listener/alert/log.xml
Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=Linux03)(PORT=1521)))
TNS-12560: TNS:protocol adapter error
 TNS-00583: Valid node checking: unable to parse configuration parameters



Listener failed to start. See the error message(s) above...

  
I had below line in my sqlnet.ora file.

[oracle@Linux03 admin]$ cat sqlnet.ora_bak
# sqlnet.ora Network Configuration File: /u01/app/oracle/product/12.1.0.2/db_1/network/admin/sqlnet.ora
# Generated by Oracle configuration tools.

NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

ENCRYPTION_WALLET_LOCATION =
  (SOURCE =(METHOD = FILE)(METHOD_DATA =
    (DIRECTORY = /u01/app/oracle/admin/$ORACLE_SID/encryption_keystore/)
   

Solution (in my case): Removing the ENCRYPTION_WALLET_LOCATION info did the trick for me.


[oracle@Linux03 encryption_keystore]$ lsnrctl start

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 12-JUL-2016 10:31:41

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Starting /u01/app/oracle/product/12.1.0.2/db_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.1.0.2.0 - Production
System parameter file is /u01/app/oracle/product/12.1.0.2/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/Linux03/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=Linux03)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=Linux03)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                12-JUL-2016 10:31:41
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/12.1.0.2/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/Linux03/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=Linux03)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
The listener supports no services
The command completed successfully



Categories: DBA Blogs

TNS-00583: Valid node checking: unable to parse configuration parameters

Learn DB Concepts with me... - Tue, 2016-07-12 11:19
TNS-12560: TNS:protocol adapter error
TNS-00583: Valid node checking: unable to parse configuration parameters

I received the following errors when trying to start up my listener. I verified the listener.ora and sqlnet.ora files and everything seemed to look normal.
 
[oracle@Linux03 encryption_keystore]$ lsnrctl start

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 12-JUL-2016 10:05:32

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Starting /u01/app/oracle/product/12.1.0.2/db_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.1.0.2.0 - Production
System parameter file is /u01/app/oracle/product/12.1.0.2/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/Linux03/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=Linux03)(PORT=1521)))
TNS-12560: TNS:protocol adapter error
 TNS-00583: Valid node checking: unable to parse configuration parameters



Listener failed to start. See the error message(s) above...

[oracle@Linux03 encryption_keystore]$ lsnrctl start

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 12-JUL-2016 10:07:41

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Starting /u01/app/oracle/product/12.1.0.2/db_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.1.0.2.0 - Production
System parameter file is /u01/app/oracle/product/12.1.0.2/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/Linux03/listener/alert/log.xml
Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=Linux03)(PORT=1521)))
TNS-12560: TNS:protocol adapter error
 TNS-00583: Valid node checking: unable to parse configuration parameters



Listener failed to start. See the error message(s) above...


I had below line in my sqlnet.ora file.

[oracle@Linux03 admin]$ cat sqlnet.ora_bak
# sqlnet.ora Network Configuration File: /u01/app/oracle/product/12.1.0.2/db_1/network/admin/sqlnet.ora
# Generated by Oracle configuration tools.

NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

ENCRYPTION_WALLET_LOCATION =
  (SOURCE =(METHOD = FILE)(METHOD_DATA =
    (DIRECTORY = /u01/app/oracle/admin/$ORACLE_SID/encryption_keystore/)  
  

Solution (In my case):

1. Tried to remove the ENCRYPTION_WALLET_LOCATION info and it worked for me. I knew there was something wrong with the syntax.
2. Copied the content from ENCRYPTION_WALLET* into notepad ++ . It helped me realize that I was missing two closing parenthesis )) at end.
3. Added them at end. Started the listener and worked.


[oracle@Linux03 encryption_keystore]$ lsnrctl start

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 12-JUL-2016 10:31:41

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Starting /u01/app/oracle/product/12.1.0.2/db_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.1.0.2.0 - Production
System parameter file is /u01/app/oracle/product/12.1.0.2/db_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/Linux03/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=Linux03)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=Linux03)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                12-JUL-2016 10:31:41
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/12.1.0.2/db_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/Linux03/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=Linux03)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
The listener supports no services
The command completed successfully


Also check these if the above trick doesn't help:

1.  The listener.ora or sqlnet.ora file contains special characters.
2.  The listener.ora or sqlnet.ora file is in the wrong format or has a syntax error.
3.  The listener.ora or sqlnet.ora file has some left-justified parentheses which are not accepted by the Oracle parser.
Categories: DBA Blogs
