Feed aggregator

oracle listener config

Tom Kyte - Wed, 2016-07-13 11:46
I am trying to understand how the oracle listener gets its config info. I see this when I run lsnrctl status: Listener Parameter File /home/oracle/u01/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora Listener Log File /home/oracle/u0...
Categories: DBA Blogs

Archiving log buffer directly to archive logs

Tom Kyte - Wed, 2016-07-13 11:46
Is it possible to archive the contents of the log buffer directly to archive logs, instead of writing into redo logs? Will this decrease the load on system I/O? Please explain the reason if it is not possible.
Categories: DBA Blogs

Using SparkSQL and Pandas to Import Data into Hive and Big Data Discovery

Rittman Mead Consulting - Wed, 2016-07-13 09:02

Big Data Discovery (BDD) is a great tool for exploring, transforming, and visualising data stored in your organisation’s Data Reservoir. I presented a workshop on it at a recent conference, and got an interesting question from the audience that I thought I’d explore further here. Currently the primary route for getting data into BDD requires that it be (i) in HDFS and (ii) have a Hive table defined on top of it. From there, BDD automagically ingests the Hive table, or the dataprocessingCLI is manually called which prompts the BDD DGraph engine to go and sample (or read in full) the Hive dataset.

This is great, and works well where the dataset is vast (this is Big Data, after all) and needs the sampling that DGraph provides. It’s also simple enough for Hive tables that have already been defined, perhaps by another team. But - and this was the gist of the question that I got - what about where the Hive table doesn’t exist already? Because if it doesn’t, we now need to declare all the columns as well as choose the all-important SerDe in order to read the data.

SerDes are brilliant, in that they enable the application of a schema-on-read to data in many forms, but at the very early stages of a data project there are probably going to be lots of formats of data (such as TSV, CSV, JSON, as well as log files and so on) from varying sources. Choosing the relevant SerDe for each one, and making sure that BDD is also configured with the necessary jar, as well as manually listing each column to be defined in the table, adds overhead to the project. Wouldn’t it be nice if we could side-step this step somehow? In this article we’ll see how!

Importing Datasets through BDD Studio

Before we get into more fancy options, don’t forget that BDD itself offers the facility to upload CSV, TSV, and XLSX files, as well as connect to JDBC datasources. Data imported this way will be stored by BDD in a Hive table and ingested to DGraph.

(screenshot)

This is great for smaller files held locally. But what about files on your BDD cluster that are too large to upload from a local machine, or that are in other formats, such as JSON?

Loading a CSV file

As we’ve just seen, CSV files can be imported to Hive/BDD directly through the GUI. But perhaps you’ve got a large CSV file sat local to BDD that you want to import? Or a folder full of varying CSV files that would be too time-consuming to upload through the GUI one-by-one?

For this we can use BDD Shell with the Python Pandas library, and I'm going to do so here through the excellent Jupyter Notebooks interface. You can read more about them here, along with details of how to configure them on BigDataLite 4.5 here. The great thing about notebooks, whether Jupyter or Zeppelin, is that I don't need to write any more blog text here - I can simply embed the notebook inline and it is self-documenting:


(gist link)
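
The embedded notebook carries the detail, but the core pattern is short. Here's a minimal sketch of the loop, assuming a folder of CSVs sat local to the BDD node and a SparkSQL context (sqlContext) already set up, as in BDD Shell; the path and table-name prefix are made up for illustration, and the dataprocessingCLI call at the end of the notebook is omitted:

import glob
import os
import pandas as pd

# read each local CSV with Pandas, then hand it to SparkSQL to persist as a Hive table
for csv_path in glob.glob('/home/oracle/incoming/*.csv'):
    pandas_df = pd.read_csv(csv_path)
    tablename = 'default.csv_' + os.path.splitext(os.path.basename(csv_path))[0].lower()
    sqlContext.createDataFrame(pandas_df).write.mode('overwrite').saveAsTable(tablename)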

Note that at the end of this we call dataprocessingCLI to automatically bring the new table into BDD's DGraph engine for use in BDD Studio. If you've got BDD configured to automagically add new Hive tables, or you don't want to run this step, you can just comment it out.

Loading simple JSON data

Whilst CSV files are tabular by definition, JSON records can contain nested objects (recursively), as well as arrays. Let’s look at an example of using SparkSQL to import a simple flat JSON file, before then considering how we handle nested and array formats. Note that SparkSQL can read datasets from both local (file://) storage as well as HDFS (hdfs://):


(gist link)
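
The gist has the full notebook, but the essence is just two lines - read the JSON into a dataframe, then persist it to Hive (the path and table name here are illustrative):

df = sqlContext.read.json('file:///home/oracle/incoming/simple.json')
df.write.mode('overwrite').saveAsTable('default.simple_json')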

Once loaded into Hive, it can be viewed in Hue:

(screenshot)

Loading nested JSON data

What’s been great so far, whether loading CSV, XLS, or simple JSON, is that we’ve not had to list out column names. All that needs modifying in the scripts above to import a different file with a different set of columns is to change the filename and the target tablename. Now we’re going to look at an example of a JSON file with nested objects - which is very common in JSON - and we’re going to have to roll our sleeves up a tad and start hardcoding some schema details.

First up, we import the JSON to a SparkSQL dataframe as before (although this time I’m loading it from HDFS, but local works too):

df = sqlContext.read.json('hdfs:///user/oracle/incoming/twitter/2016/07/12/')  

Then I declare this as a temporary table, which enables me to subsequently run SQL queries against it.
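
In Spark 1.x (the version shipped on BigDataLite 4.5) this is a single call on the dataframe:

df.registerTempTable('twitter')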


A very simple example of a SQL query would be to look at the record count:

result_df = sqlContext.sql("select count(*) from twitter")  

| _c0|  

The result of a sqlContext.sql invocation is a dataframe, which above I'm assigning to a new variable, but I could just as easily run:

sqlContext.sql("select count(*) from twitter").show()  

for the same result.

The sqlContext has inferred the JSON schema automagically, and we can inspect it using
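
df.printSchema()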


The twitter schema is huge, so I’m just quoting a few choice sections of it here to illustrate subsequent points:

|-- created_at: string (nullable = true)  
|-- entities: struct (nullable = true)  
|    |-- hashtags: array (nullable = true)  
|    |    |-- element: struct (containsNull = true)  
|    |    |    |-- indices: array (nullable = true)  
|    |    |    |    |-- element: long (containsNull = true)  
|    |    |    |-- text: string (nullable = true)  
|    |-- user_mentions: array (nullable = true)  
|    |    |-- element: struct (containsNull = true)  
|    |    |    |-- id: long (nullable = true)  
|    |    |    |-- id_str: string (nullable = true)  
|    |    |    |-- indices: array (nullable = true)  
|    |    |    |    |-- element: long (containsNull = true)  
|    |    |    |-- name: string (nullable = true)  
|    |    |    |-- screen_name: string (nullable = true)  
|-- source: string (nullable = true)  
|-- text: string (nullable = true)  
|-- timestamp_ms: string (nullable = true)  
|-- truncated: boolean (nullable = true)  
|-- user: struct (nullable = true)  
|    |-- followers_count: long (nullable = true)  
|    |-- following: string (nullable = true)  
|    |-- friends_count: long (nullable = true)  
|    |-- name: string (nullable = true)  
|    |-- screen_name: string (nullable = true)

Points to note about the schema:

  • In the root of the schema we have attributes such as text and created_at
  • There are nested elements (“struct”) such as user, and within it screen_name, followers_count, etc.
  • There are also arrays, where an attribute can occur more than once, such as hashtags and user_mentions.

Accessing root and nested attributes is easy - we just use dot notation:

sqlContext.sql("SELECT created_at, user.screen_name, text FROM twitter").show()

|          created_at|   screen_name|                text|  
|Tue Jul 12 16:13:...|  Snehalstocks|"Students need to...|  
|Tue Jul 12 16:13:...|   KingMarkT93|Ga caya :( https:...|

We can save this as a dataframe that’s then persisted to Hive, for ingest into BDD:

subset02 = sqlContext.sql("SELECT created_at, user.screen_name, text FROM twitter")  
tablename = 'twitter_user_text'  
qualified_tablename = 'default.' + tablename  
# persist the result to Hive so BDD can ingest it
subset02.write.mode('overwrite').saveAsTable(qualified_tablename)

Which in Hue looks like this:

(screenshot)

Attributes in an array are a bit more tricky. Here’s an example tweet with multiple user_mentions and a hashtag too:


Here we use the LATERAL VIEW syntax with the optional OUTER operator, since not all tweets have these additional entities and we want to show all tweets, including those that don't. Here's the SQL, formatted for reading:

SELECT id,  
text as tweet_text,  
hashtag.text as hashtag,  
user_mentions.screen_name as mentioned_user  
from twitter  
LATERAL VIEW OUTER explode(entities.user_mentions) user_mentionsTable as user_mentions  
LATERAL VIEW OUTER explode(entities.hashtags) hashtagsTable AS hashtag  

Which, when run via sqlContext.sql(), gives us:

|                id|          created_at|    screen_name|          tweet_text|hashtag|    screen_name|  
|752940179569115136|Tue Jul 12 18:58:...|     flederbine|@johnnyq72 @orcld...|ImALLin|      johnnyq72|  
|752940179569115136|Tue Jul 12 18:58:...|     flederbine|@johnnyq72 @orcld...|ImALLin|       orcldoug|  
|752940179569115136|Tue Jul 12 18:58:...|     flederbine|@johnnyq72 @orcld...|ImALLin|          rmoff|  
|752940179569115136|Tue Jul 12 18:58:...|     flederbine|@johnnyq72 @orcld...|ImALLin|    markrittman|  
|752940179569115136|Tue Jul 12 18:58:...|     flederbine|@johnnyq72 @orcld...|ImALLin|     mikedurran|  

and written back to Hive for ingest to BDD:

(screenshot)

You can use these SQL queries both for simply flattening JSON, as above, or for building summary tables, such as this one showing the most common hashtags in the dataset:

sqlContext.sql("SELECT hashtag.text,count(*) as inst_count from twitter  LATERAL VIEW OUTER explode(entities.hashtags) hashtagsTable AS hashtag GROUP BY hashtag.text order by inst_count desc").show(4)

|       text|inst_count|  
|     Hadoop|       165|  
|     Oracle|       151|  
|        job|       128|  
|    BigData|       112|

You can find the full Jupyter Notebook with all these nested/array JSON examples here:


(gist link)

You may decide after looking at this that you'd rather just go back to Hive and SerDes, and as is frequently the case in 'data wrangling' there are multiple ways to achieve the same end. The route you take comes down to personal preference and familiarity with the toolsets. In this particular case I'd still go for SparkSQL for the initial exploration, as it's quicker to 'poke around' the dataset than defining and re-defining Hive tables -- YMMV. A final point to consider: SparkSQL importing JSON and saving back to HDFS/Hive is a static process, so if your underlying data is changing (e.g. streaming to HDFS from Flume) then you would probably want a Hive table over the HDFS file so that it is live when queried.

Loading an Excel workbook with many sheets

This was the use-case that led me to researching programmatic import of datasets in the first place. I was doing some work with a road traffic accident dataset, which included a single XLS file with over 30 sheets, each a lookup table for a separate set of dimension attributes. Importing each sheet one by one through the BDD GUI was tedious, and being a lazy geek, I looked to automate it.

Using the Pandas read_excel function and a smidge of Python to loop through each sheet, it was easily done. You can see the full notebook here:


(gist link)
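
The notebook has the full version; the core loop is roughly as follows (the file path and table-name prefix are invented for illustration):

import pandas as pd

# open the workbook once, then create one Hive table per worksheet, named after the sheet
xl = pd.ExcelFile('/home/oracle/incoming/road_accident_lookups.xls')
for sheet in xl.sheet_names:
    pandas_df = xl.parse(sheet)
    tablename = 'default.accidents_' + sheet.lower().replace(' ', '_')
    sqlContext.createDataFrame(pandas_df).write.mode('overwrite').saveAsTable(tablename)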

Categories: BI & Warehousing

Consuming MCS Custom API using SDK for Android

Recently, experimenting with the latest Oracle Mobile Cloud Service SDK for Android, I was pleasantly surprised at how easily you can call an MCS Custom API using the SDK, with just a few lines of code. In common...

We share our skills to maximize your revenue!
Categories: DBA Blogs

VirtualBox 5.1

Tim Hall - Wed, 2016-07-13 07:16

VirtualBox 5.1 has been released!

Downloads and changelog in the usual places.

I’ve only done the installation on my Windows 7 desktop so far. It seems to be working fine at the moment. I’ll do the Mac and Linux ones when I get home tonight.



Update: Worked fine on “El Crapitan” too. Still haven’t tried a Linux host.



Michael Dinh - Tue, 2016-07-12 23:36

This is probably not a 12.2-specific feature, but since I only tested on 12.2, I am not going to vouch for other versions.

Be careful when qualifying schema with DDL replication especially when the source and target schemas are not the same.

Example: Source schema is DEMO and Target schema is DEMO_REP

What happens to DDL replication when the following is used?

alter table demo.t add constraint t_pk primary key(id) using index demo.t_pk_ix;

$ sqlplus demo/demo @t.sql

SQL*Plus: Release Production on Tue Jul 12 20:36:55 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Release - 64bit Production

ARROW:(DEMO@hawk):PRIMARY> drop table demo.t purge;

Table dropped.

ARROW:(DEMO@hawk):PRIMARY> create table demo.t(id int);

Table created.

ARROW:(DEMO@hawk):PRIMARY> create unique index demo.t_pk_ix on demo.t(id);

Index created.

ARROW:(DEMO@hawk):PRIMARY> alter table demo.t add constraint t_pk primary key(id) using index demo.t_pk_ix;

Table altered.

Target: ggserr.log
2016-07-12 20:37:00  INFO    OGG-00482  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL found, operation [create table demo.t(id int) (size 27)].
2016-07-12 20:37:00  INFO    OGG-00489  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL is of mapped scope, after mapping new operation [create table "DEMO_REP"."T"(id int) (size 35)].

2016-07-12 20:37:00  INFO    OGG-00482  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL found, operation [create unique index demo.t_pk_ix on demo.t(id) (size 46)].
2016-07-12 20:37:00  INFO    OGG-00489  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL is of mapped scope, after mapping new operation [create unique index "DEMO_REP"."T_PK_IX" on "DEMO_REP"."T"(id) (size 62)].

2016-07-12 20:37:00  INFO    OGG-00482  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL found, operation [alter table demo.t add constraint t_pk primary key(id) using index demo.t_pk_ix (size 79)].
2016-07-12 20:37:00  INFO    OGG-00489  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL is of mapped scope, after mapping new operation 
[alter table "DEMO_REP"."T" add constraint t_pk primary key(id) using index demo.t_pk_ix (size 87)].

2016-07-12 20:37:00  ERROR   OGG-00516  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  Fatal error executing DDL replication: error 
[Error code [1418], ORA-01418: specified index does not exist SQL alter table "DEMO_REP"."T" add constraint t_pk primary key(id) using index demo.t_pk_ix 
/* GOLDENGATE_DDL_REPLICATION */], due to explicit ABEND error handling and filter [include all (default)].
2016-07-12 20:37:04  ERROR   OGG-01668  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  PROCESS ABENDING.

Notice that the using index demo.t_pk_ix clause did not get mapped accordingly.

How do we fix this? The simple solution is: don't prefix the object names with the schema.

2016-07-12 20:53:39  INFO    OGG-00482  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL found, operation [create table y(id int) (size 22)].
2016-07-12 20:53:39  INFO    OGG-00489  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL is of mapped scope, after mapping new operation [create table "DEMO_REP"."Y"(id int) (size 35)].

2016-07-12 20:53:39  INFO    OGG-00482  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL found, operation [create unique index y_pk_ix on y(id) (size 36)].
2016-07-12 20:53:39  INFO    OGG-00489  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL is of mapped scope, after mapping new operation [create unique index "DEMO_REP"."Y_PK_IX" on "DEMO_REP"."Y"(id) (size 62)].

2016-07-12 20:53:40  INFO    OGG-00482  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL found, operation [alter table y add constraint y_pk primary key(id) using index y_pk_ix (size 69)].
2016-07-12 20:53:40  INFO    OGG-00489  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL is of mapped scope, after mapping new operation 
[alter table "DEMO_REP"."Y" add constraint y_pk primary key(id) using index y_pk_ix (size 82)].
I know what you are probably thinking: what if logging in to the schema is not allowed? As the log below shows, GoldenGate maps the source schema to the target schema and sets the current schema for DDL execution:


2016-07-12 21:27:10  INFO    OGG-00482  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL found, operation [alter table demo.t add constraint t_pk primary key(id) using index demo.t_pk_ix (size 79)].
2016-07-12 21:27:10  INFO    OGG-00489  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL is of mapped scope, after mapping new operation 
[alter table "DEMO_REP"."T" add constraint t_pk primary key(id) using index demo.t_pk_ix (size 87)].
2016-07-12 21:27:10  INFO    OGG-01369  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL operation mapped to target database [T], new DDL operation 
[alter table "DEMO_REP"."T" add constraint t_pk primary key(id) using index DEMO_REP.t_pk_ix (size 91)].
2016-07-12 21:27:10  INFO    OGG-00487  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL operation included [INCLUDE MAPPED], optype [ALTER], objtype [TABLE], objowner [DEMO_REP], objname [T].
2016-07-12 21:27:10  INFO    OGG-01562  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  Source schema DEMO is mapped to target schema DEMO_REP to set the current schema for DDL execution.
2016-07-12 21:27:10  INFO    OGG-01407  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  Setting current schema for DDL operation to [DEMO_REP].
2016-07-12 21:27:10  INFO    OGG-00484  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  Executing DDL operation.
2016-07-12 21:27:10  INFO    OGG-00483  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  DDL operation successful.
2016-07-12 21:27:10  INFO    OGG-01408  Oracle GoldenGate Delivery for Oracle, r_hawk.prm:  Restoring current schema for DDL operation to [GGS_ADMIN].
Happy Replication!

Reference: https://docs.oracle.com/goldengate/1212/gg-winux/GWURF/gg_parameters037.htm#GWURF451

Oracle JET and RequireJS

What is RequireJS and why is it important?

RequireJS is a JavaScript file and module loader. Oracle JET uses Require to load only the libraries and modules/components that are needed for a particular part of an Oracle JET application.

As the JavaScript world has taken off, web applications have grown large, and monolithic client.js files have become the norm. This type of code “organization” is difficult to maintain, read and test. In addition, more and more libraries, frameworks, plugins, etc. are being included in applications, making the loading of those resources complicated and slow. Truly, it is a waste to load every script file for each page of an application if it is not needed to run that particular page.

Require was born out of the need to reduce this code complexity. As such, it improves the speed and quality of our code. At its heart, RequireJS was designed to encourage and support modular development.

What is modular development?

Modular development separates out code into distinct functional units. This kind of organization is easy to maintain, easy to read (when coming into an existing project, for example), easy to test, and increases code re-usability. RequireJS supports the Asynchronous Module Definition (AMD) API for JavaScript modules. AMD has a particular way of encapsulating a module and embraces asynchronous loading of a module and its dependencies:

Factory Function

In this module, we call define with an array of the dependencies needed. The dependencies are passed into the factory function as arguments. Importantly, the function is only executed once the required dependencies are loaded.

What does Require look like in Oracle JET?

In an Oracle JET application, RequireJS is set up in the main.js (aka “bootstrap”) file. First we need to configure the paths to the various scripts/libraries needed for the app. Here is an example of the RequireJS configuration in the main.js file of the Oracle JET QuickStart template. It establishes the names and paths to all of the various libraries necessary to run the application:

RequireJS configuration


Next we have the top-level “require” call which “starts” our application. It follows the AMD API method of encapsulating the module with the require, passing in dependencies as an array of string values, then executing the callback function once the dependencies have loaded.

Top Level Require

Here we are requiring any scripts and modules needed to load the application, and subsequently calling the function that creates the initial view. Any other code which is used in the initial view of the application is also written here (routing, for example). Note, we only pass in the dependencies that we need to load the initial application, saving valuable resources.

Using RequireJS in other modules/viewModels

RequireJS is also used in the other JavaScript files of a JET application to define viewModels. The syntax used, however, is slightly different, and can be confusing. Let’s take a look:

View Model RequireJS Syntax

Here we are passing in an array of dependencies, but we’re using “define”, and not “require.” In short, “define” is used to facilitate module definition, while “require” is used to handle dependency loading. In a module definition, for example, we can utilize “require” WITHIN a module to fetch other dependencies dynamically. “Require” is typically used to load code in the top-level JavaScript file, and “define” is used to define a module, or distinct functional portion of the application.

Oracle JET makes use of RequireJS to support modular development. Require manages the many JavaScript files and module dependencies needed in an Oracle JET application. It simplifies and organizes the development process, and makes reading, writing and testing code much more straightforward.

The post Oracle JET and RequireJS appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

List all caches in Oracle

Tom Kyte - Tue, 2016-07-12 17:26
Hi Tom, I would like to list all caches in Oracle. Are there any useful SQL queries which can help me to see some useful performance data from Oracle's caches? BR, Peter
Categories: DBA Blogs

CASE statement

Tom Kyte - Tue, 2016-07-12 17:26
Hello Gurus, Please correct me if I am framing a wrong CASE statement using multiple columns (CASE WHEN (ENA_PRE1 IS NULL AND ENA_RT1 IS NULL) THEN RT_UNE IS NULL ELSE RT_UNE END) Thank you in advance.
Categories: DBA Blogs

How to allow 100% CPU?

Tom Kyte - Tue, 2016-07-12 17:26
Hi I am running Oracle Database 11g Release (Standard Edition) - 64bit Production on Windows 2008 R2, on a physical machine with a 4 core CPU. Whatever I do, the maximum CPU utilization of oracle.exe process is 25%. This is annoying b...
Categories: DBA Blogs

Column view definition in data dictionary

Tom Kyte - Tue, 2016-07-12 17:26
Hello there, Is it possible to get view column definition from data dictionary? I mean something similar as for tables - user_tab_columns. Thanks, Dusan
Categories: DBA Blogs

SELECT column from TABLE3, UPDATE column in TABLE2, INSERT all column in TABLE1

Tom Kyte - Tue, 2016-07-12 17:26
Hi I need to solve some difficult logic process. create table aa ( id int, name_child varchar2(25) ); create table bb ( id int, name_master varchar2(25) ); insert into bb values('-1', 'DUMMY'); bb is the master table, aa is the child...
Categories: DBA Blogs

difference b/w row database and column database, how the data will compressed in database

Tom Kyte - Tue, 2016-07-12 17:26
Hi Tom, I have two questions: 1. What is the difference b/w a row database and a column database, and how can we create indexes on a column database? 2. How is data compressed in the database (on what basis is data compressed)?
Categories: DBA Blogs


Tom Kyte - Tue, 2016-07-12 17:26
Hi Tom, I am running into an issue while using BULK COLLECT INTO in a stored procedure. I use a cursor to get data (42 million rows, 70 columns) from a remote database (a table in Teradata), then insert those records into a table in Oracle. After ...
Categories: DBA Blogs

SQL Server 2016 AlwaysOn: Direct seeding and backup considerations

Yann Neuhaus - Tue, 2016-07-12 15:27

As promised in my prior blog post, we'll talk now about the direct seeding feature and backups. We found an interesting LIMIT_CONCURRENT_BACKUPS value for the current state of the hadr_automatic_seeding_state_transition event. As noted last time, this value is meaningful by itself.

So let’s try to perform a backup while the database is involved in an active seeding session. In fact, my backup was blocked (as I supposed), and here is what I found from the sys.dm_tran_locks and sys.dm_exec_requests DMVs.

SELECT
	DB_NAME(tl.resource_database_id) as [database_name],
	r.session_id as blocked_session_id,
	tl.resource_type,
	tl.request_mode,
	tl.request_status
FROM sys.dm_tran_locks as tl
join sys.dm_exec_requests as r
	on tl.request_session_id = r.session_id
WHERE tl.request_session_id = <concerned session>


  • The blocked session (my backup)

blog 100 - AG direct seeding - 1 - blocked session

  • Blocking session (my direct seeding session)

blog 100 - AG direct seeding - 2 - blocking session

In short, my backup session is waiting on the database object with a LCK_M_U lock type that is already held by my direct seeding session. But the interesting part is that adding a database with direct seeding seems to be similar to performing a backup operation, from a locking perspective. We also recognize characteristic wait types that occur during a backup operation (ASYNC_IO_COMPLETION, BACKUPBUFFER). So direct seeding as a streaming backup? Well, that sounds good!

Well, let’s continue with this other query:

select
	r.session_id,
	r.command,
	r.status,
	s.scheduler_id
from sys.dm_exec_requests as r
join sys.dm_os_schedulers as s
	on s.scheduler_id = r.scheduler_id
where r.session_id = 82
	AND s.status = 'VISIBLE ONLINE'


blog 100 - AG direct seeding - 3 - direct seeding internal

Do you notice the command text related to the direct seeding session? VDI_CLIENT_WORKER seems to indicate that SQL Server is using the virtual device interface internally during the seeding operation (the VDI prefix stands for Virtual Device Interface). If SQL Server uses a backup stream and the VDI interface internally during the seeding operation, it may explain why it has to limit concurrent backups in our case.

How can we check whether SQL Server uses VDI during a direct seeding operation? Well, we know that using VDI implies loading sqlvdi.dll. So my first idea consisted of checking whether the corresponding dll is loaded as a module by the sqlservr.exe process, and the tasklist tool is a good candidate for the job.
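
The check itself is a one-liner; tasklist's /M switch lists the processes that have a given module loaded:

tasklist /M sqlvdi.dll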

blog 100 - AG direct seeding - 4 - direct seeding sqlvdi dll

The above picture seems to confirm my assumption, but this test is not perfect. Indeed, we may also claim that something else could have triggered the load of sqlvdi.dll. So my second idea was to capture a stack dump with the windbg tool while running the seeding session, and here is what I was able to see:

blog 100 - AG direct seeding - 5 - direct seeding and windbg

We may recognize CHadrBuildSourceStateLimitConcurrentBackups (similar to the LIMIT_CONCURRENT_BACKUPS value from the hadr_automatic_seeding_state_transition event), which is passed as a parameter to the function CallFunctAndResumeBuildTask(). Let’s also notice the use of the sqlmin!CHadrBuildSourceStateSeeding::CreateVdiSession function, which seems to confirm that SQL Server uses a VDI session during the seeding process.

Ok, after this funny test, let’s get back to something more important to keep in mind: we know that direct seeding will limit concurrent backups. This implies that our existing RPO may be impacted, especially if the seeding operation takes a very long time.

Happy database seeding!!




The post SQL Server 2016 AlwaysOn: Direct seeding and backup considerations appeared first on the dbi services blog.

Enrich Your Fluid Applications with Third-Party Integrations

PeopleSoft Technology Blog - Tue, 2016-07-12 14:04

Content provided by Balaji Pattabhiraman

PeopleSoft applications teams are delivering lots of great content in their images for the Fluid User experience. In addition, though, there are many things customers can do to enhance their applications to meet unique requirements. This post is the next in a series of capabilities that PeopleTools provides.

In this post we'll cover how to invoke JavaScript APIs to integrate third-party elements within PeopleSoft Fluid pages. In this example, we're calling the Google Maps API for resource location. PeopleTools 8.54 (and higher) provides the ability to create responsive Fluid components. In addition to making the layout responsive, PeopleTools provides developers with mechanisms to run any JavaScript API (your own or third party) within your PeopleSoft application.

We'll cover the following in this example:

  • Build a sample fluid application with two panel layout. The left panel loads a set of projects along with details such as the number of resources. The right panel loads the map.
  • Set a default so the user sees the map centered to the user's current location using HTML5 Geolocation. Resources nearby their current location are plotted on the map.
  • Selecting a Project on the left panel updates the map and indicates where resources are located.
  • Searching for a location (using the google places lookup API) centers the map to the selected location and shows the resources near the location in the map.

Note that this blog does not get into the details of the Google Maps or Places JavaScript APIs, nor does Oracle certify usage of Google Maps with PeopleSoft. This is a technology demonstration of how to integrate PeopleSoft with third-party JavaScript APIs. Please read the following on Custom JavaScript Guidelines.

First let’s look at the completed application and how it behaves for users, then we’ll show how you build a page to work like this.

Below is a PeopleSoft page. When the page loads, the HTML5 geolocation prompt pops up asking for permission to use the user's current location. (If this is not granted, the map will center on the resource of the first project.)

Select Allow so that the map is centered based on the user's current location and the resources nearest that location.

Click on the caret tab to expand the left panel.  This shows us a list of projects.

In this scenario, we’ve selected the Implementation Project.  Notice that this project has 21 resources. The map now shows the resources by locations (using markers).

Clicking a marker (flag) shows the resource details.  In this case, we click on the Flag located on San Jose, CA, which shows us the 13 resources located at that location.


You can also search for resources at a different location using the Location box. This will use the Place Finder to suggest locations as you type. Here we’ve entered Tokyo, Japan. This loads the map centered on that location and the resources available nearest to the location.

In this example, the resource isn’t exactly geographically close!

So how did we achieve this?  The following are the high-level steps used to build this.

In App Designer, create a two panel layout using the delivered layout page PSL_TWOPANEL.  (Note that we provide many Fluid page templates in App Designer.)

This creates a default two panel layout page, where you can place your page fields.  Remember, in Fluid, designing a page no longer uses a WYSIWYG page design.  Instead, you have a series of group boxes into which you place your page elements.  The style sheets will take care of placement on the displayed screen.

Next we place the required fields on the page. There are three main sections to consider:

  • A group box that will be the map field (named Map Canvas in the screen below)
  • A list grid that will hold the Project list, which is inside the subpage PC_MAP_PRJ_SBF
  • The section on top of the map (named PC_MAP_PNLHDR_SBF) that contains fields like the location

We include them on the page illustrated below. This example is available in a delivered Financials application (although it uses Oracle maps instead of Google maps).

Next we add the PeopleCode to load the list of projects/resources and update the left panel with those details (list grid rows). The APIs to populate the grid rowset are the same as in classic PeopleSoft, so we will skip those details in this blog. In addition, we can specify style classes for the page fields using the Fluid tab (Default Style Name) section on the form fields. These style classes can be delivered out of the box, or you can create your own style sheet and include it for your page using the AddStyleSheet API in component post-build.

Now we get to the interesting part: populating the map. First we define the JavaScript in an HTML file that will load the map initially or upon user action. You can write any JavaScript function routines; you can look at examples in the Google Maps JavaScript API doc here. These methods are defined in the HTML object PC_MAP_JS. The method pc_loadMap is responsible for geolocation, populating the map, enabling the Places API, and calling the routine that sets the markers.

The following lines enable the page's Location edit box (with page field name PACINPUT) as the control for Google Places API input:

var input = document.getElementById('PACINPUT');
var searchBox = new google.maps.places.SearchBox(input);

The following lines associate the page's map canvas with the map. Initially, the map is centered on the current location:

navigator.geolocation.getCurrentPosition(function(position) {
    posGeo = { lat: position.coords.latitude, lng: position.coords.longitude };
    var map = new google.maps.Map(document.getElementById('%formnamedivPC_MAP_NUI_WRK2_MAP_CANVAS'), {
        center: posGeo,
        zoom: 10,
        mapTypeId: google.maps.MapTypeId.ROADMAP
    });
});

The following lines of code add the markers by looping through the resource details for a given project. The details are read from the database and stored in a page field in the component PeopleCode. It is read here in JavaScript to populate the markers:

for (var i = 0; i < gdata_PC_MAP_DATA_DVW$0.length; i++) {
    var data = window.gdata_PC_MAP_DATA_DVW$0[i];
    var resource = data;  // assumption: each row array carries the resource's lat, lng and title fields
    var marker = new google.maps.Marker({
        position: { lat: parseFloat(resource[1]), lng: parseFloat(resource[2]) },
        map: map,
        icon: image,
        shape: shape,
        title: resource[5]
    });
}

The lines below attach the click event for a map marker, which opens a PeopleSoft modal page to display data. openResourceInfoWindow does a form submit (submitAction_%formname) to invoke PeopleCode from JavaScript and open the modal:

marker.addListener('click', function() {
    openResourceInfoWindow(marker, data[3], data[4], "M");
});
// Update the map and set the zoom level after adding the markers

Now include the style sheet (if you have custom style classes applied to the page fields) and the JavaScript we just defined, so they are used by the component.

Now call the JavaScript function using AddOnLoadScript when the user selects a project on the left panel as part of the field change PeopleCode. (You can call JavaScript from PeopleCode events in fluid!).  This will reload the map.

You're off and running!

TNS-00583: Valid node checking: unable to parse configuration parameters

Learn DB Concepts with me... - Tue, 2016-07-12 11:19
TNS-12560: TNS:protocol adapter error
TNS-00583: Valid node checking: unable to parse configuration parameters

I received the following errors when trying to start my listener. I verified the listener.ora and sqlnet.ora files and everything seemed to look normal.
[oracle@Linux03 encryption_keystore]$ lsnrctl start

LSNRCTL for Linux: Version - Production on 12-JUL-2016 10:05:32

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Starting /u01/app/oracle/product/ please wait...

TNSLSNR for Linux: Version - Production
System parameter file is /u01/app/oracle/product/
Log messages written to /u01/app/oracle/diag/tnslsnr/Linux03/listener/alert/log.xml
Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=Linux03)(PORT=1521)))
TNS-12560: TNS:protocol adapter error
 TNS-00583: Valid node checking: unable to parse configuration parameters

Listener failed to start. See the error message(s) above...


I had the below entry in my sqlnet.ora file:

[oracle@Linux03 admin]$ cat sqlnet.ora_bak
# sqlnet.ora Network Configuration File: /u01/app/oracle/product/
# Generated by Oracle configuration tools.

ENCRYPTION_WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/app/oracle/admin/$ORACLE_SID/encryption_keystore/)

Solution (in my case):

1. I removed the ENCRYPTION_WALLET_LOCATION entry and the listener started, so I knew there was something wrong with that syntax.
2. I copied the ENCRYPTION_WALLET* content into Notepad++, which helped me realize that I was missing two closing parentheses )) at the end.
3. I added them at the end, started the listener, and it worked.

[oracle@Linux03 encryption_keystore]$ lsnrctl start

LSNRCTL for Linux: Version - Production on 12-JUL-2016 10:31:41

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Starting /u01/app/oracle/product/ please wait...

TNSLSNR for Linux: Version - Production
System parameter file is /u01/app/oracle/product/
Log messages written to /u01/app/oracle/diag/tnslsnr/Linux03/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=Linux03)(PORT=1521)))

Alias                     LISTENER
Version                   TNSLSNR for Linux: Version - Production
Start Date                12-JUL-2016 10:31:41
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/
Listener Log File         /u01/app/oracle/diag/tnslsnr/Linux03/listener/alert/log.xml
Listening Endpoints Summary...
The listener supports no services
The command completed successfully

Also try these if the above trick doesn't help:

1. Check whether the listener.ora or sqlnet.ora file contains any special characters.
2. Check whether the listener.ora or sqlnet.ora file is in the wrong format or has a syntax error.
3. Check whether the listener.ora or sqlnet.ora file has any left-justified parentheses, which are not accepted by the Oracle parser.
Categories: DBA Blogs

Eight Ways To Ensure Your Applications Are Enterprise-Ready

Pythian Group - Tue, 2016-07-12 10:00

When it comes to building database applications and solutions, developers, DBAs, engineers and architects have a lot of new and exciting tools and technologies to play with, especially with the Hadoop and NoSQL environments growing so rapidly.

While it’s easy to geek out about these cool and revolutionary new technologies, at some point in the development cycle you’ll need to stop to consider the real-world business implications of the application you’re proposing. After all, you’re bound to face some tough questions, like:

Why did you choose that particular database for our mission-critical application? Can your team provide 24/7 support for the app? Do you have a plan to train people on this new technology? Do we have the right hardware infrastructure to support the app’s deployment? How are you going to ensure there won’t be any bugs or security vulnerabilities?

If you don’t have a plan for navigating and anticipating these kinds of questions in advance, you’re likely to face difficulty getting approval for and implementing your application.

Any database applications or solutions you build or adopt for your organization must be “enterprise-ready”: secure, stable and scalable with a proven, tested capacity for deployment. They must be easy to administer and easy to support. But how do you make sure that happens?

Here are eight things to consider before declaring your proposed solution enterprise-ready:

  1. Open communications: A close working relationship between the development and operations teams goes a long way toward seamless integration of your database applications. By working together (from start to finish, as early on as possible), you can better anticipate the issues to be solved so your app or solution gets up and running faster.
  2. Platform reliability: Open source databases are great for obvious reasons: they’re free and easily customizable. But if your app is revenue-generating or mission-critical, it’s better to use a tested and proven distribution like DataStax Enterprise for Cassandra, Cloudera or Hortonworks for Hadoop, and Oracle or Percona for MySQL.
  3. Continuous quality: No matter which technology you use to build your app, make sure it passes rigorous quality assurance, scale and performance testing — both initially and with every new release of the software. Your vendor also needs to be proactive when it comes to releasing patches and fixing bugs.
  4. Suitable infrastructure: Consider whether the infrastructure you’re running is appropriate for the app you’re developing. If the database is set to run on multiple nodes of commodity hardware — to cover your bases in case one fails — but your operations team likes to store everything on an expensive SAN device, you might want to look into other alternatives.
  5. Experienced hosting: You’ll want to find a hosting company that is reliable, cost-effective and meets your company’s security policies. It should also have experience hosting the database technology you plan on using; that way, it knows how to respond when issues or challenges arise.
  6. Expert talent: Bring in a team of experts that can support your entire environment. While your operations team may want to take care of some elements themselves (everything up to the OS level, for instance), you’ll still want to show them that you have 24/7 support coverage available if needed. This team should be committed to continuous training and have enough people skilled with your technology to provide uninterrupted coverage.
  7. Comprehensive skills: Your team should be able to check your configurations against best practices for security, performance and availability — but don’t forget to ensure that they’re also set up for the more mundane things like systems monitoring, responding to alerts and fault finding.
  8. Ongoing costs: When tallying the running cost of your application, keep in mind that you need to incorporate the cost of the distributed version, its hosting, and 24/7 support and optimization.

With all the elements that go into getting an application enterprise-ready, it might be easier to work with a reputable partner who has the experience and expertise to help you deploy the right solution for your organization and ensure its long-term success.

Find out how Pythian’s solutions can help you succeed.

Categories: DBA Blogs

ADF BC REST Authentication with JSESSIONID Cookie

Andrejus Baranovski - Tue, 2016-07-12 09:46
I have described how to apply ADF Security to ADF BC REST in my previous post - Oracle JET and ADF BC REST Basic Authentication. Here I will show how you can authenticate on the first request and rely on the JSESSIONID cookie from that request for subsequent calls. This is useful for mobile clients and JET: there is no need to keep user credentials between requests (keeping the cookie is enough), as credentials are sensitive data.

Let's see how it works. In the first request we must authenticate. We are going to use basic authentication with a GET operation and provide user credentials:

Request is authenticated and data is returned:

Along with the data, extra information is returned with the response - a cookie and headers. The JSESSIONID cookie value identifies the authenticated session context. We can use this value for subsequent requests; this way the server treats us as trusted and does not ask us to authenticate again (the same principle as in ADF web). Copy the cookie value:

Now you can close and reopen Postman, to guarantee nothing is shared from the previous request. Remove the authentication header and add a new header called Cookie. The value must be set as JSESSIONID=<cookie value>. Execute the GET operation:

If the session associated with this cookie is still active on the server, you will get the response data as expected (no need to provide user credentials again):
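
The same flow is easy to script outside Postman. Here is a minimal sketch with Python's requests library - the endpoint URL and credentials are placeholders, not taken from the sample app:

import requests

url = 'http://host:7101/restapp/rest/1/Employees'  # hypothetical ADF BC REST endpoint

# first request: basic authentication; the server issues a JSESSIONID cookie
r1 = requests.get(url, auth=('redsam1', 'welcome1'))
jsessionid = r1.cookies.get('JSESSIONID')

# subsequent requests: send only the cookie, no credentials
r2 = requests.get(url, cookies={'JSESSIONID': jsessionid})
print(r2.status_code)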

ADF BC REST sample application - ADFBCRESTApp_v7.zip is protected by ADF Security with authentication and authorization:

REST servlet is mapped with appropriate security constraint in web.xml:

Remote DBA Benefits Comparison Series- Part 4- Can I Still Get Personalized Service Remotely?

Chris Foot - Tue, 2016-07-12 09:25

This is Part 4 of a series that compares in-house personnel, consultants and remote DBA service providers.  In Part 1, we talked about the cost reduction benefits that remote DBA service providers offer customers.  In Part 2, we discussed service continuity and quality as well as the flexibility benefits customers receive when utilizing a remote DBA service provider.  In Part 3, we reviewed the benefits of collective knowledge and deep dive subject matter expertise.

In part 4 of the Remote DBA Benefits Comparison Series, we’ll discuss how remote DBA providers deliver personalized services to customers.  I use RDX’s service delivery approach as an example of how remote DBA providers can exceed the quality of work performed by in-house personnel or consultants.


Subscribe to Oracle FAQ aggregator