
Feed aggregator

PFCLScan Updated and Powerful features

Pete Finnigan - Tue, 2014-11-18 18:35

We have just updated PFCLScan, our company's database security scanner for Oracle databases, to version 1.2, adding some new features, new content and more. We are also working to release another service update in the next couple....[Read More]

Posted by Pete On 04/09/13 At 02:45 PM

Categories: Security Blogs

Oracle Security Training, 12c, PFCLScan, Magazines, UKOUG, Oracle Security Books and Much More

Pete Finnigan - Tue, 2014-11-18 18:35

It has been a few weeks since my last blog post, but don't worry, I am still interested in blogging about Oracle 12c database security and indeed have nearly 700 pages of notes in MS Word related to 12c security....[Read More]

Posted by Pete On 28/08/13 At 05:04 PM

Categories: Security Blogs

Bordering Text

Tim Dexter - Tue, 2014-11-18 16:08

A tough little question appeared on one of our internal mailing lists today that piqued my interest. A customer wanted to place a border around all data fields in their BIP output. Something like this:


Naturally you think of using a table, embedding the field inside a cell and turning the cell border on. That will work, but it will need some finessing to get the cells to stretch or shrink depending on the width of the runtime text. Then things might get a bit squirly (technical term) if the text is wide enough to force a new line at the page edge. Anyway, it will get messy. So I took a look at the problem to see if the fields can be isolated in the page as far as the XSL-FO code is concerned. If the field can be isolated in its own XSL block then we can change attribute values to get the borders to show just around the field. Sadly not.

This is an embedded field YEARPARAM in a sentence.

translates to

 <fo:inline height="0.0pt" style-name="Normal" font-size="11.0pt" style-id="s0" white-space-collapse="false" 
  font-family-generic="swiss" font-family="Calibri" 
  xml:space="preserve">This is an embedded field <xsl:value-of select="(.//YEARPARAM)[1]" xdofo:field-name="YEARPARAM"/> in a sentence.</fo:inline>


If we change the border on this, it will apply to the complete sentence, not just the field.
So how could I isolate that field? Well, we could actually do anything to the field: embolden, italicize, etc. I settled on changing the background color (it's easy to change it back with a single attribute call). Using the highlighter tool on the Home tab in Word I changed the field to have a yellow background. I now have:

 This gives me the following code.

<fo:block linefeed-treatment="preserve" text-align="start" widows="2" end-indent="5.4pt" orphans="2"
 start-indent="5.4pt" height="0.0pt" padding-top="0.0pt" padding-bottom="10.0pt" xdofo:xliff-note="YEARPARAM" xdofo:line-spacing="multiple:13.8pt"> 
 <fo:inline height="0.0pt" style-name="Normal" font-size="11.0pt" style-id="s0" white-space-collapse="false" 
  font-family-generic="swiss" font-family="Calibri" xml:space="preserve">This is an embedded field </fo:inline>
  <fo:inline height="0.0pt" style-name="Normal" font-size="11.0pt" style-id="s0" white-space-collapse="false" 
   font-family-generic="swiss" font-family="Calibri" background-color="#ffff00">
    <xsl:attribute name="background-color">white</xsl:attribute> <xsl:value-of select="(.//YEARPARAM)[1]" xdofo:field-name="YEARPARAM"/> 
  </fo:inline> 
 <fo:inline height="0.0pt" style-name="Normal" font-size="11.0pt" style-id="s0" white-space-collapse="false" 
  font-family-generic="swiss" font-family="Calibri" xml:space="preserve"> in a sentence.</fo:inline> 
</fo:block> 

Now that we have the field isolated, we can easily set other attributes that will be applied only to the field and nothing else. I added the following to my YEARPARAM field:

<?attribute@inline:background-color;'white'?> >>> turn the background back to white
<?attribute@inline:border-color;'black'?> >>> turn on all borders and make black
<?attribute@inline:border-width;'0.5pt'?> >>> make the border 0.5 point wide
<?YEARPARAM?> >>> my original field

The @inline tells the BIP XSL engine to apply the attribute values only to the immediate 'inline' code block, i.e. the field. Collapse all of this code into a single line in the field, as shown below.
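For illustration, the collapsed field content would look something like this on a single line (a sketch assembled from the commands above; exact spacing in the RTF template may vary):

<?attribute@inline:background-color;'white'?><?attribute@inline:border-color;'black'?><?attribute@inline:border-width;'0.5pt'?><?YEARPARAM?>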
When I run the template now, I see the following:

 


It's a little convoluted, but if you ignore the geeky code explanation and just highlight/copy-and-paste, it's pretty straightforward.

Categories: BI & Warehousing

Capturing SQL Server IO Latencies for a Period of Time with PowerShell

Pythian Group - Tue, 2014-11-18 14:54

Today’s blog post is a demonstration of the PowerShell approach to Paul Randal’s recent blog post, Capturing IO Latencies for a Period of Time. Since wait statistics were properly implemented in SQL Server, they have become a powerful resource for diagnosing and troubleshooting SQL Server issues.

Paul Randal is, IMHO, one of  the best names in this technology.  I requested his approval to use and modify his code to demonstrate another approach that can be used not only in this situation, but in others where you need to capture data in an interval and  don’t want to use SQL Server resources. Paul, as always, was very kind to allow me to use his implementation to demonstrate mine—thanks a lot, Paul.

In his excellent blog post Capturing IO Latencies for a Period of Time, Paul demonstrates how to capture the wait stats in an interval of 30  minutes using only T-SQL.

If you have a very busy system, perhaps leaving the job of triggering the gathering to a resource other than SQL Server may be a good idea. Those who know me know that I am a PowerShell lover, so let’s take a look at how to do that using PowerShell.

Before we proceed, I strongly encourage readers to check out my good friend Ed Wilson’s fantastic blog post Use Asynchronous Event Handling in PowerShell to understand timer event handlers, asynchronous .NET events, or the cmdlets to work with events.

A quick note about the .NET System.Timers.Timer class.

It is simple: the Timer class in the System.Timers namespace generates recurring events at a specified interval. In other words, I can set up an event to be fired every x milliseconds.

The first thing to do is to create the System.Timers.Timer object and define the interval that we want. For now, let’s use 100 milliseconds by setting the Interval property.

$Timer = New-Object System.Timers.Timer -Property @{ Interval = 100; AutoReset = $true }

We instantiated the timer and set up the interval. Now we need to register the object (in this case the $Timer object) that will fire the event and execute the action. For that we will use the Register-ObjectEvent cmdlet. For the EventName parameter we will use Elapsed, which is detailed in Use Asynchronous Event Handling in PowerShell and also in Manage Event Subscriptions with PowerShell.

First, let’s just print something on the screen:

Register-ObjectEvent -InputObject $Timer -Action { Write-Host 'Holy PowerShell. It Worked!!!!' } -EventName Elapsed -SourceIdentifier stateful

Id              Name            State      HasMoreData     Location             Command

--              ----            -----      -----------     --------             -------

1               93ce50c3-6f5… NotStarted False                                 Write-Host ‘Holy Powe…

If you simply run this code, you will realize that nothing happens, no matter how long you wait.

Wait, is PowerShell broken? No, my dear friend; the .NET class is so beautiful that it allows you to start and stop the raising of the event. Yes, you can start the event so that it recurs at the configured interval, and you can also stop it.

So let’s start:

$Timer.Start()

Holy PowerShell. It Worked!!!!

Holy PowerShell. It Worked!!!!

Holy PowerShell. It Worked!!!!

Holy PowerShell. It Worked!!!!

Holy PowerShell. It Worked!!!!

Holy PowerShell. It Worked!!!!

Holy PowerShell. It Worked!!!!

And to stop it:

$Timer.Stop()

 

And to unregister the event, use:

Unregister-Event -SourceIdentifier stateful

How PowerCool is that?

Let’s get back to our example. Using Paul’s code, let’s first create the table to store the data. I made some changes to store the date and time, so you can also keep historical data if you want, and the table is a physical one.

SELECT [database_id], [file_id], [num_of_reads], [io_stall_read_ms],

[num_of_writes], [io_stall_write_ms], [io_stall],

[num_of_bytes_read], [num_of_bytes_written], [file_handle]

INTO IOLatency

FROM sys.dm_io_virtual_file_stats(NULL, NULL);

GO

alter table IOLatency add [TimeCapture] datetime default getdate()

alter table IOLatency add [Identifier] bigint

 

I added this Identifier column because my T-SQL is not so great and I want (like Paul’s example) to get the difference between the last two samples taken 30 minutes apart, so this column will have the same value for the last two gatherings and I can easily write the T-SQL condition for that.

Now let’s play with PowerShell. We need to import the SQLPS module to use the Invoke-Sqlcmd cmdlet. In the Action parameter of the Register-ObjectEvent cmdlet I am using a scriptblock variable to keep things readable. I am also using the same interval as Paul, 30 minutes, so the difference will always be between the last two gatherings taken 30 minutes apart.

 

$Timer = New-Object System.Timers.Timer -Property @{ Interval = 1800000; AutoReset = $true } # 1800000 ms = 30 minutes

#Create the ScriptBlock variable to run in the action parameter in the register-objectevent cmdlet

$Action = {

#Import the SQLPS Module

Import-Module SQLPS -ErrorAction SilentlyContinue -DisableNameChecking;

# variable to count the number of events

$Script:counter += 1;

# if the event count is more than 2, increase the identity

if ($counter -eq 3) {

$script:Identity += 1;

$counter = 1

}

#Define the TSQL

$Tsql = @"

insert into IOLatency ([database_id], [file_id], [num_of_reads], [io_stall_read_ms],

[num_of_writes], [io_stall_write_ms], [io_stall],

[num_of_bytes_read], [num_of_bytes_written], [file_handle],[Identifier])

SELECT [database_id],[file_id], [num_of_reads], [io_stall_read_ms],

[num_of_writes], [io_stall_write_ms], [io_stall],

[num_of_bytes_read], [num_of_bytes_written], [file_handle],$($Identity)

FROM sys.dm_io_virtual_file_stats (NULL, NULL);

"@

#Invoke the TSQL

Invoke-Sqlcmd -ServerInstance vader -Database Test -Query $Tsql

}

Register-ObjectEvent -InputObject $Timer -Action $Action -EventName Elapsed -SourceIdentifier stateful

Now it is just a matter of starting it:

$Timer.Start()

And every 30 minutes the table IOLatency will be populated. If you want to stop the gathering but not unregister the event:

$Timer.Stop()

This way, if you want to start again, just call $Timer.Start().

Now it is time to get the data, using Paul’s query with a change to join on the same identifier:

 

WITH [DiffLatencies] AS

(SELECT

-- Files that weren’t in the first snapshot

[ts2].[database_id],

[ts2].[file_id],

[ts2].[Identifier],

[ts2].[num_of_reads],

[ts2].[io_stall_read_ms],

[ts2].[num_of_writes],

[ts2].[io_stall_write_ms],

[ts2].[io_stall],

[ts2].[num_of_bytes_read],

[ts2].[num_of_bytes_written]

FROM IOLatency AS [ts2]

LEFT OUTER JOIN IOLatency AS [ts1]

ON [ts2].[file_handle] = [ts1].[file_handle]

and ts2.Identifier = ts1.Identifier

WHERE [ts1].[file_handle] IS NULL

UNION

SELECT

-- Diff of latencies in both snapshots

[ts2].[database_id],

[ts2].[file_id],

[ts2].[Identifier],

[ts2].[num_of_reads] - [ts1].[num_of_reads] AS [num_of_reads],

[ts2].[io_stall_read_ms] - [ts1].[io_stall_read_ms] AS [io_stall_read_ms],

[ts2].[num_of_writes] - [ts1].[num_of_writes] AS [num_of_writes],

[ts2].[io_stall_write_ms] - [ts1].[io_stall_write_ms] AS [io_stall_write_ms],

[ts2].[io_stall] - [ts1].[io_stall] AS [io_stall],

[ts2].[num_of_bytes_read] - [ts1].[num_of_bytes_read] AS [num_of_bytes_read],

[ts2].[num_of_bytes_written] - [ts1].[num_of_bytes_written] AS [num_of_bytes_written]

FROM IOLatency AS [ts2]

LEFT OUTER JOIN IOLatency AS [ts1]

ON [ts2].[file_handle] = [ts1].[file_handle]

and ts2.Identifier = ts1.Identifier

WHERE [ts1].[file_handle] IS NOT NULL)

SELECT

DB_NAME([vfs].[database_id]) AS [DB],

LEFT([mf].[physical_name], 2) AS [Drive],

[mf].[type_desc],

[num_of_reads] AS [Reads],

[num_of_writes] AS [Writes],

[ReadLatency(ms)] =

CASE WHEN [num_of_reads] = 0

THEN 0 ELSE ([io_stall_read_ms] / [num_of_reads]) END,

[WriteLatency(ms)] =

CASE WHEN [num_of_writes] = 0

THEN 0 ELSE ([io_stall_write_ms] / [num_of_writes]) END,

/*[Latency] =

CASE WHEN ([num_of_reads] = 0 AND [num_of_writes] = 0)

THEN 0 ELSE ([io_stall] / ([num_of_reads] + [num_of_writes])) END,*/

[AvgBPerRead] =

CASE WHEN [num_of_reads] = 0

THEN 0 ELSE ([num_of_bytes_read] / [num_of_reads]) END,

[AvgBPerWrite] =

CASE WHEN [num_of_writes] = 0

THEN 0 ELSE ([num_of_bytes_written] / [num_of_writes]) END,

/*[AvgBPerTransfer] =

CASE WHEN ([num_of_reads] = 0 AND [num_of_writes] = 0)

THEN 0 ELSE

(([num_of_bytes_read] + [num_of_bytes_written]) /

([num_of_reads] + [num_of_writes])) END,*/

[mf].[physical_name],

[vfs].identifier

FROM [DiffLatencies] AS [vfs]

JOIN sys.master_files AS [mf]

ON [vfs].[database_id] = [mf].[database_id]

AND [vfs].[file_id] = [mf].[file_id]

-- ORDER BY [ReadLatency(ms)] DESC

ORDER BY [Identifier],[WriteLatency(ms)] DESC;

GO

 

If you want to get only one sample, just add a condition to the join:

ON [vfs].[database_id] = [mf].[database_id]

AND [vfs].[file_id] = [mf].[file_id]

AND [vfs].identifier = <IdentifierNumber>

 

This way you can also know at what time the gathering was made, just by checking the TimeCapture column for that identifier number. In other words, you also have historical data that you can query to get the differences or make any calculations that you need.

The data is stored—you can play with whatever query you want and need.
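For example, a quick way to see which snapshots have been collected so far might be something like this (a sketch against the IOLatency table created above):

SELECT [Identifier],
       MIN([TimeCapture]) AS [FirstSample],
       MAX([TimeCapture]) AS [LastSample],
       COUNT(*) AS [SampleRows]
FROM IOLatency
GROUP BY [Identifier]
ORDER BY [Identifier];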

If you are thinking of gathering perfmon counters, it is possible too. Take a look at my blog post on Simple-Talk, The PoSh DBA – Specifying and Gathering Performance Counters and just play with the timer class.
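As a rough sketch of that idea, the same timer pattern could call Get-Counter inside the action. Note that the CounterLog table, the counter path, and the one-minute interval below are illustrative assumptions, not something from the original posts:

$Timer = New-Object System.Timers.Timer -Property @{ Interval = 60000; AutoReset = $true } # every minute

$Action = {
# Import the SQLPS module so Invoke-Sqlcmd is available
Import-Module SQLPS -ErrorAction SilentlyContinue -DisableNameChecking
# Capture one perfmon sample and store it in a (hypothetical) CounterLog table
$sample = (Get-Counter '\PhysicalDisk(_Total)\Avg. Disk sec/Read').CounterSamples[0]
$Tsql = "insert into CounterLog (CounterPath, CounterValue) values ('$($sample.Path)', $($sample.CookedValue))"
Invoke-Sqlcmd -ServerInstance vader -Database Test -Query $Tsql
}

Register-ObjectEvent -InputObject $Timer -Action $Action -EventName Elapsed -SourceIdentifier perfmon
$Timer.Start()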

I guess you realize that you can use this approach in any scenario where you need to capture data on a busy system while using little (or none) of SQL Server's resources. IMHO, that can be a good approach in those kinds of environments.

Again, I would like to thank Paul Randal for kindly allowing me to use and modify his code and blog post to demonstrate my approach, and of course the Hey, Scripting Guy! blog.

 

Remember kids… If it is PowerCool, it is PowerShell!

Categories: DBA Blogs

Android Update: 5.0

Dietrich Schroff - Tue, 2014-11-18 14:27
Today my Nexus 7 got the upgrade to Android 5.0:
 After this upgrade, many things changed, like the system settings:



But everything is slower than before.... ;-(

For a complete history of all updates visit this posting.


Gilbane Conference 2014 - Boston

WebCenter Team - Tue, 2014-11-18 11:36

Oracle is proud to be a sponsor of the 2014 Gilbane Content and Digital Experience Conference.  The Gilbane Conference brings together industry experts, content managers, marketers, marketing technologists, technology and executive strategists to share experiences and explore the most effective technologies and approaches to help enterprises build agile, sustainable digital experiences.

Event Details: Oracle Sponsors Gilbane 2014: Content and the Digital Experience
Dec. 2nd – 4th
Renaissance Boston Waterfront Hotel, 606 Congress Street, Boston, MA 02210
Show Registration | Evite
Oracle Presence: 
  • Oracle Booth: Oracle WebCenter: Creating Next-Gen Digital Experiences
  • Oracle Presentation & Product Lab: Creating Next-Gen Digital Experiences - Wednesday, December 3, 12:40 p.m. – 1:25 p.m.
  • Presented by: Chris Preston, Senior Director, Oracle & Gary Matsell, Principal Sales Consultant WebCenter, Oracle
  • Attendees will learn how to:
    • Attract and engage customers with relevant, interactive, and intuitive experiences
    • Deliver self-service experiences that are personalized, integrated, and secure
    • Engage and convert customers with consistent, contextual, and multi-channel experiences
Are you attending the Gilbane Conference? We would love to connect with you to talk about your Digital Experience & Engagement initiatives!

Better Ed Tech Conversations

Michael Feldstein - Tue, 2014-11-18 09:44

This is another follow-up to the comments thread on my recent LMS rant. As usual, Kate Bowles has insightful and empathetic comments:

…From my experience inside two RFPs, I think faculty can often seem like pretty raucous bus passengers (especially at vendor demo time) but in reality the bus is driven by whoever’s managing the RFP, to a managed timetable, and it’s pretty tightly regulated. These constraints are really poorly understood and lead to exactly the predictable and conservative outcomes you observe. Nothing about the process favours rethinking what we do.

Take your focus on the gradebook, which I think is spot on: the key is how simply I can pull grades in, and from where. The LMS we use is the one with the truly awful, awful gradebook. Awful user experience, awful design issues, huge faculty development curve even to use it to a level of basic competence.

The result across the institution is hostility to making online submission of assignments the default setting, as overworked faculty look at this gradebook and think: nope.

So beyond the choosing practice, we have the implementation process. And nothing about this changes the mind of actual user colleagues. So then the institutional business owner group notices underuse of particular features—oh hey, like online submission of assignments—and they say to themselves: well, we need a policy to make them do it. Awfulness is now compounding.

But then a thing happens. Over the next few years, faculty surreptitiously develop a workable relationship with their new LMS, including its mandated must-use features. They learn how to do stuff, how to tweak and stretch and actually enjoy a bit. And that’s why when checklist time comes around again, they plead to have their favourite corner left alone. They only just figured it out, truly.

If institutions really want to do good things online, they need to fund their investigative and staff development processes properly and continuously, so that when faculty finally meet vendors, all can have a serious conversation together about purpose, before looking at fit.

This comment stimulated a fair bit of conversation, some of which continued on the comments thread of Jonathan Rees’ reply to my post.

The bottom line is that there is a vicious cycle. Faculty, who are already stretched to the limit (and beyond) with their workloads, are brought into a technology selection process that tends to be very tactical and time-constrained. Their response, understandably, tends to be to ask for things that will require less time from them (like an easier grade book, for example). When administrators see that they are not getting deep and broad adoption, they tend to mandate technology use. Which makes the problem worse rather than better because now faculty are forced to use features that take up more of their time without providing value, leaving them with less time to investigate alternatives that might actually add value. Round and round it goes. Nobody stops and asks, “Hey, do we really need this thing? What is it that we do need, and what is the most sensible way of meeting our needs?”

The only way out of this is cultural change. Faculty and administrators alike have to work together toward establishing some first principles around which problems the technology is supposed to help them solve and what a good solution would look like. This entails investing time and university money in faculty professional development, so that they can learn what their options are and what they can ask for. It entails rewarding faculty for their participation in the scholarship of teaching. And it entails faculty seeing educational technology selection and policy as something that is directly connected to their core concerns as both educational professionals and workers.

Sucky technology won’t fix itself, and vendors won’t offer better solutions if customers can’t define “better” for them. Nor will open source projects fare better. Educational technology only improves to the extent that educators develop a working consensus regarding what they want. The technology is a second-order effect of the community. And by “community,” I mean the group that collectively has input on technology adoption decisions. I mean the campus community.

The post Better Ed Tech Conversations appeared first on e-Literate.

Partner Webcast – Business Continuity with Oracle Weblogic 12c

Business Continuity is a vitally important requirement for modern enterprises and organizations. Modern IT infrastructure should meet strong objectives and requirements in order to continue to...

We share our skills to maximize your revenue!
Categories: DBA Blogs

12c: Enhancements to Partition Exchange Load

Oracle in Action - Tue, 2014-11-18 04:43


Statistics for Partitioned Tables
Gathering statistics on partitioned tables consists of gathering statistics at both the table level and partition level. Prior to Oracle Database 11g, whenever a new partition was added, the entire table had to be scanned to refresh table-level statistics which could be very expensive, depending on the size of the table.

Incremental Global Statistics
With the introduction of incremental global statistics in 11g, the database, instead of performing a full table scan to compute global statistics, can derive global statistics from the partition-level statistics. Some of the statistics, for example the number of rows, can be accurately derived by aggregating the values from partition statistics. However, the NDV of a column cannot be derived by aggregating partition-level NDVs (for example, two partitions may each have 3 distinct values, yet the table-level NDV can be anything from 3 to 6 depending on how many values they share). Hence, a structure called a synopsis is maintained by the database for each column at the partition level, which can be viewed as a sample of distinct values. The synopses for various partitions are merged by the database to accurately derive the NDV for each column.

Hence, when a new partition is added to a table, the database

  • gathers statistics and creates synopses for the newly added partition,
  • retrieves synopses for the existing partitions of the table and
  • aggregates the partition-level statistics and synopses to create global statistics.

Thus, the need to scan the entire table to gather table level statistics on adding a new partition has been eliminated.

However, if partition exchange loads are performed and statistics for source table are available, statistics still need to be gathered for the partition after the exchange to obtain its synopsis.

Enhancements in Oracle 12c
Oracle Database 12c introduces new enhancements for maintaining incremental statistics. Now, DBMS_STATS can create a synopsis on a non-partitioned table as well. As a result, if you are using partition exchange loads, the statistics / synopsis for the source table will become the partition level statistics / synopsis after the load, so that the database can maintain incremental statistics without having to explicitly gather statistics on the partition after the exchange.

Let’s demonstrate …

Overview:

Source non-partitioned table : HR.SRC_TAB
Destination partitioned table: HR.PART_TAB
Destination partition        : PMAR

– Create a partitioned table HR.PART_TAB with 3 partitions

  • only 2 partitions contain data initially
  • set preference incremental = true
  • gather stats for the table – gathers statistics and synopses for 2 partitions

– create a non-partitioned table HR.SRC_TAB which will be used to load the 3rd partition using partition exchange

  •  Set table preferences for HR.SRC_TAB
    • INCREMENTAL = TRUE
    • INCREMENTAL_LEVEL = TABLE
  • Gather stats for the source table: DBMS_STATS gathers table-level synopses also for the table

– Perform the partition exchange
– After the exchange, the new partition has both statistics and a synopsis.
– Gather statistics for PART_TAB – employs partition-level statistics and synopses to derive global statistics.

Implementation

– Create and populate partitioned table part_tab with 3 partitions
PJAN, PFEB and PMAR

SQL>conn hr/hr

drop table part_tab purge;
create table part_tab
(MNTH char(3),
ID number,
txt char(10))
partition by list (mnth)
(partition PJAN values ('JAN'),
partition PFEB values ('FEB'),
partition PMAR values ('MAR'));

insert into part_tab values ('JAN', 1, 'JAN1');
insert into part_tab values ('JAN', 2, 'JAN2');
insert into part_tab values ('JAN', 3, 'JAN3');

insert into part_tab values ('FEB', 2, 'FEB2');
insert into part_tab values ('FEB', 3, 'FEB3');
insert into part_tab values ('FEB', 4, 'FEB4');
commit;

– Note that

  •   partition PMAR does not have any data
  •  there are 4 distinct values in column ID i.e. 1,2,3 and 4
select 'PJAN' Partition, mnth, id from part_tab partition (PJAN)
union
select 'PFEB' Partition, mnth, id from part_tab partition (PFEB)
union
select 'PMAR' Partition, mnth, id from part_tab partition (PMAR)
order by 1 desc;

PART MNT ID
---- --- ----------
PJAN JAN 1
PJAN JAN 2
PJAN JAN 3
PFEB FEB 2
PFEB FEB 3
PFEB FEB 4

– Set preference Incremental to true for the table part_tab

SQL>begin
dbms_stats.set_table_prefs ('HR','PART_TAB','INCREMENTAL','TRUE');
end;
/

select dbms_stats.get_prefs ('INCREMENTAL','HR','PART_TAB') from dual;

DBMS_STATS.GET_PREFS('INCREMENTAL','HR','PART_TAB')
----------------------------------------------------
TRUE

-- Gather statistics for part_tab

SQL> exec dbms_stats.gather_table_stats('HR','PART_TAB');

– Note that global statistics have been gathered and the table has been analyzed at 16:02:31

SQL>alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';

col table_name for a12
select table_name, num_rows, last_analyzed from user_tables
where table_name='PART_TAB';

TABLE_NAME NUM_ROWS LAST_ANALYZED
------------ ---------- --------------------
PART_TAB 6 17-nov-2014 16:02:31

– A full table scan was performed and stats were gathered for each of the partitions
All the partitions have been analyzed at the same time as the table, i.e. at 16:02:31

SQL> col partition_name for a15

select partition_name, num_rows,last_analyzed
from user_tab_partitions
where table_name = 'PART_TAB' order by partition_position;

PARTITION_NAME NUM_ROWS LAST_ANALYZED
--------------- ---------- --------------------
PJAN 3 17-nov-2014 16:02:31
PFEB 3 17-nov-2014 16:02:31
PMAR 0 17-nov-2014 16:02:31

– NUM_DISTINCT correctly reflects that there are 4 distinct values in column ID

SQL> col column_name for a15
select TABLE_NAME, COLUMN_NAME, NUM_DISTINCT
from user_tab_col_statistics
where table_name = 'PART_TAB' and column_name = 'ID';

TABLE_NAME COLUMN_NAME NUM_DISTINCT
------------ --------------- ------------
PART_TAB ID 4

– Create source unpartitioned table SRC_TAB
– Populate SRC_TAB with records for mnth = MAR
and introduce two new values for column ID i.e. 0 and 5

SQL>drop table src_tab purge;
create table src_tab
(MNTH char(3),
ID number,
txt char(10));

insert into src_tab values ('MAR', 0, 'MAR0');
insert into src_tab values ('MAR', 2, 'MAR2');
insert into src_tab values ('MAR', 3, 'MAR3');
insert into src_tab values ('MAR', 5, 'MAR5');
commit;

– Set preferences for table src_tab

  • INCREMENTAL = TRUE
  • INCREMENTAL_LEVEL = TABLE
SQL>begin
dbms_stats.set_table_prefs ('HR','SRC_TAB','INCREMENTAL','TRUE');
dbms_stats.set_table_prefs ('HR','SRC_TAB','INCREMENTAL_LEVEL','TABLE');

end;
/

col incremental for a15
col incremental_level for a30

select dbms_stats.get_prefs ('INCREMENTAL','HR','SRC_TAB') incremental,
dbms_stats.get_prefs ('INCREMENTAL_LEVEL','HR','SRC_TAB') incremental_level
from dual;

INCREMENTAL INCREMENTAL_LEVEL
--------------- ------------------------------
TRUE TABLE

– Gather stats and a synopsis for table SRC_TAB and note that the table is analyzed at 16:06:33

SQL>exec dbms_stats.gather_table_stats('HR','SRC_TAB');

col table_name for a12
select table_name,num_rows, last_analyzed from user_tables
where table_name='SRC_TAB';

TABLE_NAME NUM_ROWS LAST_ANALYZED
------------ ---------- --------------------
SRC_TAB 4 17-nov-2014 16:06:33

– Exchange partition –

SQL>alter table part_tab exchange partition PMAR with table SRC_TAB;

– Note that table level stats for part_tab are still as earlier
as stats have not been gathered for it after partition exchange

SQL> col table_name for a12
select table_name, num_rows, last_analyzed from user_tables
where table_name='PART_TAB';

TABLE_NAME NUM_ROWS LAST_ANALYZED
------------ ---------- --------------------
PART_TAB 6 17-nov-2014 16:02:31

– NDV for col ID is still same as earlier i.e. 4 as stats
have not been gathered for table after partition exchange

SQL> col column_name for a15
select TABLE_NAME, COLUMN_NAME, NUM_DISTINCT
from user_tab_col_statistics
where table_name = 'PART_TAB' and column_name = 'ID';

TABLE_NAME COLUMN_NAME NUM_DISTINCT
------------ --------------- ------------
PART_TAB ID 4

– Note that stats for partition PMAR have been copied from
src_tab. The LAST_ANALYZED column for PMAR has been updated
and shows the same value as for table src_tab, i.e. 16:06:33.
Also, num_rows is shown as 4.

SQL> col partition_name for a15

select partition_name, num_rows,last_analyzed
from user_tab_partitions
where table_name = 'PART_TAB' order by partition_position;

PARTITION_NAME NUM_ROWS LAST_ANALYZED
--------------- ---------- --------------------
PJAN 3 17-nov-2014 16:02:31
PFEB 3 17-nov-2014 16:02:31
PMAR 4 17-nov-2014 16:06:33

– Gather stats for table part_tab

SQL>exec dbms_stats.gather_table_stats('HR','PART_TAB');

– While gathering stats for the table, partitions have not been
scanned as indicated by the same value as earlier in column LAST_ANALYZED.

SQL> col partition_name for a15

select partition_name, num_rows,last_analyzed
from user_tab_partitions
where table_name = 'PART_TAB' order by partition_position;

PARTITION_NAME NUM_ROWS LAST_ANALYZED
--------------- ---------- --------------------
PJAN 3 17-nov-2014 16:02:31
PFEB 3 17-nov-2014 16:02:31
PMAR 4 17-nov-2014 16:06:33

– Note that num_rows for the table part_tab has been updated by adding up the values from the various partitions using partition-level statistics.
The LAST_ANALYZED column has been updated for the table.

SQL> col table_name for a12
select table_name, num_rows, last_analyzed from user_tables
where table_name='PART_TAB';

TABLE_NAME NUM_ROWS LAST_ANALYZED
------------ ---------- --------------------
PART_TAB 10 17-nov-2014 16:11:26

– NDV for column ID has been updated to 6 using the synopsis for partition PMAR as copied from table src_tab

SQL> col column_name for a15
select TABLE_NAME, COLUMN_NAME, NUM_DISTINCT
from user_tab_col_statistics
where table_name = 'PART_TAB' and column_name = 'ID';

TABLE_NAME COLUMN_NAME NUM_DISTINCT
------------ --------------- ------------
PART_TAB ID 6

– We can also confirm that we really did use incremental statistics by querying the dictionary table sys.HIST_HEAD$, which should have an entry for each column in the PART_TAB table.

SQL>conn / as sysdba
col tabname for a15
col colname for a15
col incremental for a15

select o.name Tabname , c.name colname,
decode (bitand (h.spare2, 8), 8, 'yes','no') incremental
from sys.hist_head$ h, sys.obj$ o, sys.col$ c
where h.obj# = o.obj#
and o.obj# = c.obj#
and h.intcol# = c.intcol#
and o.name = 'PART_TAB'
and o.subname is null;

TABNAME COLNAME INCREMENTAL
--------------- --------------- ---------------
PART_TAB MNTH yes
PART_TAB ID yes
PART_TAB TXT yes

I hope this post was useful.

Your comments and suggestions are always welcome.

References:

http://oracle-randolf.blogspot.in/2012/01/incremental-partition-statistics-review.html
https://docs.oracle.com/database/121/TGSQL/tgsql_stats.htm#TGSQL413
https://blogs.oracle.com/optimizer/entry/incremental_statistics_maintenance_what_statistics
http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-statistics-concepts-12c-1963871.pdf
http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-optimizer-with-oracledb-12c-1963236.pdf
http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-bp-for-stats-gather-12c-1967354.pdf
https://blogs.oracle.com/optimizer/entry/maintaining_statistics_on_large_partitioned_tables
———————————————————————————-

Related Links:

Home

Database 12c Index


The post 12c: Enhancements to Partition Exchange Load appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

My DOAG session re:Server-side JavaScript

Kuassi Mensah - Tue, 2014-11-18 04:31
#DOAG Wed 19/11 17:00 rm HongKong Server-side #JavaScript (#NodeJS) progrm#OracleDB using #nashorn & Avatar.js --#db12c @OracleDBDev #java

Will post a blog shortly re: JavaScript Stored Procedures.

Configuring Python cx_Oracle and mod_wsgi on Oracle Linux

Christopher Jones - Mon, 2014-11-17 23:16

The Web Server Gateway Interface (WSGI) is a standardized interface between web servers and Python web frameworks or applications. Many frameworks including Django support WSGI.

This post is a brief how-to about configuring Apache's mod_wsgi with Python's cx_Oracle driver for Oracle Database. The steps are for Oracle Linux.

  1. Download Instant Client Basic & SDK ZIP files from OTN. For cx_Oracle, use the ZIPs, not the RPMs.

  2. As root, unzip the files to the same directory, e.g. /opt/oracle/instantclient_12_1:

    mkdir /opt/oracle
    cd /opt/oracle
    unzip /tmp/instantclient-basic-linux.x64-12.1.0.2.0.zip
    unzip /tmp/instantclient-sdk-linux.x64-12.1.0.2.0.zip
    
  3. Configure Instant Client:

    cd /opt/oracle/instantclient_12_1
    ln -s libclntsh.so.12.1 libclntsh.so
    
  4. Install the pip package management tool for Python by following pip.readthedocs.org/en/latest/installing.html and downloading get-pip.py. Then run:

    python get-pip.py
    
  5. Install cx_Oracle:

    export LD_RUN_PATH=/opt/oracle/instantclient_12_1
    export ORACLE_HOME=/opt/oracle/instantclient_12_1
    pip install cx_Oracle
    

    The key here is the use of LD_RUN_PATH. This obviates the need to later set LD_LIBRARY_PATH or configure ldconfig for cx_Oracle to find the Instant Client libraries. On Oracle Linux 7, which has Apache 2.4, setting LD_LIBRARY_PATH isn't practical since suEXEC is used. Configuring ldconfig is commonly used but has a potential problem that if multiple Oracle products exist, with possibly differing versions of Oracle libraries on the same machine, then there might be library clashes.

    The cx_Oracle installer overloads the meaning of ORACLE_HOME. This variable is not normally used for Instant Client installations.

    Neither ORACLE_HOME nor LD_RUN_PATH needs to be set at runtime.

  6. Install mod_wsgi:

      yum install mod_wsgi
    
  7. Add this line to /etc/httpd/conf/httpd.conf:

       WSGIScriptAlias /wsgi_test /var/www/html/wsgi_test.py
    
  8. On Oracle Linux 7, start the web server with:

      systemctl start httpd.service
    

    On Oracle Linux 6 use:

      service httpd start
    
  9. Create a test file /var/www/html/wsgi_test.py that connects to your database:

    # -*- coding: utf-8 -*-

    def query():
        import cx_Oracle
        db = cx_Oracle.connect("hr", "welcome", "localhost/orcl")
        cursor = db.cursor()
        cursor.execute("select city from locations where location_id = 2200")
        return cursor.fetchone()[0]

    def wsgi_test(environ, start_response):
        output = query()

        status = '200 OK'
        headers = [('Content-type', 'text/plain'),
                   ('Content-Length', str(len(output)))]
        start_response(status, headers)
        yield output

    application = wsgi_test
    
  10. Load http://localhost/wsgi_test in a browser. The city of the queried location id will be displayed.
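    From the command line, the same check can be done with curl (assuming Apache is listening on the default port and the WSGIScriptAlias above is in place):

      curl http://localhost/wsgi_test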

That's it. Let me know how it works for you.

Information on cx_Oracle can be found here.

Information on Oracle Linux can be found here.

Information on Oracle Database can be found here.

Off May Not Be Totally Off: Is Oracle In-Memory Database 12c (12.1.0.2.0) Faster?

Most Oracle 12c installations will NOT be using the awesome Oracle Database in-memory features available starting in version 12.1.0.2.0. This experiment is about the performance impact of upgrading to 12c but disabling the in-memory features.

Every experiment I have performed comparing buffer processing rates clearly shows that any version of 12c performs better than 11g. However, in my previous post, my experiment clearly showed a performance decrease after upgrading from 12.1.0.1.0 to 12.1.0.2.0.

This posting is about why this occurred and what to do about it. The bottom line is this: make sure "off" is "totally off."

Turn it totally off, not partially off
What I discovered is that by default the in-memory column store feature is not "totally disabled." My experiment clearly indicates that unless the DBA takes action, not only could this be a license agreement violation, but a partially disabled in-memory column store slightly slows logical IO processing compared to the 12c non in-memory column store option. Still, any 12c version processes buffers faster than 11g.

My experiment: specific and targeted
This is important: The results I published are based on a very specific and targeted test and not on a real production load. Do not use my results in making a "should I upgrade" decision. That would be stupid and an inappropriate use of my experimental results. But because I publish every aspect of my experiment and it is easily reproducible, it is a valid data point with which to have a discussion and also to highlight various situations that DBAs need to know about.

You can download all my experimental results HERE. This includes the raw sqlplus output, the data values, the free R statistics package commands, spreadsheet with data nicely formatted and lots of histograms.

The instance parameter settings and results
Let me explain this by first showing the instance parameters and then the experimental results. There are some good lessons to learn!

Pay close attention to the inmemory_force and inmemory_size instance parameters.

SQL> show parameter inmemory

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
inmemory_clause_default string
inmemory_force string DEFAULT
inmemory_max_populate_servers integer 0
inmemory_query string ENABLE
inmemory_size big integer 0
inmemory_trickle_repopulate_servers_ integer 1
percent
optimizer_inmemory_aware boolean TRUE

SQL> show sga

Total System Global Area 7600078848 bytes
Fixed Size 3728544 bytes
Variable Size 1409289056 bytes
Database Buffers 6174015488 bytes
Redo Buffers 13045760 bytes

In my experiment using the above settings the median buffers processing rate was 549.4 LIO/ms. Looking at the inmemory_size and the SGA contents, I assumed the in-memory column store was disabled. If you look at the actual experimental result file "Full ds2-v12-1-0-2-ON.txt", which contain the explain plan of the SQL used in the experiment, there is no mention of the in-memory column store being used. My assumption, which I think is a fair one, was that the in-memory column store had been disabled.

As you'll see I was correct, but only partially correct.

The parameter settings below are from when the in-memory column store was totally disabled. The key is changing the inmemory_force parameter value from its default of DEFAULT to OFF.
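For reference, the change itself is roughly the following (a sketch only; check whether your 12c version accepts the parameter dynamically, otherwise set it in the spfile and restart the instance):

SQL> ALTER SYSTEM SET inmemory_force = OFF SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP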

SQL> show parameter inmemory

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
inmemory_clause_default string
inmemory_force string OFF
inmemory_max_populate_servers integer 0
inmemory_query string ENABLE
inmemory_size big integer 0
inmemory_trickle_repopulate_servers_ integer 1
percent
optimizer_inmemory_aware boolean TRUE
SQL> show sga

Total System Global Area 7600078848 bytes
Fixed Size 3728544 bytes
Variable Size 1291848544 bytes
Database Buffers 6291456000 bytes
Redo Buffers 13045760 bytes

Again, the SGA does not show any in-memory area. In my experiment with the above "totally off" settings, the median buffer processing rate was 573.5 LIO/ms compared to the "partially off" 549.4 LIO/ms. Lesson: make sure off is truly off.

It is an unfair comparison!
It is not fair to compare the "partially off" with the "totally off" test results. Now that I know the default inmemory_force must be changed to OFF, the real comparison should be made with the non in-memory column store version 12.1.0.1.0 and the "totally disabled" in-memory column store version 12.1.0.2.0. This is what I will summarize below. And don't forget all 12c versions showed a significant buffer processing increase compared to 11g.

The key question: Should I upgrade?
You may be thinking, if I'm NOT going to license and use the in-memory column store, should I upgrade to version 12.1.0.2.0? Below is a summary of my experimental results followed by the key points.

1. The non column store version 12.1.0.1.0 was able to process 1.1% more buffers/ms (median: 581.7 vs 573.5) compared to the "totally disabled" in-memory column store version 12.1.0.2.0. While this is statistically significant, a 1.1% buffer processing difference is probably not going to make or break your upgrade.

2. Oracle Corporation, I'm told, knows about this situation and is working on a fix. But even if they don't fix it, in my opinion my experimental "data point" would not warrant not upgrading to the in-memory column store version 12.1.0.2.0 even if you are NOT going to use the in-memory features.

3. Visually (see below) the non in-memory version 12.1.0.1.0 and the "totally off" in-memory version 12.1.0.2.0 sample sets look different. But they are pretty close. And as I mentioned above, statistically they are "different."

Note for the statistically curious: The red 12.1.0.1.0 non in-memory version data set is highly variable. I don't like to see this in my experiments. Usually this occurs when a mixed workload sometimes impacts performance, when I don't take enough samples, or when my sample duration is too short. To counteract this, in this experiment I captured 31 samples. I also performed the experiment multiple times and the results were similar. What I could have done was use more application data to increase the sample duration time. Perhaps that would have made the data clearer. I could have also used another SQL statement and method to create the logical IO load.

What I learned from this experiment
To summarize this experiment, four things come to mind:

1. If you are not using an Oracle Database feature, completely disable it. My mistake was thinking the in-memory column store was disabled when I set its memory size to zero and "confirmed" it was off by looking at the SGA contents.

2. All versions of 12c I have tested are clearly faster at processing buffers than any version of 11g.

3. There is a very slight performance decrease when upgrading from Oracle Database version 12.1.0.1.0 to 12.1.0.2.0.

4. It is amazing to me that with all the new features poured into each new Oracle Database version the developers have been able to keep the core buffer processing rate nearly at or below the previous version. That is an incredible accomplishment. While some people may view this posting as a negative hit against the Oracle Database, it is actually a confirmation about how awesome the product is.

All the best in your Oracle performance tuning work!

Craig.




Categories: DBA Blogs

Upgrading system's library/classes on 12c CDB/PDB environments

Marcelo Ochoa - Mon, 2014-11-17 17:41
Some days ago I found that the ODCI.jar included with 12c doesn't reflect the latest updates to the Oracle ODCI API. This API is used when writing new domain indexes such as Scotas OLS, pipelined tables and many other cool things. ODCI.jar includes several Java classes which are wrappers of Oracle object types such as ODCIArgDesc, among others. The jar included with the RDBMS 11g/12c seems to be outdated, maybe generated with a 10g version database; for example, it doesn't include attributes such as ODCICompQueryInfo, which holds information about Composite Domain Indexes (filter by/order by push predicates).

The content of ODCI.jar is a set of classes generated by the JPublisher tool and looks like:

oracle@localhost:/u01/app/oracle/product/12.1.0.2.0/dbhome_1/rdbms/jlib$ jar tvf ODCI.jar
     0 Mon Jul 07 09:12:54 ART 2014 META-INF/
    71 Mon Jul 07 09:12:54 ART 2014 META-INF/MANIFEST.MF
  3501 Mon Jul 07 09:12:30 ART 2014 oracle/ODCI/ODCIArgDesc.class
  3339 Mon Jul 07 09:12:32 ART 2014 oracle/ODCI/ODCIArgDescList.class
  1725 Mon Jul 07 09:12:32 ART 2014 oracle/ODCI/ODCIArgDescRef.class
....
  2743 Mon Jul 07 09:12:52 ART 2014 oracle/ODCI/ODCIStatsOptions.class
  1770 Mon Jul 07 09:12:54 ART 2014 oracle/ODCI/ODCIStatsOptionsRef.class

The complete list of classes does not reflect the list of object types that the latest 12c RDBMS has; the list is about 38 types, expanded later to more than 60 classes:

SQL> select * from dba_types where type_name like 'ODCI%'
SYS     ODCIARGDESC
SYS     ODCIARGDESCLIST
....
SYS     ODCIVARCHAR2LIST
38 rows selected

So there is a clear difference between the classes included in ODCI.jar and the actual list of object types included in the RDBMS. Obviously these classes could be re-generated using JPublisher, but I'll have to provide an input file with a template for the case-sensitive names typically used in Java. To quickly create a JPublisher input file I'll execute this anonymous PL/SQL block in JDeveloper, logged in as SYS at the CDB:

set long 10000 lines 500 pages 50 timing on echo on
set serveroutput on size 1000000
begin
 for i in (select * from dba_types where type_name like 'ODCI%' order by type_name) loop
   if (i.typecode = 'COLLECTION') then
      dbms_output.put('SQL sys.'||i.type_name||' AS ');
      FOR j in (select * from dba_source where owner=i.owner AND NAME=i.type_name) loop
         if (substr(j.text,1,4) = 'TYPE') then
            dbms_output.put(substr(j.text,6,length(j.name))||' TRANSLATE ');
         else
            dbms_output.put(upper(substr(j.text,instr(upper(j.text),' OF ')+4,length(j.text)-instr(upper(j.text),' OF ')-4))||' AS '||substr(j.text,instr(upper(j.text),' OF ')+4,length(j.text)-instr(upper(j.text),' OF ')-4));
         end if;
      end loop;
      dbms_output.put_line('');
   else
      dbms_output.put('SQL sys.'||i.type_name||' AS ');
      FOR j in (select * from dba_source where owner=i.owner AND NAME=i.type_name) loop
         if (substr(j.text,1,4) = 'TYPE') then
            dbms_output.put(substr(j.text,6,length(j.name))||' TRANSLATE ');
         end if;
         if (substr(j.text,1,1) = ' ') then
            dbms_output.put(upper(substr(j.text,3,instr(j.text,' ',3)-3))||' AS '||substr(j.text,3,instr(j.text,' ',3)-3)||', ');
         end if;
      end loop;
      dbms_output.put_line('');
   end if;
 end loop;
end;

Finally, editing this file manually to remove the last comma, I'll get this ODCI.in mapping file for JPublisher. With the above file it is possible to use an Ant task calling the JPublisher utility as:

          description="Generate a new ODCI.jar file with ODCI types wrappers using JPublisher">
          login="${dba.usr}/${dba.pwd}@${db.str}"
      dir="tmp"
      package="oracle.ODCI"
      file="../db/ODCI.in"/>
           basedir="tmp"
       includes="**/*.class"
     />
By executing the above Ant task I'll have a new ODCI.jar with content like:

oracle@localhost:/u01/app/oracle/product/12.1.0.2.0/dbhome_1/rdbms/jlib$ jar tvf ODCI.jar
     0 Sun Nov 16 21:07:50 ART 2014 META-INF/
   106 Sun Nov 16 21:07:48 ART 2014 META-INF/MANIFEST.MF
     0 Sat Nov 15 15:17:40 ART 2014 oracle/
     0 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/
102696 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/AnyData.class
  1993 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/AnyDataRef.class
 17435 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/AnyType.class
  1993 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/AnyTypeRef.class
  3347 Sun Nov 16 21:07:46 ART 2014 oracle/ODCI/ODCIArgDesc.class
  2814 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/ODCIArgDescList.class
  2033 Sun Nov 16 21:07:46 ART 2014 oracle/ODCI/ODCIArgDescRef.class
.....
  2083 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/ODCITabFuncStatsRef.class
  2657 Sun Nov 16 21:07:48 ART 2014 oracle/ODCI/ODCIVarchar2List.class

Well, now the new ODCI.jar is ready for uploading into the CDB. To simplify this task I'll put it directly in the same directory as the original one:

oracle@localhost:/u01/app/oracle/product/12.1.0.2.0/dbhome_1/rdbms/jlib$ mv ODCI.jar ODCI.jar.orig
oracle@localhost:/u01/app/oracle/product/12.1.0.2.0/dbhome_1/rdbms/jlib$ mv /tmp/ODCI.jar ./ODCI.jar

NOTE: The next paragraphs are examples showing that this will fail; see further below for the correct way.
To upload this new file into the CDB, logged in as SYS, I'll execute:

SQL> ALTER SESSION SET CONTAINER = CDB$ROOT;
SQL> exec sys.dbms_java.loadjava('-f -r -v -s -g public rdbms/jlib/ODCI.jar');

To check if it works OK, I'll execute:

SQL> select dbms_java.longname(object_name) from dba_objects where object_type='JAVA CLASS' and dbms_java.longname(object_name) like '%ODCI%';
DBMS_JAVA.LONGNAME(OBJECT_NAME)
--------------------------------------------------------------------------------
oracle/ODCI/ODCIArgDesc
oracle/ODCI/ODCIArgDescList
oracle/ODCI/ODCIArgDescRef
...
oracle/ODCI/ODCITabFuncStats
oracle/ODCI/ODCITabFuncStatsRef
oracle/ODCI/ODCIVarchar2List
63 rows selected.

I assumed at this point that a new jar uploaded into the CDB root means that all PDBs will inherit this new implementation, as a new binary/library file patched at ORACLE_HOME does, but this is not how the class loading system works in a multitenant environment. To check that, I'll re-execute the above query using the PDB$SEED container (the template used for new databases):

SQL> ALTER SESSION SET CONTAINER = PDB$SEED;
SQL> select dbms_java.longname(object_name) from dba_objects where object_type='JAVA CLASS' and dbms_java.longname(object_name) like '%ODCI%';
...
28 rows selected.

A similar result will be displayed in any other PDB running/mounted on that CDB. More than this, if I check Java code in some PDB, this exception will be thrown:

Exception in thread "Root Thread" java.lang.IncompatibleClassChangeError
 at oracle.jpub.runtime.MutableArray.getOracleArray(MutableArray.java)
 at oracle.jpub.runtime.MutableArray.getObjectArray(MutableArray.java)
 at oracle.jpub.runtime.MutableArray.getObjectArray(MutableArray.java)
 at oracle.ODCI.ODCIColInfoList.getArray(ODCIColInfoList.java)
 at com.scotas.solr.odci.SolrDomainIndex.ODCIIndexCreate(SolrDomainIndex.java:366)

This is because the code was compiled with the latest API and the container has an older one. So I'll re-load the new ODCI.jar into PDB$SEED and my PDBs, using a similar approach as in the CDB, for example:

SQL> ALTER SESSION SET CONTAINER = PDB$SEED;
SQL> exec sys.dbms_java.loadjava('-f -r -v -s -g public rdbms/jlib/ODCI.jar');
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database

This is because the PDBs are blocked from altering classes inherited from the CDB.
As I mentioned above, this way is incorrect when dealing with multitenant environments. To fix that there is a Perl script named catcon.pl; it automatically takes care of loading on ROOT first, then on PDB$SEED, then any/all open PDBs specified on the command line. In my case I'll execute:

# $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -u SYS -d $ORACLE_HOME/rdbms/admin -b initsoxx_output initsoxx.sql

Before doing that it is necessary to open all PDBs (read write, or in restricted mode) or to specify which PDBs will be patched. Note that I used the initsoxx.sql script; this script is used by default during RDBMS installation to upload ODCI.jar. Now I'll check if all PDBs have consistent ODCI classes.

SQL> ALTER SESSION SET CONTAINER = PDB$SEED;
Session altered.
SQL> select dbms_java.longname(object_name) from dba_objects where object_type='JAVA CLASS' and dbms_java.longname(object_name) like '%ODCI%';
DBMS_JAVA.LONGNAME(OBJECT_NAME)
--------------------------------------------------------------------------------
oracle/ODCI/ODCITabFuncStatsRef
oracle/ODCI/ODCICompQueryInfoRef
....
oracle/ODCI/ODCIVarchar2List
63 rows selected.
SQL> ALTER SESSION SET CONTAINER = WIKI_PDB;
Session altered.
SQL> select dbms_java.longname(object_name) from dba_objects where object_type='JAVA CLASS' and dbms_java.longname(object_name) like '%ODCI%';
DBMS_JAVA.LONGNAME(OBJECT_NAME)
--------------------------------------------------------------------------------
oracle/ODCI/ODCIArgDesc
....
63 rows selected.

Finally, all PDBs were patched with the new library. More information about developing Java within the RDBMS in multitenant environments is in this presentation, The impact of MultiTenant Architecture in the develop of Java within the RDBMS; for Spanish readers there is a video with audio on YouTube from my talk at OTN Tour 14 ArOUG:







Enabling Agents of Change

Linda Fishman Hoyle - Mon, 2014-11-17 15:29

The Oracle Value Chain Summit is rapidly becoming the premier Supply Chain event in the industry.  Our upcoming 2015 Value Chain Summit marks the third year that this great event has been held, and we expect this summit to be bigger and better than ever. This event is all about you, our customer. You empower your supply chain, and the Value Chain Summit promises to provide you with the insight, contacts and tools you need to become “Agents of Change” for your supply chain.

Watch this video (by Oracle's Jon Chorley, pictured left) to learn more about the great things we have planned for the 2015 Value Chain Summit. Plus, you can take advantage of the Early Bird rate through the end of November, and save $300 off the regular rate. Combine this offer with special group rates, and save BIG!  

 

"Why Not Oracle Cloud?" for Fast-Growing, Mid-Sized Organizations

Linda Fishman Hoyle - Mon, 2014-11-17 15:11

In the past, mid-size and smaller companies have had to settle for “lightweight or scoped down ERP solutions.” Generally, these tier-2 solutions don’t have the functionality that a company needs to grow and operate globally.

In this article in Profit magazine, Oracle’s Rondy Ng, Senior Vice President of Applications Development, advises companies to choose wisely when looking at cloud-based ERP solutions to avoid expense, risk, and disruption down the road.

Ng asserts that Oracle ERP Cloud is the ERP solution for Fortune 500 companies, as well as for those who don’t have any designs to be one. There’s no need to settle. He makes a great case for choosing cloud and choosing Oracle.

New Interaction Hub Data Sheet Available

PeopleSoft Technology Blog - Mon, 2014-11-17 14:55

In support of the recently released Revision 3 of the PeopleSoft Interaction Hub, we've just produced the latest data sheet for the Hub, which can be found here.  This paper covers the highlights of the new release and describes our overall direction for the product.  The prime topics we cover are as follows:

  • Setting up and running a cluster of PeopleSoft applications using the Interaction Hub
  • Content Management
  • Branding and the User Experience
  • Using the Hub with the new Fluid User Interface
There is much more collateral about the Interaction Hub on My Oracle Support

Visualization on How the undergraduate tuition has increased over the years

Nilesh Jethwa - Mon, 2014-11-17 12:54

Average undergraduate tuition and fees and room and board rates

Source: http://nces.ed.gov/

Image

These figures are inflation adjusted; look at how much the tuition fees have increased compared to the dorm and board rates.

Now comparing the rate increase for 2-year program

Image

So for the 2 year program, the board rates have remained at the same level compared to the dorm rates.

Now check out the interesting graph for 4 year program below

Image

 

Comparing the slope of the 2-year board rates to the 4-year board rates, the 4-year shows a significantly steeper increase.

Image

If the price of meals is the same for both programs, then both the 4-year and 2-year programs should have the same slope. So why is the 4-year slope different from the 2-year one?

Now, let's look at the dorm rates.

Image

 

And finally the 4 year vs 2 year Tuition rates

Image

Here is the data table for the above visualization

Musings on Samsung Developer Conference 2014

Oracle AppsLab - Mon, 2014-11-17 11:18

This year some of us at the AppsLab attended the Samsung Developer Conference, aka #SDC2014. Last year was Samsung’s first attempt, and we were there too. The quality and caliber of the presentations increased tenfold from last year. Frankly, Samsung is making it really hard to resist joining their ecosystem.


Here are some of the trends I observed:

Wearables and Health:

There was a huge emphasis on Samsung’s commitment to wearable technology. They released a new Tizen-based smartwatch (Samsung Gear S) as well as a biometric reference design (hardware and software) called SIMBAND. Along with their wearable strategy they also released S.A.M.I, a cloud repository to store all this data. All this ties together with their vision of “Voice of the Body.”

Voice of the Body from Samsung on Vimeo.

During the second day keynote we got to hear from Mounir Zok, Senior Sports Technologist of the United States Olympic Committee. He told us how wearable technology is changing the way Olympic athletes train. It was only a couple of years ago that athletes still had to go to a lab and “fake” actual activities to get feedback. Now they can get real data on the field thanks to wearable technology.

Virtual Reality:

Samsung released the Gear VR in partnership with Oculus. These goggles only work with a Galaxy Note 4 mounted in the front. The gaming experiences with this VR device are amazing, but they are also exploring other cases like virtual tourism and virtual movie experiences. They also released a 3D 360+spherical view camera called “Project Beyond.”

IoT – Home Automation:

Samsung is betting big on IoT and home automation, and they are putting their money where their mouth is by acquiring SmartThings. The SmartThings platform is open source and has the ability to integrate with a myriad of other home automation products. They showcased a smart home powered by the SmartThings platform.

Mobile Innovation: 

I actually really like their new Galaxy Note Edge phablet. Samsung is showing true innovation here with the “edge” part of the device. It has its own SDK and it feels great in the hand!

Overall I’m pretty impressed with what Samsung is doing. It seems like their spaghetti-on-the-wall approach (throwing a bunch of spaghetti and seeing what sticks) is starting to pay off. Their whole UX across devices looks seamless. And in my humble opinion they are getting ready to take off on their own without having to use Android for their mobile devices. Tizen keeps maturing, but I shall leave that for another post!

Please feel free to share your experience with Samsung devices as well!

Asteroid Hackathon – The Winning Team

Oracle AppsLab - Mon, 2014-11-17 09:57

Editorial Note: This is a guest post by friend of the ‘Lab and colleague DJ Ursal. Also be sure to check out our Hackathon entry here:


EchoUser (@EchoUser), in partnership with SpaceGAMBIT, Maui Makers, the Minor Planet Center, NASA, the SETI Institute, and Further by Design, hosted an Asteroid Hackathon. The event was in response to the NASA Grand Challenge, “focused on finding all asteroid threats to human populations and knowing what to do about them.”

I had a wonderful opportunity to participate in the Asteroid Hackathon last week. My team name was NOVA. Our team comprised 4 members: DJ Ursal, Kris Robison, Daniel Schwartz, Raj Krishnamurthy.

We were given live data from NASA and the Minor Planet site and literally just had 5 hours to put together a working prototype and solution to the asteroid big data problem. We created a web application (which works not only on your Mac or PC but also on your iPad and the latest Nexus 7 Android devices) to help scientists, astronomers and anyone who is interested in asteroids discover, learn and share information in a fun and interactive way.


Our main theme was Finding Asteroids Before They Find Us. The goal was to help discover, learn and share asteroid information to increase awareness within the community. We created an interactive web app that allowed users to make use of chart filters to find out about the risk of possible future impacts with Earth, as well as the distance of the asteroids from Earth, their absolute brightness and their rotation. It allowed users to click and drag on any chart to filter, so that they could combine the filters in a multidimensional way in order to explore, discover interesting facts and share data on asteroids with friends and the community. We made use of Major Tom, the astronaut referenced in David Bowie’s song “Space Oddity,” which depicts an astronaut who casually slips the bonds of the world to journey beyond the stars. Users could post questions to Major Tom and could also play his song.

The single most important element in WINNING this hackathon strategically was team composition: having a team that works together effectively. Collaboration and communication were the two most critical personal skills demanded of all members, as time was limited and coordination was of utmost importance.

Winning TEAM NOVA - DJ Ursal, Kris Robison, Daniel Schwartz, Raj Krishnamurthy