Feed aggregator

Oracle Priority Support Infogram for 08-OCT-2015

Oracle Infogram - Thu, 2015-10-08 14:46

Oracle OpenWorld

The session schedules are rolling in. Here are a few:

RDBMS

Last week on AskTom, from All Things SQL.

Database Insider - October 2015 issue now available, from Exadata Partner Community – EMEA.


Security

What Is SQL Injection and How to Stop It, from All Things SQL. It’s been a while since we ran a posting on SQL injection. It’s an easy trap to fall into in coding, so this may be a good time to review your apps and make sure you aren’t vulnerable.

Java


Hyperion

EPM Patch Set Updates - September 2015, from Business Analytics - Proactive Support.

Oracle Utilities

Oracle Utilities Customer Care and Billing 2.5.0.1.0 available, from The Shorten Spot (@theshortenspot).

Demantra

Primavera

User Defined Field Fact History, from the Oracle Primavera Analytics Blog.

Opinion

A bit of product evangelism combined with some prophecy and analysis on ZFS storage: This Is Our Time, from The Wonders of ZFS Storage.

EBS

From the Oracle E-Business Suite Support blog:

From the Oracle E-Business Suite Technology blog:

PeopleSoft Streams from Oracle University

Jim Marion - Thu, 2015-10-08 14:45

In February of this year, Oracle University launched the PeopleSoft Learning Stream. Oracle's Learning Streams are short, educational vignettes. I was given the privilege of recording 6 streams:

  • Using JavaScript with Pagelet Wizard is a 21-minute video showing you how to use Pagelet Wizard to convert a PeopleSoft query into an interactive D3 chart, a navigation collection into a carousel, a navigation collection into an accordion, and how to use RequireJS for JavaScript dependency management.
  • REST Query Access Service is a 15-minute session showing you how to craft a Query Access Service REST URL.
  • Working with JSON in PeopleSoft Document Technology is a 23-minute video demonstrating how to use the PeopleCode Document, Compound, and Collection objects to read and write JSON.
  • Basic Java API with PeopleCode is a 26-minute session showing you how to use the delivered Java API with PeopleCode. This session covers constructors, instance methods, properties, and static method invocation. Java objects demonstrated include String, Hashtable, Regular Expression Pattern and Matcher, arrays, and String.format.
  • Intermediate Java API with PeopleCode is a 38-minute video that shows you how to configure JDeveloper to write Java for the PeopleSoft Application and Process Scheduler servers and provides some examples of writing and deploying Java to a PeopleSoft application server. Note: in this session you get to watch me attempt to troubleshoot an App Engine ABEND.
  • Advanced Java API with PeopleCode is a 26-minute recording showing you how to use Java Reflection to remove PeopleCode ambiguity as well as how to use JavaScript to avoid reflection.

You can access all of my streams here. From this page you can preview the first 2 minutes of each video or subscribe for unlimited access to all of the videos in the Oracle PeopleSoft Learning Stream.

Presidents of USA and their Birth Signs – Sankey Visualization

Nilesh Jethwa - Thu, 2015-10-08 14:00

In this analysis, we will visualize the relationship between age at presidency, state of birth, and birth sign.

Read more at: www.infocaptor.com/dashboard/presidents-of-usa-and-their-birth-signs-sankey-visualization

How to delete older emails from GMAIL

Arun Bavera - Wed, 2015-10-07 09:40


In Gmail, search for the messages you want to remove using the older_than operator, select all of the matching conversations, and delete them. For example, to target the Social category:

category:social older_than:45d
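
The same pattern works for Gmail's other categories; for example (these are standard Gmail search operators, and the retention windows shown are just examples):

category:promotions older_than:45d
category:updates older_than:90d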

Categories: Development

About My Son, Chris Silva, Amazing Artist, Father and All-Around Human Being

FeuerThoughts - Tue, 2015-10-06 14:59
"For the record...."

Chris is the 2015 recipient of a 3arts grant, which makes me incredibly proud and also gives me the opportunity to share his professional art bio (I mostly experience him these days as Papa to my two wonderful granddaughters).

Born in Puerto Rico, Chris Silva has been a prominent figure in Chicago’s graffiti and skateboarding scenes since the 1980s, as well as an enthusiastic fan of a wide range of music genres which have resulted from the influence of metropolitan life. Building on his solid graffiti art foundation, Silva proceeded to play a significant role in the development of what is now commonly referred to as "street art." He now splits his time between working on large-scale commissions, producing gallery oriented work, and leading youth-involved public art projects. As a self-taught sound artist with roots in DJ culture, Silva also anchors a collaborative recording project known as This Mother Falcon, and has recently started integrating his audio compositions into his installation work.

In the early 90s, Silva worked on a mural with the Chicago Public Art Group and was eventually brought on board to help lead community art projects with other urban youth. As a result, the act of facilitating art experiences for young people has become an important part of his art practice, and he regularly includes students as collaborators on large-scale artwork that often leans heavily on improvisation. Over the years, Silva has helped orchestrate youth art projects both independently and in partnership with Chicago Public Art Group, Young Chicago Authors, Gallery 37, Yollocalli Arts Reach, After School Matters, and the School of The Art Institute of Chicago.

Silva was awarded a major public art commission by the Chicago Transit Authority to create a mosaic for the Pink Line California Station (2004); created block-long murals in Chicago's Loop “You Are Beautiful” (2006); created a sculpture for the Seattle Sound Transit System (2008); won the Juried Award for Best 3D Piece at Artprize (2012); and created large commissions for 1871 Chicago (2013), the City of Chicago, LinkedIn, CBRE (2014), OFS Brands, and The Prudential Building (2015). He has exhibited in Chicago, San Francisco, Los Angeles, New York City, Philadelphia, London, Melbourne, Copenhagen, and The International Space Station. In 2007 Silva received an Artist Fellowship Award from The Illinois Arts Council.
Categories: Development

Fundamentals of SQL Writeback in Dodeca

Tim Tow - Mon, 2015-10-05 22:00
One of the features of Dodeca is read-write functionality to SQL databases.  We often get questions as to how to write data back to a relational database, so I thought I would post a quick blog entry for our customers to reference.

This example will use a simple table structure in SQL Server though the concepts are the same when using Oracle, DB2, and most other relational databases.  The example will use a simple Dodeca connection to a JDBC database.  Here is the Dodeca SQL Connection object used for the connection.

The table I will use for this example was created with the following CREATE TABLE  statement.

CREATE TABLE [dbo].[Test](
  [TestID] [int] IDENTITY(1,1) NOT NULL,
  [TestCode] [nvarchar](50) NULL,
  [TestName] [nvarchar](50) NULL,
  CONSTRAINT [PK_Test] PRIMARY KEY CLUSTERED
    ([TestID] ASC)
)

First, I used the Dodeca SQL Excel View Wizard to create a simple view in Dodeca to retrieve the data into a spreadsheet.  The view, before setting up writeback capabilities, looks like this.

To make this view writeable, follow these steps.
  1. Add the appropriate SQL insert, update, or delete statements to the Dodeca SQL Passthrough Dataset object.  The values to be replaced in the SQL statement must be specified using the notation @ColumnName where ColumnName is the column name, or column alias, of the column containing the data.
  2. Add the column names of the primary key for the table to the PrimaryKey property of the SQL Passthrough DataSet object.
  3. Depending on the database used, define the column names and their respective JDBC datatypes in the Columns property of the SQL Passthrough Dataset.  This mapping is optional for SQL Server because Dodeca can obtain the required information from the Microsoft JDBC driver; however, the Oracle and DB2 JDBC drivers do not provide this information, and it must be entered by the developer.
For insert, update, and delete operations, Dodeca parses the SQL statement to read the parameters that use the @ indicator and creates a JDBC prepared statement to execute the statements.  The prepared statement format is very efficient as it compiles the SQL statement once and then executes it multiple times.  Each inserted row is also passed to the server during the transaction.  The values from each row are then used in conjunction with the prepared statement to perform the operation.
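
For example, a hypothetical insert statement for the Test table above could look like the following; the @TestCode and @TestName tokens are replaced with the values from the TestCode and TestName columns of each inserted row (TestID is omitted because it is an IDENTITY column):

INSERT INTO [dbo].[Test] (TestCode, TestName)
VALUES (@TestCode, @TestName)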

Here is the completed Query definition.


Next, modify the DataSetRanges property of the Dodeca View object and, to enable insert operations, set the AllowAddRow property to True.  Note that if you added update and/or delete SQL to your SQL Passthrough Dataset object, be sure to enable those operations on the worksheet via the AllowDeleteRow and AllowModifyRow properties.

Once this step is complete, you can run the Dodeca View, add a row, and press the Save button to save the record to the relational database.



The insert, update, and delete functionality using plain SQL statements is limited to operations on a single table.  If you need to do updates on multiple tables, you must use stored procedures to accomplish the functionality.  You can call a stored procedure in Dodeca using syntax similar to the following example:

{call sp_InsertTest(@TestCode, @TestName)}

Dodeca customers can contact support for further information at support@appliedolap.com.
Categories: BI & Warehousing

IBM Bluemix - Specify only Liberty buildpack features you require

Pas Apicella - Mon, 2015-10-05 21:22
I am more often than not using Spring Boot applications on IBM Bluemix, and most of what I need, from JPA and JDBC drivers to REST support, is packaged with the application. Of course, with IBM Bluemix we can specify which buildpack we wish to use, but by default Liberty is used for Java applications.

When a stand-alone application is deployed, a default Liberty configuration is provided for the application. The default configuration enables the following Liberty features:
  • beanValidation-1.1
  • cdi-1.2
  • ejbLite-3.2
  • el-3.0
  • jaxrs-2.0
  • jdbc-4.1
  • jndi-1.0
  • jpa-2.1
  • jsf-2.2
  • jsonp-1.0
  • jsp-2.3
  • managedBeans-1.0
  • servlet-3.1
  • websocket-1.1
  • icap:managementConnector-1.0
  • appstate-1.0
Here is how I strip out some of what isn't required in my Liberty runtime container, down to the bare minimum of what I need.

manifest.yml

applications:
 - name: pas-speedtest
   memory: 512M
   instances: 1
   path: ./demo-0.0.1-SNAPSHOT.jar
   host: pas-speedtest
   domain: mybluemix.net
   env:
     JBP_CONFIG_LIBERTY: "app_archive: {features: [jsp-2.3, websocket-1.1, servlet-3.1]}"


More Information

https://www.ng.bluemix.net/docs/starters/liberty/index.html#optionsforpushinglibertyapplications


Categories: Fusion Middleware

Uploading 26M StackOverflow Questions into Oracle 12c

Marcelo Ochoa - Mon, 2015-10-05 17:42
Just for fun or testing in-memory capabilities of Oracle 12c

Following the post Import 10M Stack Overflow Questions into Neo4j In Just 3 Minutes, I modified the Python script to include the foreign key columns that are not part of the graph database design but are required in a relational model.
The Python files to_csv.py and utils.py can be downloaded from my drive; basically, they add these two lines:
                el.get('parentid'),
                el.get('owneruserid'),
when generating the output file csvs/posts.csv. The idea is to convert the StackOverflow export files:
-rw-r--r-- 1 root root   37286997 ago 18 12:50 stackoverflow.com-PostLinks.7z
-rw-r--r-- 1 root root 7816218683 ago 18 13:52 stackoverflow.com-Posts.7z
-rw-r--r-- 1 root root     586861 ago 18 13:52 stackoverflow.com-Tags.7z
-rw-r--r-- 1 root root  160468734 ago 18 13:54 stackoverflow.com-Users.7z
-rw-r--r-- 1 root root  524354790 ago 18 13:58 stackoverflow.com-Votes.7z
-rw-r--r-- 1 root root 2379415989 sep  2 14:28 stackoverflow.com-Comments.7z
-rw-r--r-- 1 root root  112105812 sep  2 14:29 stackoverflow.com-Badges.7z
to a list of CSV files for quick importing into the Oracle 12c RDBMS using external tables. Here is the list of converted files and their sizes:
3,8G         posts.csv
287M posts_rel.csv
524K tags.csv
517M tags_posts_rel.csv
355M users.csv
427M users_posts_rel.csv
With the above files and an Oracle 12c instance running in a Docker container, as described in my previous post On docker, Ubuntu and Oracle RDBMS, I executed these steps:
- logged as SYS:

alter system set sga_max_size=4G scope=spfile;
alter system set sga_target=4G scope=spfile;
alter system set inmemory_size=2G scope=spfile;
create user sh identified by sh
   default tablespace ts_data
   temporary tablespace temp
   quota unlimited on ts_data;
grant connect,resource,luceneuser to sh;
create directory data_dir1 as '/mnt';
create directory tmp_dir as '/tmp';
grant all on directory data_dir1 to sh;
grant all on directory tmp_dir to sh;
It basically creates a new user and the directories to be used by the external tables. Note that the CSV files are available inside the Docker machine under the /mnt directory; I am running my Docker image with:
docker run --privileged=true --ipc=host --volume=/var/lib/docker/dockerfiles/stackoverflow.com/csvs:/mnt --volume=/mnt/backup/db/ols:/u01/app/oracle/data --name ols --hostname ols --detach=true --publish=1521:1521 --publish=9099:9099 oracle-12102
Then logged as SH user:
- Importing users
create table users_external
( user_id            NUMBER(10),
  display_name VARCHAR2(4000),
  reputation       NUMBER(10),
  aboutme         VARCHAR2(4000),
  website_url    VARCHAR2(4000),
  location          VARCHAR2(4000),
  profileimage_url VARCHAR2(4000),
  views             NUMBER(10),
  upvotes          NUMBER(10),
  downvotes     NUMBER(10)
)
organization external
( type  oracle_loader
  default directory data_dir1
  access parameters
  ( records delimited BY newline
    badfile tmp_dir: 'sh%a_%p.bad'
    logfile tmp_dir: 'sh%a_%p.log'
    fields
            terminated BY ','
            optionally enclosed BY '"'
            lrtrim
            missing field VALUES are NULL
  )
  location (data_dir1:'users.csv')
 )
 parallel
 reject limit unlimited;

CREATE TABLE so_users
   TABLESPACE ts_data
   STORAGE (INITIAL 8M NEXT 8M)
   PARALLEL
   NOLOGGING
   COMPRESS FOR ALL OPERATIONS
      as (select * from users_external);
-- Elapsed: 00:00:22.76
ALTER TABLE so_users ADD PRIMARY KEY (user_id);
-- Elapsed: 00:00:13.08
create index so_users_display_name_idx on so_users(display_name);
-- Elapsed: 00:00:08.01
- Importing Posts
create table posts_external
( post_id      NUMBER(10),
  parent_id   NUMBER(10),
  user_id      NUMBER(10),
  title            VARCHAR2(4000),
  body          CLOB,
  score         NUMBER(10),
  views        NUMBER(10),
  comments NUMBER(10)
)
organization external
( type  oracle_loader
  default directory data_dir1
  access parameters
  ( records delimited BY newline
    badfile tmp_dir: 'sh%a_%p.bad'
    logfile tmp_dir: 'sh%a_%p.log'
    fields
            terminated BY ','
            optionally enclosed BY '"'
            lrtrim
            missing field VALUES are NULL
  )
  location (data_dir1:'posts.csv')
 )
 parallel
 reject limit unlimited;
CREATE TABLE so_posts
   TABLESPACE ts_data
   STORAGE (INITIAL 8M NEXT 8M)
   PARALLEL
   NOLOGGING
   COMPRESS FOR ALL OPERATIONS
      as (select * from posts_external);
-- Elapsed: 00:14:20.89
ALTER TABLE so_posts ADD PRIMARY KEY (post_id);
-- Elapsed: 00:02:35.86
-- purge posts whose users were not imported
delete from so_posts where user_id not in (select user_id from so_users);
-- Elapsed: 00:02:41.64
create index so_posts_user_id_idx on so_posts(user_id);
-- Elapsed: 00:01:34.87
ALTER TABLE so_posts ADD CONSTRAINT fk_so_user FOREIGN KEY (user_id) REFERENCES so_users(user_id);
-- Elapsed: 00:00:09.28
Note that 26 million posts were imported in 14 minutes, which is not bad considering that the CSV source was on an external USB 2.0 drive and the Oracle 12c tablespaces were placed on a USB 3.0 drive. Here is a screenshot showing the IO bandwidth consumed on both drives.

only 4.8 Mb/s for reading from sdb (CSV) and 9.7 Mb/s for writing to sdc1 (ts_data).
- Importing tags
create table tags_external
( tag_id      VARCHAR2(4000)
)
organization external
( type  oracle_loader
  default directory data_dir1
  access parameters
  ( records delimited BY newline
    badfile tmp_dir: 'sh%a_%p.bad'
    logfile tmp_dir: 'sh%a_%p.log'
    fields
            terminated BY ','
            optionally enclosed BY '"'
            lrtrim
            missing field VALUES are NULL
  )
  location (data_dir1:'tags.csv')
 )
 parallel
 reject limit unlimited;
CREATE TABLE so_tags
   TABLESPACE ts_data
   STORAGE (INITIAL 8M NEXT 8M)
   PARALLEL
   NOLOGGING
   COMPRESS FOR ALL OPERATIONS
      as (select * from tags_external);
-- Elapsed: 00:00:00.55
create table tags_posts_external
( post_id      NUMBER(10),
  tag_id      VARCHAR2(4000)
)
organization external
( type  oracle_loader
  default directory data_dir1
  access parameters
  ( records delimited BY newline
    badfile tmp_dir: 'sh%a_%p.bad'
    logfile tmp_dir: 'sh%a_%p.log'
    fields
            terminated BY ','
            optionally enclosed BY '"'
            lrtrim
            missing field VALUES are NULL
  )
  location (data_dir1:'tags_posts_rel.csv')
 )
 parallel
 reject limit unlimited;
CREATE TABLE so_tags_posts
   TABLESPACE ts_data
   STORAGE (INITIAL 8M NEXT 8M)
   PARALLEL
   NOLOGGING
   COMPRESS FOR ALL OPERATIONS
      as (select * from tags_posts_external);
-- Elapsed: 00:00:43.75
-- purge tag rows whose posts were not imported
delete from so_tags_posts where post_id not in (select post_id from so_posts);
-- Elapsed: 00:02:42.00
create index so_tags_posts_post_id_idx on so_tags_posts(post_id);
-- Elapsed: 00:00:43.29
ALTER TABLE so_tags_posts ADD CONSTRAINT fk_so_posts FOREIGN KEY (post_id) REFERENCES so_posts(post_id);
-- Elapsed: 00:01:16.65
Note that, as with the posts<->users one-to-many relation, tags<->posts is also a one-to-many relation, and some posts referenced by a few tags were not imported due to character-encoding errors.
In summary, 26 million posts from 4.5 million registered users were imported; 41K distinct tags are used, with an average of 1.11 tags per post (29M tag/post rows).
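As a quick sanity check on that average, it can be recomputed directly from the imported tables; a minimal sketch using the table names created above:

select round((select count(*) from so_tags_posts) /
             (select count(*) from so_posts), 2) as avg_tags_per_post
  from dual;
-- roughly 29M / 26M, i.e. about 1.11 tags per post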
Next blog post will be about using Oracle 12c in-memory features to query this corpus data.

OTN at Oracle OpenWorld Group - Join today!

OTN TechBlog - Mon, 2015-10-05 12:27

Join the OTN at Oracle OpenWorld group on the OTN Community Platform!  This group is designed to keep you in the know about all the GREAT activities and events that Team OTN is planning and organizing for Oracle OpenWorld in San Francisco this October (24th to 28th).

Some of the events/activities to look forward to -

Community Events - RAC Attack and Blogger Meetup.

Networking Opportunities - Sunday Kick off Party, Cloud Hour

NEW activities! Graffiti Wall and giant games plus Make Your Own T-Shirt is back with NEW art!


We hope to see you there!

TEAM OTN


What I Wanted to Tell Terry Bradshaw

Cary Millsap - Thu, 2015-10-01 17:23
I met Terry Bradshaw one time. It was about ten years ago, in front of a movie theater near where I live.

When I was little, Terry Bradshaw was my enemy because, unforgivably to a young boy, he and his Pittsburgh Steelers kept beating my beloved Dallas Cowboys in Super Bowls. As I grew up, though, his personality on TV talk shows won me over, and I enjoy watching him to this day on Fox NFL Sunday. After learning a little bit about his life, I’ve grown to really admire and respect him.

I had heard that he owned a ranch not too far from where I live, and so I had it in mind that inevitably I would meet him someday, and I would say thank you. One day I had that chance.

I completely blew it.

My wife and I saw him there at the theater one day, standing by himself not far from us. It seemed like if I were to walk over and say hi, maybe it wouldn’t bother him. So I walked over, a little bit nervous. I shook his hand, and I said, “Mr. Bradshaw, hi, my name is Cary.” I would then say this:

I was a big Roger Staubach fan growing up. I watched Cowboys vs. Steelers like I was watching Good vs. Evil.

But as I’ve grown up, I have gained the deepest admiration and respect for you. You were a tremendous competitor, and you’re one of my favorite people to see on TV. Every time I see you, you bring a smile to my face. You’ve brought joy to a lot of people.

I just wanted to say thank you.
Yep, that’s what I would say to Terry Bradshaw if I got the chance. But that’s not how it would turn out. How it actually went was like this, …my big chance:

Me: I was a big Roger Staubach fan growing up.
TB: Hey, so was I!
Me: (stunned)
TB: (turns away)
The End
I was heartbroken. It bothers me still today. If you know Terry Bradshaw or someone who does, I wish you would please let him know. It would mean a lot to me.

…I did learn something that day about the elevator pitch.

Oracle Priority Support Infogram for 01-OCT-2015

Oracle Infogram - Thu, 2015-10-01 14:42

RDBMS


PL/SQL

A Surprising Program, from Oracle Database PL/SQL and EBR.

Data Warehouse

DOP Downgrades, or Avoid The Ceiling, from The Data Warehouse Insider blog.

WebLogic


Java


Creating Games with JavaFX 8: Case Study, from The Java Tutorials Blog.

OAG

The 10 most recently created notes for OAG as of 24 Sept. 2015, from Proactive Support - Java Development using Oracle Tools.

Ops Center

Changing an Asset's Name, from the Oracle Ops Center blog.

Data Integration


SOA

Top tweets SOA Partner Community – September 2015, from the SOA & BPM Partner Community Blog.

Real User Monitoring

How to Configure User ID Identification, from Real User Monitoring.

Solaris

Solaris: Identifying EFI disks, from Giri Mandalika's Repository.

EBS

From the Oracle E-Business Suite Support blog:

Finally Eliminate Those Duplicate WIP Transactions!


Generate 11g password hash

Laurent Schneider - Thu, 2015-10-01 10:14

An easy way to generate the hash value from the shell is to use openssl.

Let’s take a random salt of ABCDEFGHIJ. The length of 10 is important.

The hexadecimal representation is 4142434445464748494A.


$ echo "SafePassw0rDABCDEFGHIJ\c" | openssl dgst -sha1
(stdin)= 47cc4102144d6e479ef3d776ccd9e0d0158842bb

With this hash, I can construct my value


SQL> create user testuser identified by values 'S:47CC4102144D6E479EF3D776CCD9E0D0158842BB4142434445464748494A';

User created.

SQL> grant create session to testuser;

Grant succeeded.

SQL> conn testuser/SafePassw0rD
Connected.

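To double-check, you can read the stored verifier back from the data dictionary; a minimal sketch, assuming a SYSDBA session (11g keeps the S: verifier in USER$.SPARE4):

SQL> select spare4 from sys.user$ where name='TESTUSER';
SPARE4
--------------------------------------------------------------
S:47CC4102144D6E479EF3D776CCD9E0D0158842BB4142434445464748494A
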
If you prefer PL/SQL over shell, use DBMS_CRYPTO


SQL> exec dbms_output.put_line('S:'||dbms_crypto.hash(utl_raw.cast_to_raw('SafePassw0rDABCDEFGHIJ'),dbms_crypto.HASH_SH1)||utl_raw.cast_to_raw('ABCDEFGHIJ'))
S:47CC4102144D6E479EF3D776CCD9E0D0158842BB4142434445464748494A

PL/SQL procedure successfully completed.

In 12c there is also a “T” string. According to the doc:
The cryptographic hash function used for generating the 12C verifier is based on a de-optimized algorithm involving PBKDF2 and SHA-512.

IBM Bluemix - Triggering backing service creation from the "Deploy to Bluemix" button

Pas Apicella - Thu, 2015-10-01 08:01
I recently posted about the "Deploy to Bluemix" button, which will automatically deploy an application into IBM Bluemix with a single click.

http://theblasfrompas.blogspot.com.au/2015/09/adding-deploy-to-bluemix-button-to-my.html

If the application requires backing services, they can be created automatically using the "declared-services" tag as shown below. Declared services are a manifest extension which creates or looks for the required or optional services that are expected to be set up before the app is deployed, such as a data cache service. You can find a list of the eligible Bluemix services, labels, and plans by using the CF Command Line Interface and running cf marketplace.

manifest.yml

declared-services:
  redis-session:
    label: rediscloud
    plan: 30mb
applications:
 - name: pas-sbsessions
   memory: 512M
   instances: 2
   path: ./target/SpringBootHTTPSession-0.0.1-SNAPSHOT.jar
   host: pas-sbsessions
   domain: mybluemix.net
   buildpack: java_buildpack
   services:
    - redis-session


Note: Declared services is an IBM extension of the standard Cloud Foundry manifest format. This extension might be revised in a future release as the feature evolves and improves.
Categories: Fusion Middleware

JavaScript on the App Server: Scripting PeopleCode

Jim Marion - Wed, 2015-09-30 21:25

It has been nearly a decade since I started playing with JavaScript on the PeopleSoft application server. Back then I had to deploy a couple of JAR files to the app server. At that time, maintaining and deploying unmanaged files seemed more headache than benefit. Today Java provides full scripting support through the ScriptEngineManager and embedded Mozilla Rhino JavaScript script engine. Why would I want to script PeopleCode? Here are a few of my favorite reasons:

  • Low-level socket communication
  • Avoid reflection: JavaScript executes all methods regardless of variable type whereas PeopleCode only recognizes the returned type, not the real type
  • Process simple JSON structures that can't be modeled with the Documents module

Here is the PeopleCode required to invoke JavaScript:

Local JavaObject &manager = CreateJavaObject("javax.script.ScriptEngineManager");
Local JavaObject &engine = &manager.getEngineByName("JavaScript");

REM ** Evaluate a simple JavaScript;
&engine.eval("var result = Math.random();");

REM ** Access the value of the JavaScript variable named result;
Local string &result_text = &engine.get("result").toString();

Here is some JavaScript that converts the variable &json_string into a JSON Array and then iterates over each entry, inserting values into a table. Notice that I'm invoking the PeopleCode SQLExec function from JavaScript.

var result = (function() {
    // expose the PeopleCode SQLExec function to JavaScript
    var SQLExec = Packages.PeopleSoft.PeopleCode.Func.SQLExec;
    var json = JSON.parse(json_string);
    var count = 0;
    json.forEach(function(item, idx) {
        // bind the entry index and value into the insert statement
        SQLExec("INSERT INTO ... SYSTIMESTAMP", [idx, item]);
        count++;
    });
    return count + " rows inserted";
}());

Where did that &json_string variable come from? Here:

&engine.put("json_string", "[""item1"", ""item2"", ""item3""]");

OpenWorld 2015 Conference Schedule

Jim Marion - Wed, 2015-09-30 14:52

Just a couple more weeks and we will be enjoying the great weather and hospitality of San Francisco. I am anxiously anticipating another great OpenWorld conference. As always, I look forward to meeting with you between sessions or in the demo grounds. I will be presenting "PeopleSoft Developer: Tips and Techniques [CON8596]" on Monday, Oct 26 at 12:15 p.m. in Moscone West—3007.

I find the OpenWorld/JavaOne content catalog a little intimidating. If you are presenting a PeopleTools topic, please post your session details in the comments below to help the rest of us find PeopleTools-related sessions.

DOAG Conference Presentation Summary Finished

Dietmar Aust - Wed, 2015-09-30 14:48
Hi guys,

today was the deadline for uploading the four-page abstract / summary of my presentation at the German Oracle conference in Nürnberg:

If you still have to upload yours, hurry up ... only a few hours left ;).

You can have a look here (it is German though ;). 

See you in Nürnberg in November.

Cheers,
~Dietmar.


DAM tools, IBM Guardium, Oracle E-Business Suite, PeopleSoft and SAP

A question we have answered a few times in the last few months is whether, and if so how easily, Database Activity Monitoring (DAM) tools such as IBM Guardium support ERP platforms such as the Oracle E-Business Suite, PeopleSoft, and SAP. The answer is yes; DAM tools can support ERP systems. For example, IBM Guardium has out-of-the-box policies for both the E-Business Suite and SAP – see figures one and two below.

There are many advantages to deploying a DAM solution to protect your ERP platform, the first being additional defense-in-depth for one of your most critical assets. You can read more about Integrigy’s recommendations for database security programs here (Integrigy Guide to Auditing and Logging in Oracle E-Business Suite). DAM solutions allow for complex reporting as well as 24x7 monitoring and easy relaying of alerts to your SIEM (e.g. Splunk or ArcSight).

Deploying a DAM solution to protect your SAP, PeopleSoft, or E-Business Suite environment is not a plug-and-play exercise. IBM Guardium’s out-of-the-box policies for the E-Business Suite require configuration to be of any value – see figure three below. The out-of-the-box DAM policies are a good starting point, and Integrigy rarely sees them implemented as is. Integrigy also highly recommends completing a sensitive data discovery project, if at all possible, prior to designing your initial DAM policies. Such projects greatly help to define requirements as well as offer opportunities for data clean-up.

Overall, designing and implementing an initial set of Guardium policies for the E-Business Suite (or any other ERP package) usually takes a few weeks of effort, depending on your size and complexity.

If you have any questions, please contact us at info@integrigy.com.

Figure 1- Seeded Guardium Policies for EBS and SAP

Figure 2- Guardium E-Business Suite PCI Policy

Figure 3- Example of Blank Configuration


Auditing, Oracle E-Business Suite, IBM Guardium
Categories: APPS Blogs, Security Blogs

PeopleTools Mobile Book Now Shipping

Jim Marion - Wed, 2015-09-30 10:02

I received notice yesterday that our latest book, PeopleSoft PeopleTools: Mobile Applications Development (Oracle Press) 1st Edition, is now shipping. Probably the most exciting news is that Amazon has the book listed as the #1 New Release in the Oracle Databases category.

delete all data

Laurent Schneider - Tue, 2015-09-29 09:53

How do you delete all data? The simplistic approach would be to truncate all tables


SQL> select table_name from user_tables;
TABLE_NAME
----------
T1
SQL> truncate table t1;
Table truncated.

You cannot truncate if you have referential integrity constraints.


SQL> truncate table t2;
ORA-02266: unique/primary keys in table 
  referenced by enabled foreign keys

Ok, let’s disable the RIC


SQL> select table_name, constraint_name
  from user_constraints
  where constraint_type='R';
TAB CONSTRAINT
--- ----------
T3  SYS_C00107
SQL> alter table t3 disable constraint SYS_C00107;
Table altered.
SQL> truncate table t2;
Table truncated.
SQL> truncate table t3;
Table truncated.

You cannot truncate cluster tables


SQL> truncate table t4;
ORA-03292: Table to be truncated is part of a cluster

Cluster tables can be truncated with TRUNCATE CLUSTER.


SQL> select cluster_name from user_clusters;
CLUSTER_NAME                  
------------
C                             
SQL> truncate cluster c;
Cluster truncated.

The code above doesn’t work with partitioned clusters (12.1.0.2) because they were not properly implemented at the time of writing.
Check Bug 20284579 : CAN NOT QUERY DYNAMIC CLUSTER PARTITIONS

For reference partitioning, it is not possible to disable the foreign key


SQL> alter table t6 disable constraint fk;
ORA-14650: operation not supported for 
  reference-partitioned tables

In 12c, if the foreign key is defined with ON DELETE CASCADE, you can truncate cascade the parent.


SQL> select table_name, REF_PTN_CONSTRAINT_NAME
  from user_part_tables 
  where partitioning_type='REFERENCE';
TAB REF
--- ---
T6  FK 
SQL> select r_constraint_name, delete_rule 
  from user_constraints 
  where constraint_name='FK';
R_CON DELETE_RULE
----- -----------
PK    CASCADE    
SQL> select table_name
  from user_constraints 
  where constraint_name='PK';
TAB
---
T5 
SQL> truncate table t5 cascade;
Table truncated.

But if one of the child or grandchild tables uses reference partitioning without ON DELETE CASCADE, then the parent or grandparent cannot be truncated. And TRUNCATE CASCADE for reference partitioning is not documented (yet).

But there is a very nice alternative to TRUNCATE called DELETE 🙂


SQL> select table_name, REF_PTN_CONSTRAINT_NAME
  from user_part_tables 
  where partitioning_type='REFERENCE';
TAB REF
--- ---
T8  FK 
SQL> select r_constraint_name, delete_rule 
  from user_constraints 
  where constraint_name='FK';
R_CON DELETE_RULE
----- -----------
PK    NO ACTION  
SQL> select table_name
  from user_constraints 
  where constraint_name='PK';
TAB
---
T7 
SQL> truncate table t7 cascade;
ORA-14705: unique or primary keys referenced by enabled foreign keys in table "SCOTT"."T8"
SQL> truncate table t8;
Table truncated.
SQL> delete from t7;
2 rows deleted

To get the tables in the right order, parent tables after children, you can use a hierarchical query and then order by rownum desc, a construct I’m using for the first time, I confess. Note that the leaf tables are truncatable.


select c_owner owner, child table_name   
FROM 
  (
    SELECT 
      p_OWNER, parent, nvl(c_owner, a.owner) c_owner,
      nvl(child, a.table_name ) child
    FROM 
    (
      SELECT 
        PT.OWNER P_owner, pt.table_name parent, 
        pt2.owner c_owner, pt2.table_name child
      FROM all_part_tables pt
      JOIN all_constraints c
      ON pt.OWNER = c.owner
        AND PT.TABLE_NAME = c.table_name
        AND c.constraint_type = 'P'
        AND c.status = 'ENABLED'
      JOIN all_constraints r
      ON r.r_owner = c.owner
        AND r.r_constraint_name = c.constraint_name
        AND r.constraint_type = 'R'
        AND r.status = 'ENABLED'
      JOIN all_part_tables pt2
      ON r.owner = pt2.owner
        AND r.constraint_name = pt2.REF_PTN_CONSTRAINT_NAME
        AND pt2.partitioning_type = 'REFERENCE'
    ) t
    RIGHT JOIN all_tables a 
    ON child = table_name and a.owner = T.c_OWNER
  )
where connect_by_isleaf=0  
CONNECT BY parent = PRIOR child and p_owner=PRIOR c_owner
start with parent is null 
order by rownum desc;

OWNER TAB
----- ---
SCOTT T10 
SCOTT T9

Note that the query above is very slow. If dictionary performance is an issue, maybe we could just delete from all tables, catch exceptions, and loop until all tables are empty


SQL> delete from t9;
ORA-02292: integrity constraint (SCOTT.F10) violated - child record found
SQL> delete from t10;
ORA-02292: integrity constraint (SCOTT.F11) violated - child record found
SQL> delete from t11;
1 row deleted.
SQL> delete from t9;
ORA-02292: integrity constraint (SCOTT.F10) violated - child record found
SQL> delete from t10;
1 row deleted.
SQL> delete from t11;
0 row deleted.
SQL> delete from t9;
1 row deleted.
SQL> delete from t10;
0 row deleted.
SQL> delete from t11;
0 row deleted.
SQL> delete from t9;
0 row deleted.
SQL> delete from t10;
0 row deleted.
SQL> delete from t11;
0 row deleted.

If you have close to zero reference-partitioning tables, this approach will be more efficient.
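
For completeness, here is a minimal PL/SQL sketch of that delete-and-retry loop, assuming the goal is to empty every table in the current schema (the exception handler simply skips tables whose children still hold rows):

declare
  l_deleted_rows pls_integer;
begin
  loop
    l_deleted_rows := 0;
    for t in (select table_name from user_tables) loop
      begin
        execute immediate 'delete from "' || t.table_name || '"';
        l_deleted_rows := l_deleted_rows + sql%rowcount;
      exception
        when others then
          null; -- e.g. ORA-02292: child records still present, retry next pass
      end;
    end loop;
    exit when l_deleted_rows = 0; -- a full pass deleted nothing: we are done
  end loop;
  commit;
end;
/

Because it exits as soon as a full pass makes no progress, tables that can never be emptied this way (such as the reference-partitioned case above) will not make the loop hang.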
