Feed aggregator

Extend your SOA Domain with Insight

Darwin IT - Mon, 2016-04-11 12:43
Lately I wrote about how to install RealTime Integration Business Insight. That post was really about installing the software; the quickstart tells you that you also have to extend your domain.

It also states that you can install it into your SOA QuickStart installation, but I didn't try that (yet).

However, you need to extend your domain with the following items:
  • Insight SOA Agent 12.2.1 [soa]
  • Insight Service Bus Agent 12.2.1 [osb]
  • Insight 12.2.1 [soa]
To do so, shut down your domain (if you haven't already), but start your infra database or leave it running; as I found out, it needs to be up during the domain extension.
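For example, stopping the domain and making sure the database is up could look like this (a sketch only; the DOMAIN_HOME value is an assumption based on the nodemanager path in the script below):

export DOMAIN_HOME=/u01/app/work/domains/soabpm12c_dev   # assumed domain location
$DOMAIN_HOME/bin/stopWebLogic.sh                         # stops the AdminServer; stop any managed servers first
lsnrctl start                                            # make sure the listener is running
echo "startup" | sqlplus / as sysdba                     # start the infra database instance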

Set your FMW environment, as captured in my fmw12c_env.sh script:
[oracle@darlin-vce-db bin]$ cat ~/bin/fmw12c_env.sh
#!/bin/bash
echo set Fusion MiddleWare 12cR2 environment
export JAVA_HOME=/usr/java/jdk1.8.0_74
export FMW_HOME=/u01/app/oracle/FMW12210
export WL_HOME=${FMW_HOME}/wlserver
export NODEMGR_HOME=/u01/app/work/domains/soabpm12c_dev/nodemanager

export SOA_HOME=$FMW_HOME/soa
export OSB_HOME=$FMW_HOME/osb
export MFT_HOME=$FMW_HOME/mft
#
echo call setWLSEnv.sh
. $FMW_HOME/wlserver/server/bin/setWLSEnv.sh
export PATH=$FMW_HOME/oracle_common/common/bin:$WL_HOME/common/bin/:$WL_HOME/server/bin:$PATH
[oracle@darlin-vce-db bin]$
... and navigate to the $FMW_HOME/oracle_common/common/bin folder and start config.sh:

[oracle@darlin-vce-db ~]$ . fmw12c_env.sh
set Fusion MiddleWare 12cR2 environment
call setWLSEnv.sh
CLASSPATH=/usr/java/jdk1.8.0_74/lib/tools.jar:/u01/app/oracle/FMW12210/wlserver/modules/features/wlst.wls.classpath.jar:

PATH=/u01/app/oracle/FMW12210/wlserver/server/bin:/u01/app/oracle/FMW12210/wlserver/../oracle_common/modules/org.apache.ant_1.9.2/bin:/usr/java/jdk1.8.0_74/jre/bin:/usr/java/jdk1.8.0_74/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/home/oracle/.local/bin:/home/oracle/bin:/u01/app/oracle/FMW12210/wlserver/../oracle_common/modules/org.apache.maven_3.2.5/bin

Your environment has been set.
[oracle@darlin-vce-db ~]$ cd $FMW_HOME/oracle_common/common/bin
[oracle@darlin-vce-db bin]$ ls
clonedunpack.sh config_builder.sh pack.sh reconfig.sh
commBaseEnv.sh config.sh prepareCustomProvider.sh setHomeDirs.sh
commEnv.sh configWallet.sh printJarVersions.sh unpack.sh
commExtEnv.sh getproperty.sh qs_config.sh wlst.sh
[oracle@darlin-vce-db bin]$ ./config.sh

In the first screen set the radio button to 'Update an existing domain':

Then click Next, and check the items listed above:

Click Next, Next, ... Finish.
If you check the 'Deployments' checkbox under Advanced Configuration, you can review that the particular deployments are automatically targeted to the BAM, OSB and SOA clusters.

After this you can start your servers and start using Insight, for example beginning with the setup of the Insight Demo Users. This is properly described in the Quickstart Guide. But since I'm at it, let me try it right away. The demo users setup is downloadable here. Download it and unzip it into a folder on your server.

First we'll have to set the environment. So I call my neat fmw12c_env.sh script first (in a new terminal), and explicitly set the $MW_HOME variable:
[oracle@darlin-vce-db bin]$ . fmw12c_env.sh
set Fusion MiddleWare 12cR2 environment
call setWLSEnv.sh
CLASSPATH=/usr/java/jdk1.8.0_74/lib/tools.jar:/u01/app/oracle/FMW12210/wlserver/modules/features/wlst.wls.classpath.jar:

PATH=/u01/app/oracle/FMW12210/wlserver/server/bin:/u01/app/oracle/FMW12210/wlserver/../oracle_common/modules/org.apache.ant_1.9.2/bin:/usr/java/jdk1.8.0_74/jre/bin:/usr/java/jdk1.8.0_74/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/home/oracle/.local/bin:/home/oracle/bin:/u01/app/oracle/FMW12210/wlserver/../oracle_common/modules/org.apache.maven_3.2.5/bin

Your environment has been set.
[oracle@darlin-vce-db bin]$ export MW_HOME=$FMW_HOME
[oracle@darlin-vce-db bin]$ echo $MW_HOME
/u01/app/oracle/FMW12210
[oracle@darlin-vce-db bin]$ echo $JAVA_HOME
/usr/java/jdk1.8.0_74
[oracle@darlin-vce-db bin]$ echo $ANT_HOME
/u01/app/oracle/FMW12210/wlserver/../oracle_common/modules/org.apache.ant_1.9.2

We're going to call an ant script that apparently needs the following variables set:
  • MW_HOME= <Middleware home of the environment>
  • JAVA_HOME= <Location of java home>
  • ANT_HOME=$MW_HOME/oracle_common/modules/org.apache.ant_1.9.2
  • PATH=$JAVA_HOME/bin:$ANT_HOME/bin:$PATH
The first one is not set by my script (I called it $FMW_HOME), so I needed to set $MW_HOME to $FMW_HOME; the last three are set by my script, as the sketch below shows.
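Putting it together (a minimal sketch, using the same paths as my environment above):

export MW_HOME=/u01/app/oracle/FMW12210
export JAVA_HOME=/usr/java/jdk1.8.0_74
export ANT_HOME=$MW_HOME/oracle_common/modules/org.apache.ant_1.9.2
export PATH=$JAVA_HOME/bin:$ANT_HOME/bin:$PATH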

Running the script against a developer-topology domain (everything in the AdminServer or DefaultServer of the SOA QuickStart) will probably go OK. But stubborn guy that I am, I tried this in a more production-like topology with separate SOA, OSB and BAM clusters. It turns out that you need to adapt the insight.properties file that is in the bin folder of the InsightDemoUserCreation.zip (even if you're not like me, you'll need to review it...).
After editing, mine looks like:

#Insight FOD Automation file

wls.host = darlin-vce-db
wls.port = 7001
soa_server_port = 7005
bam_server_port = 7006
userName = weblogic
passWord = welcome1
oracle_jdbc_url = jdbc:oracle:thin:@darlin-vce-db:1521:ORCL
db_soa_user = DEV_SOAINFRA
oracle_db_password = DEV_SOAINFRA
db_mds_user = DEV_MDS
mds.password = DEV_MDS
jdbc_driver = oracle.jdbc.OracleDriver

When all is right, you can run:
[oracle@darlin-vce-db bin]$ cd /media/sf_Stage/InsightDemoUserCreation/bin/
[oracle@darlin-vce-db bin]$ ant createInsightDemoUsers

Unfortunately I can't show you the correct output since, although I seem to have set my properties correctly, I got failures. It turned out that my server (all in one VM) ran so slowly that Insight could not be started, due to timeouts in getting a database connection....
After restarting BAM all went well, except for the exceptions indicating that the users were already created.
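If you need to bounce just the BAM managed server from the command line, it could look like this (a sketch; the server name bam_server1 and the domain location are assumptions, while the host/port come from the insight.properties above):

export DOMAIN_HOME=/u01/app/work/domains/soabpm12c_dev    # assumed domain location
$DOMAIN_HOME/bin/stopManagedWebLogic.sh bam_server1 t3://darlin-vce-db:7001
$DOMAIN_HOME/bin/startManagedWebLogic.sh bam_server1 t3://darlin-vce-db:7001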

Taking IBM Bluemix OpenWhisk for a Test Drive

Pas Apicella - Mon, 2016-04-11 05:27

OpenWhisk is a new event-driven platform that lets developers quickly and easily build feature-rich apps that automatically trigger responses to events. To read more about it, view the link below. In this simple example we will explore its use from IBM Bluemix by returning today's date.

https://developer.ibm.com/open/openwhisk/

Steps

1. Login to Bluemix using http://bluemix.net

2. Click on "Try OpenWhisk" as shown below


3. Once logged in to the new Bluemix Console you should see a screen as follows


At this point we can only use OpenWhisk from the command line.

4. Click on "Configure CLI" button to install it

5. Once installed, you can verify the CLI works as follows

pasapicella@pas-macbook-pro:~/ibm/bluemix/openwhisk/actions$ wsk --help
usage: wsk [-h] [-v] [--apihost hostname] [--apiversion version]
           {action,activation,namespace,package,rule,trigger,sdk,property,list}
           ...

OpenWhisk is a distributed compute service to add event-driven logic to your
apps.

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose         verbose output
  --apihost hostname    whisk API host
  --apiversion version  whisk API version

available commands:
  {action,activation,namespace,package,rule,trigger,sdk,property,list}
    action              work with actions
    activation          work with activations
    namespace           work with namespaces
    package             work with packages
    rule                work with rules
    trigger             work with triggers
    sdk                 work with the SDK
    property            work with whisk properties
    list                list all triggers, actions, and rules in the registry

Learn more at https://developer.ibm.com/openwhisk fork on GitHub
https://github.com/openwhisk. All trademarks are the property of their
respective owners.


6. Now let's set your OpenWhisk namespace and authorization key. They are provided when you are taken to the CLI install page for OpenWhisk; I simply created a script for this as follows

pasapicella@pas-macbook-pro:~/ibm/bluemix/openwhisk$ cat set-namespace-key
wsk property set --auth your-rather-long-key --namespace "pasapi@au1.ibm.com_dev"

Once run you get output as follows

pasapicella@pas-macbook-pro:~/ibm/bluemix/openwhisk$ ./set-namespace-key
ok: whisk auth set
ok: namespace set to pasapi@au1.ibm.com_dev


7.  Now create an OpenWhisk JavaScript function as shown below

todaysdate.js

/**
 * Todays Date as an OpenWhisk action.
 */
function main(params) {
    var currentTime = new Date();
    return {payload:  'Todays date is, ' + currentTime + '!'};
}

8. Create the OpenWhisk action as shown below

pasapicella@pas-macbook-pro:~/ibm/bluemix/openwhisk/actions$ wsk action create todaysdate todaysdate.js
ok: created action todaysdate


9. Invoke the action as shown below

pasapicella@pas-macbook-pro:~/ibm/bluemix/openwhisk/actions$  wsk action invoke todaysdate --blocking --result
{
    "payload": "Todays date is, Mon Apr 11 2016 10:14:56 GMT+0000 (UTC)!"
}
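As an aside, you can also pass parameters to an action at invoke time with --param (a hypothetical example; todaysdate ignores its parameters, but a function reading them from main(params) would pick them up):

wsk action invoke todaysdate --blocking --result --param tz UTC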


10. Invoke it again, this time displaying the full response object as shown below

pasapicella@pas-macbook-pro:~/ibm/bluemix/openwhisk/actions$  wsk action invoke todaysdate --blocking
ok: invoked todaysdate with id dbb447b65b06433a8b511187011f5715
response:
{
    "result": {
        "payload": "Todays date is, Mon Apr 11 2016 10:19:07 GMT+0000 (UTC)!"
    },
    "status": "success",
    "success": true
}


11. We can view our current actions using the following command; you can see we have added the "todaysdate" function

pasapicella@pas-macbook-pro:~/ibm/bluemix/openwhisk/actions$ wsk list
entities in namespace: pasapi@au1.ibm.com_dev
packages
actions
/pasapi@au1.ibm.com_dev/todaysdate                            private
/pasapi@au1.ibm.com_dev/hello                                 private
triggers
rules
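If you later change todaysdate.js, you can redeploy it with "wsk action update" and remove an action with "wsk action delete" (a sketch, assuming the same action name):

wsk action update todaysdate todaysdate.js   # push the edited function
wsk action delete todaysdate                 # remove the action from the namespace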


12. Now let's return to the Bluemix Web Console and click on "Inspect activity metrics and logs".
Here you can view what you have invoked


There is clearly more to OpenWhisk than just this, but for more information the IBM Bluemix docs have some more demos to try out.

https://new-console.ng.bluemix.net/docs/openwhisk/index.html
Categories: Fusion Middleware

installing Tomcat through bitnami

Pat Shuff - Mon, 2016-04-11 02:07
This week we are going to focus on installing Tomcat onto cloud servers. Today we are going to take the easy route: we will use Bitnami and look at how long everything takes, as well as the automatic configuration that it sets up. In previous lessons we talked about linking your cloud accounts to Bitnami and will not repeat those instructions. For those that are new to open source software, Tomcat is an open source package that allows you to host Java applications, similar to WebLogic. I won't get into a debate over which is better, because we will be covering how to install WebLogic in a later blog. I will give you all the information that you need to decide which is best for your company and implementation.

We login to our oracle.bitnami.com web site and verify our account credentials.

We want to launch a Tomcat server, so we search for Tomcat and hover over the icon. When we do, the word Launch appears and we click on this button.

Once we click Launch we get the virtual machine configuration screen.

Things to note on this screen are

  • the name is what you want to use to identify the virtual machine
  • the cloud account identifies which data center, if it is metered or un-metered, and what shapes will be available to this virtual machine
  • the network is automatically configured for ports 80 and 443 and enabled not only in the cloud network security configuration but in the operating system as well
  • the operating system gives you the option but we default to OEL 6.7
  • we could increase the disk size and select the memory/cpu option, but it does not show us the cost, because Bitnami does not know if your account is metered or un-metered, and the two have different costing models.

After we click the create button we get an update that shows us how the installation is progressing. The installation took just under 15 minutes to finish everything, launch the instance, and show us the configuration.

Once everything finishes we get the ip address, the passwords, and the ssh keys that were used to create this virtual machine.

We are able to open the link to the Tomcat server by clicking on the Go To The Application on the top right of the screen. This allows us to see the splash screen as well as access the management console.

When you click on the Access my Application you get the detailed information about the Tomcat server. We can go to the management console and look at the configuration as well as bring the server up and down.

At this point we have a valid configuration that we can see across the internet. The whole process took 15 minutes and did not require any editing other than selecting the configuration options and giving the virtual machine a name.
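If you prefer the command line, a quick sanity check of the new instance could look like this (a sketch; 203.0.113.10 is a placeholder for the IP address that Bitnami reported):

curl -I http://203.0.113.10/      # expect an HTTP 200 from the Tomcat splash page
curl -Ik https://203.0.113.10/    # port 443 was opened by the configuration as well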

Open Source Machine Learning for Oracle Developers

Gerger Consulting - Mon, 2016-04-11 01:32
Attend the webinar by Christy Maver and Scott Purdy and learn how you can apply Numenta's open source machine intelligence technology to fraud detection, anomaly detection, IT monitoring, geospatial data and more.

More than 110 people have already signed up. Register at this link.

Listen to Numenta's story from Jeff Hawkins:

Categories: Development

New A-Team Mobile Persistence Accelerator (AMPA) for Mobile Application Framework

The recent Oracle MAF 2.3 release, already available on OTN, is a major update of MAF coming less than 6 months after the last major release. This release has several new & exciting features,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Monitoring ADF 12c Client Request Time with Click History

Andrejus Baranovski - Sun, 2016-04-10 12:55
You must be excited to read this post: I will describe one very useful feature available in ADF 12c - Click History. You can follow the steps described by Duncan Mills to enable and read click history data in the server log. There is one more option available - we can read click history data from the ADF request (captured by a filter class) and log it in a custom way (for later analysis). Click history gives such information as client request start/end time (you can calculate client request duration), component client ID, component type, action event type, etc. All this is useful for understanding application performance as perceived by the client.

Download the sample application - DashboardApp.zip (it contains JET libraries, to render JET content within ADF). To test click history and view the generated statistics, click on any button or selection component. Click history returns the previous action's statistics. Let's click on the Cancel button:


You should see a similar XML message printed in the log (click history statistics data):


To view it in a readable way, you can copy it into an XML file and format it in JDEV:


Logged properties, useful to track client request performance:

1. CID - ECID number, identifies request
2. CST - client request start time
3. CET - client request end time
4. ETY - client event type
5. CLD - client component ID
6. CTY - client component type

The XML message can be parsed, and the property values can be logged to the DB for performance history analysis.

To enable click history, you only need to set the oracle.adf.view.faces.context.ENABLE_ADF_EXECUTION_CONTEXT_PROVIDER parameter to true in web.xml:


To read click history data from the ADF request, we need to define a custom filter in web.xml. Register the filter under the Faces Servlet:


In the filter, override doFilter and retrieve the oracle.adf.view.rich.monitoring.UserActivityInfo parameter (this will return the click history XML string, as above):

FBDA -- 6 : Some Bug Notes

Hemant K Chitale - Sun, 2016-04-10 10:27
Some MoS documents on FBDA Bugs

1.  Bug 16454223  :  Wrong Results  (more rows than expected)

2.  Bug 16898135  :  FBDA does not split partitions  (resulting in rows not being purged)

3.  Bug 18294320  :   ORA-01555 (ORA-2475) on SMON_SCN_TIME

4.  Bug 22456983  :   Limit on SMON_SCN_TIME affecting FBDA

5.  Document 2039070.1 :  Known Issues with Flashback Data Archive
.
.
.




Categories: DBA Blogs

FBDA -- 5 : Testing AutoPurging

Hemant K Chitale - Sun, 2016-04-10 10:06
Tracking data changes after one row was added (ID_COLUMN=2000) on 06-Apr:

SQL> select systimestamp from dual;

SYSTIMESTAMP
---------------------------------------------------------------------------
06-APR-16 10.53.20.328132 PM +08:00

SQL> select scn_to_timestamp(startscn), scn_to_timestamp(endscn), count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
4 order by 1,2;

SCN_TO_TIMESTAMP(STARTSCN)
---------------------------------------------------------------------------
SCN_TO_TIMESTAMP(ENDSCN)
---------------------------------------------------------------------------
COUNT(*)
----------
02-APR-16 11.32.55.000000000 PM
02-APR-16 11.46.11.000000000 PM
450

02-APR-16 11.32.55.000000000 PM
03-APR-16 11.45.24.000000000 PM
550

02-APR-16 11.46.11.000000000 PM
03-APR-16 11.41.33.000000000 PM
5

02-APR-16 11.46.11.000000000 PM
03-APR-16 11.45.24.000000000 PM
445

03-APR-16 11.41.33.000000000 PM
03-APR-16 11.45.24.000000000 PM
5

03-APR-16 11.45.24.000000000 PM
04-APR-16 11.05.33.000000000 PM
1000

06-APR-16 10.40.43.000000000 PM
06-APR-16 10.42.54.000000000 PM
1


7 rows selected.

SQL>
SQL> select count(*) from sys_fba_tcrv_93250;

COUNT(*)
----------
1002

SQL>


More changes on 07-Apr


SQL> insert into test_fbda
2 select 3000, to_char(3000), trunc(sysdate)
3 from dual;

1 row created.

SQL> commit;

Commit complete.

SQL> update test_fbda
2 set date_inserted=date_inserted
3 where id_column=3000;

1 row updated.

SQL> delete test_fbda
2 where id_column < 1001 ;

1000 rows deleted.

SQL> commit;

Commit complete.

SQL>
SQL> l
1 select scn_to_timestamp(startscn) starttime, scn_to_timestamp(endscn) endtime, count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
4* order by 1,2,3
SQL> /

STARTTIME ENDTIME COUNT(*)
-------------------------------- -------------------------------- ----------
02-APR-16 11.32.55.000000000 PM 02-APR-16 11.46.11.000000000 PM 450
02-APR-16 11.32.55.000000000 PM 03-APR-16 11.45.24.000000000 PM 550
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.41.33.000000000 PM 5
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.45.24.000000000 PM 445
03-APR-16 11.41.33.000000000 PM 03-APR-16 11.45.24.000000000 PM 5
03-APR-16 11.45.24.000000000 PM 04-APR-16 11.05.33.000000000 PM 1000
04-APR-16 11.09.43.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
06-APR-16 10.40.43.000000000 PM 06-APR-16 10.42.54.000000000 PM 1
07-APR-16 10.27.35.000000000 PM 07-APR-16 10.28.03.000000000 PM 1
07-APR-16 10.28.03.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000

10 rows selected.

SQL>
SQL> l
1 select id_column, trunc(date_inserted), count(*)
2 from test_fbda
3 group by id_column, trunc(date_inserted)
4* order by 1
SQL> /

ID_COLUMN TRUNC(DAT COUNT(*)
---------- --------- ----------
2000 06-APR-16 1
3000 07-APR-16 1

SQL>


I see two new 1000-row sets (04-Apr and 07-Apr). I should expect only one.

Now that rows for ID_COLUMN less than 1001 have been deleted on 07-Apr, we have to see if and when they get purged from the History table.


On 09-Apr:

SQL> insert into test_fbda
2 select 4000, to_char(4000),trunc(sysdate)
3 from dual;

1 row created.

SQL> commit;

Commit complete.

SQL> update test_fbda
2 set date_inserted=date_inserted
3 where id_column=4000;

1 row updated.

SQL> commit;

Commit complete.

SQL> l
1 select scn_to_timestamp(startscn) starttime, scn_to_timestamp(endscn) endtime, count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
4* order by 1,2,3
SQL> /

STARTTIME ENDTIME COUNT(*)
-------------------------------- -------------------------------- ----------
02-APR-16 11.32.55.000000000 PM 02-APR-16 11.46.11.000000000 PM 450
02-APR-16 11.32.55.000000000 PM 03-APR-16 11.45.24.000000000 PM 550
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.41.33.000000000 PM 5
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.45.24.000000000 PM 445
03-APR-16 11.41.33.000000000 PM 03-APR-16 11.45.24.000000000 PM 5
03-APR-16 11.45.24.000000000 PM 04-APR-16 11.05.33.000000000 PM 1000
04-APR-16 11.09.43.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
06-APR-16 10.40.43.000000000 PM 06-APR-16 10.42.54.000000000 PM 1
07-APR-16 10.27.35.000000000 PM 07-APR-16 10.28.03.000000000 PM 1
07-APR-16 10.28.03.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
09-APR-16 11.10.25.000000000 PM 09-APR-16 11.10.48.000000000 PM 1

11 rows selected.

SQL>
SQL> select * from user_flashback_archive
2 /

OWNER_NAME
------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE# RETENTION_IN_DAYS
------------------ -----------------
CREATE_TIME
---------------------------------------------------------------------------
LAST_PURGE_TIME
---------------------------------------------------------------------------
STATUS
-------
SYSTEM
FBDA
1 3
02-APR-16 11.24.39.000000000 PM
02-APR-16 11.24.39.000000000 PM



SQL>


As of the morning of 10-Apr (after leaving the database instance running overnight):

SQL> select scn_to_timestamp(startscn) starttime, scn_to_timestamp(endscn) endtime, count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
4 order by 1,2,3
5 /

STARTTIME ENDTIME COUNT(*)
-------------------------------- -------------------------------- ----------
02-APR-16 11.32.55.000000000 PM 02-APR-16 11.46.11.000000000 PM 450
02-APR-16 11.32.55.000000000 PM 03-APR-16 11.45.24.000000000 PM 550
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.41.33.000000000 PM 5
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.45.24.000000000 PM 445
03-APR-16 11.41.33.000000000 PM 03-APR-16 11.45.24.000000000 PM 5
03-APR-16 11.45.24.000000000 PM 04-APR-16 11.05.33.000000000 PM 1000
04-APR-16 11.09.43.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
06-APR-16 10.40.43.000000000 PM 06-APR-16 10.42.54.000000000 PM 1
07-APR-16 10.27.35.000000000 PM 07-APR-16 10.28.03.000000000 PM 1
07-APR-16 10.28.03.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
09-APR-16 11.10.25.000000000 PM 09-APR-16 11.10.48.000000000 PM 1

11 rows selected.

SQL> select systimestamp from dual
2 /

SYSTIMESTAMP
---------------------------------------------------------------------------
10-APR-16 08.51.29.398107 AM +08:00

SQL>
SQL> select * from user_flashback_archive
2 /

OWNER_NAME
------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE# RETENTION_IN_DAYS
------------------ -----------------
CREATE_TIME
---------------------------------------------------------------------------
LAST_PURGE_TIME
---------------------------------------------------------------------------
STATUS
-------
SYSTEM
FBDA
1 3
02-APR-16 11.24.39.000000000 PM
02-APR-16 11.24.39.000000000 PM



SQL>


So auto-purge of the data as of earlier days (02-Apr to 06-Apr) hasn't yet kicked in? Let's try a manual purge.

SQL> alter flashback archive fbda purge before timestamp (sysdate-4);

Flashback archive altered.

SQL> select * from user_flashback_archive;

OWNER_NAME
------------------------------
FLASHBACK_ARCHIVE_NAME
--------------------------------------------------------------------------------
FLASHBACK_ARCHIVE# RETENTION_IN_DAYS
------------------ -----------------
CREATE_TIME
---------------------------------------------------------------------------
LAST_PURGE_TIME
---------------------------------------------------------------------------
STATUS
-------
SYSTEM
FBDA
1 3

05-APR-16 11.52.16.000000000 PM



SQL>
SQL> ! sleep 300
SQL> l
1 select scn_to_timestamp(startscn) starttime, scn_to_timestamp(endscn) endtime, count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
4* order by 1,2,3
SQL> /

STARTTIME ENDTIME COUNT(*)
-------------------------------- -------------------------------- ----------
02-APR-16 11.32.55.000000000 PM 02-APR-16 11.46.11.000000000 PM 450
02-APR-16 11.32.55.000000000 PM 03-APR-16 11.45.24.000000000 PM 550
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.41.33.000000000 PM 5
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.45.24.000000000 PM 445
03-APR-16 11.41.33.000000000 PM 03-APR-16 11.45.24.000000000 PM 5
03-APR-16 11.45.24.000000000 PM 04-APR-16 11.05.33.000000000 PM 1000
04-APR-16 11.09.43.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
06-APR-16 10.40.43.000000000 PM 06-APR-16 10.42.54.000000000 PM 1
07-APR-16 10.27.35.000000000 PM 07-APR-16 10.28.03.000000000 PM 1
07-APR-16 10.28.03.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
09-APR-16 11.10.25.000000000 PM 09-APR-16 11.10.48.000000000 PM 1

11 rows selected.

SQL>


Although USER_FLASHBACK_ARCHIVE shows that a purge till 05-Apr (the 11:52pm timestamp is strange) has been done, I still see older rows in the History table.  The query on the active table does correctly exclude the rows that should not be available. 


SQL> select scn_to_timestamp(startscn) starttime, scn_to_timestamp(endscn) endtime, count(*)
2 from sys_fba_hist_93250
3 group by scn_to_timestamp(startscn), scn_to_timestamp(endscn)
4 order by 1,2,3;

STARTTIME ENDTIME COUNT(*)
---------------------------------- ---------------------------------- ----------
02-APR-16 11.32.55.000000000 PM 02-APR-16 11.46.11.000000000 PM 450
02-APR-16 11.32.55.000000000 PM 03-APR-16 11.45.24.000000000 PM 550
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.41.33.000000000 PM 5
02-APR-16 11.46.11.000000000 PM 03-APR-16 11.45.24.000000000 PM 445
03-APR-16 11.41.33.000000000 PM 03-APR-16 11.45.24.000000000 PM 5
03-APR-16 11.45.24.000000000 PM 04-APR-16 11.05.33.000000000 PM 1000
04-APR-16 11.09.43.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
06-APR-16 10.40.43.000000000 PM 06-APR-16 10.42.54.000000000 PM 1
07-APR-16 10.27.35.000000000 PM 07-APR-16 10.28.03.000000000 PM 1
07-APR-16 10.28.03.000000000 PM 07-APR-16 10.28.03.000000000 PM 1000
09-APR-16 11.10.25.000000000 PM 09-APR-16 11.10.48.000000000 PM 1

11 rows selected.

SQL> select * from user_flashback_archive;

OWNER_NAME
------------------------------
FLASHBACK_ARCHIVE_NAME
------------------------------------------------------------------------------------------------------------------------------------
FLASHBACK_ARCHIVE# RETENTION_IN_DAYS CREATE_TIME
------------------ ----------------- ---------------------------------------------------------------------------
LAST_PURGE_TIME STATUS
--------------------------------------------------------------------------- -------
SYSTEM
FBDA
1 3
05-APR-16 11.52.16.000000000 PM


SQL> select systimestamp from dual;

SYSTIMESTAMP
---------------------------------------------------------------------------
10-APR-16 10.52.12.361412 PM +08:00

SQL> select count(*) from test_fbda as of timestamp (sysdate-3);

COUNT(*)
----------
2

SQL>
SQL> select partition_position, high_value
2 from user_tab_partitions
3 where table_name = 'SYS_FBA_HIST_93250'
4 order by 1;

PARTITION_POSITION HIGH_VALUE
------------------ --------------------------------------------------------------------------------
1 MAXVALUE

SQL>



Support Document 16898135.1 states that if Oracle isn't maintaining partitions for the History table, it may not be purging data properly. Even an ALTER FLASHBACK ARCHIVE ... PURGE doesn't purge data (unless PURGE ALL is issued). I'd seen this behaviour in 11.2.0.4. The bug is supposed to have been fixed in 12.1.0.2, but my 12.1.0.2 environment shows the same behaviour. The fact that my test database has very little activity (very few SCNs being incremented) shouldn't matter. The "Workaround" is, of course, unacceptable.
.
.
.
Categories: DBA Blogs

New navigation feature in Cloudera docs: Categories

Tahiti Views - Sun, 2016-04-10 02:54
If you've visited the Cloudera documentation lately, you might have noticed some new links down at the bottom of each page. The doc team implemented a system of wiki-style categories, covering various themes for each page: component names, audience, tasks, features, and more aspects of interest to readers. Let's take an example from the Impala docs: you start by viewing any page related to...

Update to "Getting Started with Impala" in the Pipeline

Tahiti Views - Sun, 2016-04-10 02:38
Sometime soon, there will be an update to the first edition of the "Getting Started with Impala" book from O'Reilly. It'll have about 30 new pages covering new features such as analytic functions, subqueries, incremental statistics, and complex types. This is where the e-version from O'Reilly proves its worth, because existing owners will get a free update. O'Reilly store page | Amazon page

Video : SQL/XML (SQLX) : Generating XML using SQL in Oracle

Tim Hall - Sat, 2016-04-09 10:16

Another video fresh off the press.

If videos aren’t your thing, you can always read the article the video is based on.

The star of this video is Kevin Closson. Kevin’s a really nice guy and has a brain the size of a planet, but you know somewhere in the back of his mind he’s wondering what it would be like to hunt you down, kill you and mount your head above his fireplace.

Create GoldenGate 12.2 Database User

Michael Dinh - Sat, 2016-04-09 09:33

Oracle GoldenGate for Windows and UNIX 12c (12.2.0.1)

First, I am disappointed that Oracle does not go above and beyond to provide SQL scripts to create GoldenGate users for the database.

There are different sets of privileges depending on the version of the database:

4.1.4.2 Oracle 11.2.0.3 or Earlier Database Privileges
4.1.4.1 Oracle 11.2.0.4 or Later Database Privileges

A PDB is not being used here; the setup is different for a PDB.

Depending on whether you want to practice the principle of least privilege, the ggadmin user can be created with privileges for both extract (capture) and replicat (apply), or for just one of them (see the commented-out grants in the script).

Please don’t forget to change the password from the script, since it is hard-coded to be the same as the username :=)

cr_ggadmin_12c.sql
-- 4.1.4.1 Oracle 11.2.0.4 or Later Database Privileges
set echo on lines 200 pages 1000 trimspool on tab off
define _username='GGADMIN'
-- grant privileges for capture
create user &_username identified by &_username default tablespace ggdata;
select DEFAULT_TABLESPACE,TEMPORARY_TABLESPACE from dba_users where username='&_username';
grant create session, connect, resource, alter any table, alter system, dba, select any transaction to &_username;
-- grant privileges for replicat
grant create table, lock any table to &_username;
-- grant both capture and apply
exec dbms_goldengate_auth.grant_admin_privilege('&_username')
-- grant capture
-- exec dbms_goldengate_auth.grant_admin_privilege('&_username','capture');
-- grant apply
-- exec dbms_goldengate_auth.grant_admin_privilege('&_username','apply');
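The "sysdba" command used in the demo below is presumably a shell alias; something along these lines (an assumption on my part, it is not part of the download):

alias sysdba='sqlplus / as sysdba'         # assumed alias used in the demo
sqlplus / as sysdba @cr_ggadmin_12c.sql    # or run the script directly as SYSDBA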

Demo:

oracle@arrow:tiger:/media/sf_working/ggs
$ sysdba @cr_ggadmin_12c.sql

SQL*Plus: Release 11.2.0.4.0 Production on Sat Apr 9 07:06:41 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ARROW:(SYS@tiger):PRIMARY> define _username='GGADMIN'
ARROW:(SYS@tiger):PRIMARY> -- grant privileges for capture
ARROW:(SYS@tiger):PRIMARY> create user &_username identified by &_username default tablespace ggdata;

User created.

ARROW:(SYS@tiger):PRIMARY> select DEFAULT_TABLESPACE,TEMPORARY_TABLESPACE from dba_users where username='&_username';

DEFAULT_TABLESPACE             TEMPORARY_TABLESPACE
------------------------------ ------------------------------
GGDATA                         TEMP

ARROW:(SYS@tiger):PRIMARY> grant create session, connect, resource, alter any table, alter system, dba, select any transaction to &_username;

Grant succeeded.

ARROW:(SYS@tiger):PRIMARY> -- grant privileges for replicat
ARROW:(SYS@tiger):PRIMARY> grant create table, lock any table to &_username;

Grant succeeded.

ARROW:(SYS@tiger):PRIMARY> -- grant both capture and apply
ARROW:(SYS@tiger):PRIMARY> exec dbms_goldengate_auth.grant_admin_privilege('&_username')

PL/SQL procedure successfully completed.

ARROW:(SYS@tiger):PRIMARY> -- grant capture
ARROW:(SYS@tiger):PRIMARY> -- exec dbms_goldengate_auth.grant_admin_privilege('&_username','capture');
ARROW:(SYS@tiger):PRIMARY> -- grant apply
ARROW:(SYS@tiger):PRIMARY> -- exec dbms_goldengate_auth.grant_admin_privilege('&_username','apply');
ARROW:(SYS@tiger):PRIMARY> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
oracle@arrow:tiger:/media/sf_working/ggs
$

Financial Apps Collaborate 16 Sessions

David Haimes - Fri, 2016-04-08 23:59

We’re just two days away from the start of Collaborate and there are so many sessions I want to get to; my focus is on Financial Applications, both Cloud and E-Business Suite. I already listed the sessions where you can find me presenting, but here are ones I think will be interesting; I will attend as many as I can.

Firstly, two cloud customers (Alex Lee and Westmont Hotels) talking about their experiences implementing cloud financials.

Fusion Financials Implementation Customer Success Experience

Monday April 11th 12:45 PM–1:45 PM – South Seas J

Derrick Walters, Corporate Applications Manager at Alex Lee

How Westmont Hospitality Benefited by Leveraging Cloud ERP
3:15 PM–4:15 PM Apr 11, 2016 – South Seas I
Sacha Agostini Oracle Functional Consultant at Vigilant Technologies, LLC.

Next, some AGIS, Legal Entity and related topics on E-Business Suite. In these areas, which have been out for some time, I generally learn something about innovative uses of the products. Our partners and customers are very smart.

Intracompany, Intercompany, AGIS – Unraveling the Mysteries!
2:15 PM–3:15 PM Apr 10, 2016 – South Seas A
Ms Bharati Manjeshwar at Highstreet IT Solutions, LLC

Master Data Series – Legal Entities From 11i thru Cloud
9:15 AM–10:15 AM Apr 12, 2016 – Breakers F
Thomas Simkiss Vice-President of Consulting at Denovo Ventures, LLC

Its Not too Late! How to Replace Your eBTax Solution After You Have Upgraded
10:30 AM–11:30 AM Apr 11, 2016 – South Seas I
Mr Andrew Bohnet, Director at eBiz Answers Ltd

General Ledger and Financial Accounting Hub – A look at a Multi-National Structure
3:30 PM–4:30 PM Apr 10, 2016 – Jasmine H
Ms Bharati Manjeshwar at Highstreet IT Solutions, LLC

Finally, there are sessions called Power Hours, which are strangely two hours, but I really liked the experience last year, and based on the fact they are back I assume others did too. They are not a traditional lecture format; they are more interactive and allow people to discuss their experiences and learn from each other. If you have not tried one, I highly recommend them. Here are a couple that jumped out at me:
Power Hour – Coexistence – On Premise and Cloud Together and In Harmony
3:15 PM–5:30 PM Apr 11, 2016 – Mandalay Bay C
Mohan Iyer Practice Director at Jade Global, Inc.
Power Hour – eBTax Hacks – Your Questions Answered
9:15 AM–11:45 AM Apr 12, 2016 – Mandalay Bay C
Mr Andrew Bohnet Director at eBiz Answers Ltd
Alexander Fiteni President at Fiteni Enterprises Inc
Dev Singh Manager at KPMG LLP Canada
Power Hour – Master Data Structures in EBS and Cloud
12:45 PM–3:00 PM Apr 11, 2016 – Mandalay Bay C
Mohan Iyer Practice Director at Jade Global, Inc.

Categories: APPS Blogs

Cloud Integration Challenges and Opportunities

The rapid shift from on-premise applications to a hybrid mix of Software-as-a-Service (SaaS) and on-premise applications has introduced big challenges for companies attempting to simplify enterprise...

We share our skills to maximize your revenue!
Categories: DBA Blogs

April 2016 Updates to AD for EBS 12.2

Steven Chan - Fri, 2016-04-08 13:44

We have been fine-tuning the administration tools for E-Business Suite 12.2 via a series of regular updates to the Applications DBA (AD) and EBS Technology Stack (TXK) components:

We have now made available a ninth set of critical updates to AD (TXK is unchanged). If you are on Oracle E-Business Suite Release 12.2, we strongly recommend that you apply this new patch at your earliest convenience:



Refer to the following My Oracle Support knowledge document for full installation instructions and associated tasks:

What's New in this Patchset?

This patchset includes six important updates for the following issues:

  • 22700342 - ADOP doesn't allow cutover if standby database is detected
  • 22861407 - Increase the variable size for L_PATCH_SERVICE in AD_ZD_ADOP
  • 22777440 - Grant privileges only if required while splicing
  • 22664007 - ORA-04068 error during patching
  • 22503374 - Script to grant unlimited quota privileges on 'SYSTEM' tablespace
  • 22530842/22567833 - Cannot drop table partition in run edition when patch edition exists

Related Articles

Categories: APPS Blogs

Playing around with JSON using the APEX_JSON package

Tim Hall - Fri, 2016-04-08 13:26

We publish a number of XML web services from the database using the PL/SQL web toolkit, as described here. In more recent times we’ve had a number of requirements for JSON web services, so we did what most people probably do and Googled for “json pl/sql” and got a link to PL/JSON.

I know about the support for JSON in 12c, but we are not on 12c for these projects and that’s more about consuming JSON, rather than publishing it.

People seemed reasonably happy with PL/JSON, so I thought no more about it. At the weekend, kind-of by accident, I came across the APEX_JSON package that comes as part of APEX 5 and thought, how could I have missed that?

This is not a slight against PL/JSON, but given the choice of using something built and supported by Oracle, that is already in the database (we have APEX 5 in most databases already) or loading something else, I tend to pick the Oracle method. Since then I’ve been having a play with APEX_JSON and I quite like it. Here’s what I wrote while I was playing with it.

If you have done anything with XML in PL/SQL, you should find it pretty simple.

I’m guessing this post will result in a few people saying, “What about ORDS?” Yes I know. Because of history we are still mostly using mod_plsql and OHS, but ORDS is on the horizon. Even so, we will probably continue to use APEX_JSON to do the donkey-work, and just use ORDS to front it.

Cheers

Tim…


Oracle Cloud – Glassfish Heap Memory Issues

John Scott - Fri, 2016-04-08 10:35

I recently encountered some out-of-memory issues with our Oracle Cloud Glassfish server.

This manifested itself in Glassfish becoming unresponsive and eventually crashing. Digging into the logfile, we found entries like this:

server.log_2016-04-08T18-32-27:[#|2016-04-04T18:21:56.472+0000|SEVERE|oracle-glassfish3.1.2|null|_ThreadID=74;_ThreadName=Thread-2;|Java heap space
server.log_2016-04-08T18-32-27:java.lang.OutOfMemoryError: Java heap space

After a lot of Googling, I found that you can display the current Java settings being used by Glassfish by running the list-jvm-options command, like this:

[oracle@prod bin]$ ./asadmin list-jvm-options
Enter admin user name> admin
Enter admin password for user "admin">
-XX:MaxPermSize=192m
-XX:PermSize=64m
-client
-Djava.awt.headless=true
-Djavax.management.builder.initial=com.sun.enterprise.v3.admin.AppServerMBeanServerBuilder
-XX:+UnlockDiagnosticVMOptions
-Djava.endorsed.dirs=${com.sun.aas.installRoot}/modules/endorsed${path.separator}${com.sun.aas.installRoot}/lib/endorsed
-Djava.security.policy=${com.sun.aas.instanceRoot}/config/server.policy
-Djava.security.auth.login.config=${com.sun.aas.instanceRoot}/config/login.conf
-Dcom.sun.enterprise.security.httpsOutboundKeyAlias=s1as
-Djavax.net.ssl.keyStore=${com.sun.aas.instanceRoot}/config/keystore.jks
-Djavax.net.ssl.trustStore=${com.sun.aas.instanceRoot}/config/cacerts.jks
-Djava.ext.dirs=${com.sun.aas.javaRoot}/lib/ext${path.separator}${com.sun.aas.javaRoot}/jre/lib/ext${path.separator}${com.sun.aas.instanceRoot}/lib/ext
-Djdbc.drivers=org.apache.derby.jdbc.ClientDriver
-DANTLR_USE_DIRECT_CLASS_LOADING=true
-Dcom.sun.enterprise.config.config_environment_factory_class=com.sun.enterprise.config.serverbeans.AppserverConfigEnvironmentFactory
-Dosgi.shell.telnet.port=6666
-Dosgi.shell.telnet.maxconn=1
-Dosgi.shell.telnet.ip=127.0.0.1
-Dgosh.args=--nointeractive
-Dfelix.fileinstall.dir=${com.sun.aas.installRoot}/modules/autostart/
-Dfelix.fileinstall.poll=5000
-Dfelix.fileinstall.log.level=2
-Dfelix.fileinstall.bundles.new.start=true
-Dfelix.fileinstall.bundles.startTransient=true
-Dfelix.fileinstall.disableConfigSave=false
-XX:NewRatio=2
-Xmx128m
Command list-jvm-options executed successfully.

You can see a lot of info here. The heap memory parameter is the -Xmx one, which we can adjust by deleting the current setting:

./asadmin delete-jvm-options --target server-config -- '-Xmx128m'

and then assigning a new value

./asadmin create-jvm-options --target server-config -- '-Xmx256m'

 

Then we restarted the Glassfish server and haven’t seen the issue occur since.
It’s important not to just blindly choose a value here; you need to understand why you’re running out of heap memory, not just increase it for the sake of it (but that’s a post for another day).
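As a starting point for that investigation, you can watch the heap under load with the standard JDK tools (a sketch; the pgrep pattern is an assumption, so verify the PID with ps first):

GF_PID=$(pgrep -f glassfish | head -1)   # find the Glassfish JVM process
jstat -gcutil $GF_PID 5000               # print GC/heap utilisation every 5 seconds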

 

