
Feed aggregator

Happy Birthday to oracle-base.com (sort-of)

Tim Hall - 2 hours 26 min ago

Today is another anniversary, but this time it’s the website, which is 15 years old.

OK. This is a bit of a cheat because:

  • The website originally had a different name, so you could say the website with its current name is 13 months younger, but it’s the same site, so whatever.
  • I don’t actually know the exact day the first page went online, but I do know the date I bought the original domain name (before the rename to oracle-base.com), so I know the first page went up around this time.

Anyway, July 3rd is from now on the official birthday of the website. Makes it easy to remember, because it’s the day after my birthday.

Cheers

Tim…

PS. For those that are interested, the blog was 10 years old last month. I do know the exact date for that because the posts are dated and you can read the first post. :)


Seven Weeks with the Fitbit Surge

Oracle AppsLab - Thu, 2015-07-02 14:35

As my wearables odyssey continues, it’s time to document my time with the Fitbit Surge.

I ended up wearing the Surge a lot longer than I’d worn the Nike+ Fuelband, the Basis Peak and the Jawbone UP24 because June was a busy month, and I didn’t have time to switch.

For comparison’s sake, I suggest you read Ultan’s (@ultan) review of the Surge. He’s a hardcore fitness dude, and I’m much more a have-to-don’t-like-to exercise guy, which makes for a nice companion read.

As usual, this isn’t a review, more loosely-coupled observations. You can find lots of credible reviews of the Surge, billed as a “Super Watch” by the recently IPO’ed Fitbit, e.g. this one from Engadget.

Here we go.

The watch

As with most of the other wearables I’ve used, the Surge must be set up from software installed on a computer. It also requires the use of a weird USB doohickey for pairing, after which the watch firmware updates.


I get why they provide ways for people to sync to software installed on computers, but I wonder how many users really eschew the smartphone app or don’t have a smartphone.

Anyway, although Fitbit Connect, the software you have to install, says the firmware update process will take five to ten minutes, my update took much longer, more like 30 minutes.

Physically, the Surge is chunky. Its shape reminds me of a door-stop, like a wedge. While this looks weird, it’s really a nice design idea, essentially tilting the display toward the user, making it easier to read at a glance.


I found wearing the device to be comfortable, although the rubber of the band did make my skin clammy after a while, see the Epilogue for more on that.

The display is easy to read in any light, and the backlight comes on automatically in low light conditions.

The Surge carries a water-resistance rating of 5 ATM, which amounts to a depth of 50 meters, but for some reason, Fitbit advises against submerging it. Weird, right?

Not one to follow directions, I took the Surge in a pool with no ill effects. However, once or twice during my post-workout steam, the display did show some condensation under the glass. So, who knows?

The device interface is a combination of touches and three physical buttons, all easy to learn through quick experimentation.

The watch screens show the day’s activity in steps, calories burned, miles, and floors climbed. It also tracks heart rate via an optical heart rate sensor.

In addition, you can start specific activity tracking from the device including outdoor running with GPS tracking, which Ultan used quite a lot, and from what I’ve read, is the Surge’s money feature. I only run indoors on a treadmill (lame), so I didn’t test this feature.

The Surge does have a treadmill activity, but I found its mileage calculation varied from the treadmill’s, e.g. 3.30 miles on the treadmill equated to 2.54 on the Surge. Not a big deal to me, especially given how difficult it would be for a device to track mileage accurately through sensors alone.

Speaking of, the Surge packs a nice array of sensors. In addition to the aforementioned GPS and optical heart rate sensor, it also sports a 3-axis accelerometer and a 3-axis gyroscope.

The Surge tracks sleep automatically, although I’m not sure how. Seemed to be magically accurate though.

Fitbit advertises the Surge’s battery life as seven days, but in practice, I only got about four or five days per charge. Luckily, Fitbit will inform you when the battery gets low via app notifications and email, both of which are nice.

Happily, the battery charges very quickly, albeit via a proprietary charging cord. Lose that cord, and you’re toast. I misplaced mine, which effectively ended this experiment.

The app and data

As Ultan mentioned in his post, the Fitbit Aria wifi scale makes using any Fitbit device better. I’ve had an Aria for a few years, but never really used it. So, this was a great chance to try it with the Surge.

Fitbit provides both mobile and web apps to track data.

I mostly used the mobile app which shows a daily view of activity, weight and food consumption, if you choose to track that manually. Tapping any item shows you details, and you can swipe between days.


It’s all very well-done, easy to use, and they do a nice job of packing a lot of information into a small screen.

From within the app, you can set up phone notifications for texts and calls, a feature I really liked from wearing the Basis Peak.

Noel, send me a text message.

Unfortunately, I only got notified about half the time, not ideal, and I’m not the only one with this issue. Danny Bryant (@dbcapoeira) and I chatted about our Surge experiences at Kscope, and he mentioned this as an issue for him as well.

Fitbit offers Challenges to encourage social fitness competition, which seems nice, but not for me. There are badges for milestones too, like walking 500 miles, climbing 500 floors, etc. Nice.

Sleep tracking on the mobile app is pretty basic, showing number of times awake and number of times restless.

Fitbit’s web app is a dashboard showing the same information in a larger format. They hide some key insights in the Log section, e.g. the sleep data in there is more detailed than what the dashboard shows.

Fitbit Dashboard

Fitbit Log: Track My Sleep

Fitbit Log: Track My Activities

I have to say I prefer the Jawbone approach to viewing data; they only have a mobile app which dictates the entire experience and keeps it focused.

Fitbit sends weekly summary emails too, so yet another way to view your data. I like the emails, especially the fun data point about my average time to fall asleep for the week, usually zero minutes. I guess this particular week I was well-rested.


I did have some time zone issues when I went to Florida. The watch didn’t update automatically, and I did some digging and found a help article about traveling with your Fitbit with this tip:

Loss of data can occur if the “Set Automatically” timezone option in the app’s “Settings” is on. Toggle the “Set Automatically” timezone option to off.

So for the entire week in Hollywood, my watch was three hours slow, not a good look for a watch.

And finally, data export out of Fitbit’s ecosystem is available, at a cost. Export is a premium feature: “Your data belongs to you!” for $50 a year. Some consolation though: they offer a free trial for a week, so I grabbed my data for free, at least this time.

Overall, the Surge compares favorably to the Basis Peak, but unlike the Jawbone UP24, I didn’t feel sad when the experiment ended.

Epilogue

Perhaps you’ll recall that Fitbit’s newer devices have been causing rashes for some users. I’m one of those users. I’m reporting this because it happened, not as an indictment of the device.

I wore the Surge for seven weeks, pretty much all the time. When I took it off to end the experiment, my wife noticed a nasty red spot on the outer side of my arm. I hadn’t seen it, and I probably would never have noticed.


It doesn’t itch or anything, just looks gnarly. After two days, it seems to be resolving, no harm, no foul.

The rash doesn’t really affect how I view the device, although if I wear the Surge again, I’ll remember to give my skin a break periodically.

One unexpected side effect of not wearing a device as the rash clears up is that unquantified days feel weird. I wonder why I do things if they’re not being quantified. Being healthy for its own sake isn’t enough. I need that extra dopamine from achieving something quantifiable.

Strange, right?

Find the comments.

Oracle Priority Support Infogram for 02-JUL-2015

Oracle Infogram - Thu, 2015-07-02 13:53

Oracle Support
Two good items from the My Oracle Support blog:
Three Scenarios for Using Support Identifier Groups
Stay Up to Date with Key My Oracle Support Resources of Your Choice using Hot Topics.
A Guide to Providing a Good Problem Description When Raising Service Requests, from the Communications Industry Support Blog.
MySQL
MySQL Enterprise Monitor 3.0.22 has been released, from the MySQL Enterprise Tools Blog.
Big Data
Identifying Influencers with the Built-in Page Rank Analytics in Oracle Big Data Spatial and Graph, from Adding Location and Graph Analysis to Big Data.
WebLogic
Additional new material WebLogic Community, from WebLogic Partner Community EMEA.
Improve SSL Support for Your WebLogic Domains, from Proactive Support - Identity Management.
Fusion Middleware
Calling Fusion SOAP Services from Ruby, from Angelo Santagata's Blog.
JDBC
Using Universal Connection Pooling (UCP) with JBoss AS, from JDBC Middleware.
OBIEE
OBIEE SampleApp V506 is Available, from Business Analytics - Proactive Support.
Ops Center
Upgrading to 12.3, from the Ops Center blog.
Identity Management
Configuring OAM SSO for ATG BCC and Endeca XM, from Proactive Support - Identity Management.
And from the same blog: Monitoring OAM Environment
Health Sciences
Health Sciences Partner Support Best Practices & Resources, from Chris Warticki's Support Blog.
Primavera
New Primavera P6 Release 15.1 Patch Set 2, from the Primavera Support Blog.
EBS
From the Oracle E-Business Suite Support blog:
Self-Evaluation for High Volume Order Import (HVOP)
General Ledger Balances Corruption - Causes, Suggestions, Solutions
Revaluation in Fixed Assets
iSetup? Use It Even Inside The Same Instance!
Using Translation and Plan to Upgrade? Don't Miss This!

Continue Your Work In Process!

Query existing HBase tables with SQL using Apache Phoenix

Kubilay Çilkara - Thu, 2015-07-02 13:25
Spending a bit more time with Apache Phoenix and reading my previous post again, I realised that you can use it to query existing HBase tables. That is, NOT tables created using Apache Phoenix, but native tables in HBase - the columnar NoSQL database in Hadoop.

I think this is cool as it gives you the ability to use SQL on an HBase table.

To test this, let's say you log in to HBase and create an HBase table like this:

> create 'table2', {NAME=>'cf1', VERSIONS => 5}

table2 is a simple table in HBase with one column family, cf1. Now let's put some data into this HBase table.

> put 'table2', 'row1', 'cf1:column1', 'Hello SQL!'

then maybe add another row

> put 'table2', 'row4', 'cf1:column1', 'London'

Now, in Phoenix all you have to do is create a database view for this table and query it with SQL. The database view will be read-only. How cool is that: you don't even need to physically create the table, move the data to Phoenix, or convert it; a database view is sufficient, and via Phoenix you can query the HBase table with SQL.

In Phoenix you create the view for table2 using the same name. As you can see below, the DDL used to create the view is case-sensitive, and if you created your HBase table name in lower case you will have to put the name between double quotes.

So log in to Phoenix and create the "table2" view like this:

> create view "table2" ( pk VARCHAR PRIMARY KEY, "cf1"."column1" VARCHAR );

And here is how you then query it in Phoenix:


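(The query itself appeared as a screenshot in the original post; here is a minimal sketch of an equivalent query and the output you would expect, given the two rows put into the table above:)

> select * from "table2";

PK     column1
-----  ----------
row1   Hello SQL!
row4   London

Note that because the column was double-quoted in lower case in the view DDL, you also have to double-quote it when you reference it explicitly, e.g. select "column1" from "table2".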
Tremendous potential here: imagine all those existing HBase tables which you can now query with SQL. Better still, you can point your Business Intelligence tools, reporting tools, and any other tools which work with SQL at HBase and query it as if it were just another SQL database.

A solution worth investigating further? It definitely got me blogging in the evenings again.

To find out more about Apache Phoenix visit their project page https://phoenix.apache.org/



Categories: DBA Blogs

OSB & MTOM: When to use Include Binary Data by Reference or Value

Darwin IT - Thu, 2015-07-02 09:23
As you can see from my blog posts of the past few days, I've been busy implementing a service using MTOM in OSB. When enabling XOP/MTOM Support you'll have to choose between:
  • Include Binary Data by Reference
  • Include Binary Data by Value

I used the first option because I want to process the content in another service on another WLS domain. However, in my first service, which catches the initial request, I want to do an XSD validation. And although the rest of the message is valid, the Validate activity raises an exception with the message: 'Element not allowed: binary-content@http://www.bea.com/wli/sb/context in element Bestandsdata....'.

Looking into this problem I came across this section in the doc, which states that you should use 'Include Binary Data by Value' when you want to:
  • transfer your data to a service that does not support MTOM
  • validate your message
Now, what does the other option do? OSB parses the root of the inbound MIME message in search of xop:Include tags. When found, it Base64-encodes the binary content and replaces the tags with the Base64 string.
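(For illustration, a hedged sketch of that substitution; Bestandsdata is the element from the validation message above, the href value is made up, and xop:Include comes from the XOP spec:)

<Bestandsdata>
  <xop:Include xmlns:xop="http://www.w3.org/2004/08/xop/include" href="cid:binarydata@example.org"/>
</Bestandsdata>

after 'Include Binary Data by Value' effectively becomes:

<Bestandsdata>...Base64-encoded content of the attachment...</Bestandsdata>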

Now, although I want exactly that in the end, I don't want that at this point of the service. I want to transform my message, without the Base64-strings. And I want to encode the data only on my other domain.

So I just want to ignore validation faults whose messages start with 'Element not allowed: binary-content@...'. To do so I came up with the following expression:
fn:count($fault/ctx:details/con:ValidationFailureDetail/con:message[not(fn:starts-with(text(),'Element not allowed: binary-content@http://www.bea.com/wli/sb/context in element Bestandsdata'))])>0 
Add an If-Then-Else activity to your Error Handler Stage with this expression. Add the following Namespace:
  • Prefix: con
  • Namespace:  http://www.bea.com/wli/sb/stages/transform/config

If the expression evaluates to true, then you have in fact an invalid XML-message. In the else branch you can add a Resume to ignore the exception.

This expression might come in handy in other situations as well.

D2L Again Misusing Academic Data For Brightspace Marketing Claims

Michael Feldstein - Thu, 2015-07-02 05:56

By Phil HillMore Posts (333)

At this point I’d say that we have established a pattern of behavior.

Michael and I have been quite critical of D2L and their pattern of marketing behavior that is misleading and harmful to the ed tech community. Michael put it best:

I can’t remember the last time I read one of D2L’s announcements without rolling my eyes. I used to have respect for the company, but now I have to make a conscious effort not to dismiss any of their pronouncements out-of-hand. Not because I think it’s impossible that they might be doing good work, but because they force me to dive into a mountain of horseshit in the hopes of finding a nugget of gold at the bottom. Every. Single. Time. I’m not sure how much of the problem is that they have decided that they need to be disingenuous because they are under threat from Instructure or under pressure from investors and how much of it is that they are genuinely deluding themselves. Sadly, there have been some signs that at least part of the problem is the latter situation, which is a lot harder to fix. But there is also a fundamental dishonesty in the way that these statistics have been presented.

Well, here’s the latest. John Baker put out a blog called This Isn’t Your Dad’s Distance Learning Program with this theme:

But rather than talking about products, I think it’s important to talk about principles. I believe that if we’re going to use education technology to close the attainment gap, it has to deliver results. That — as pragmatic as it is — is the main guiding principle.

The link about “deliver results” leads to this page (excerpted as it existed prior to June 30th, for reasons that will become apparent).


Why Brightspace? Results.

So the stage is set – use ed tech to deliver results, and Brightspace (D2L’s learning platform, or LMS) delivers results. Now we come to the proof, including these two examples.


According to California State University-Long Beach, retention has improved 6% year-over-year since they adopted Brightspace.[snip]

University of Wisconsin-Milwaukee reported an increase in the number of students getting A’s and B’s in Brightspace-powered courses by over 170%

Great results, no? Let’s check the sources. Ah . . . clever marketing folks – no supporting data or even hyperlinks to learn more. Let’s just accept their claims and move along.

. . .

OK, that was a joke.

CSU Long Beach

I contacted CSU Long Beach to learn more, but I could find no one who knew where this data came from or even that D2L was making this claim. I shared the links and context, and they went off to explore. Today I got a message saying that the issue has been resolved, but that CSU Long Beach would make no public statements on the matter. Fair enough – the observations below are my own.

If you look at that Results page now, the CSU Long Beach claim is no longer there – down the memory hole[1] with no explanation, replaced by a new claim about Mohawk College.


While CSU Long Beach would not comment further on the situation, there are only two plausible explanations for the issue being resolved by D2L taking down the data. Either D2L was using legitimate data that they were not authorized to use (best case scenario) or D2L was using data that doesn’t really exist. I could speculate further, but the onus should be on D2L since they are the ones who made the claim.

UW Milwaukee

I also contacted UW Milwaukee to learn more, and I believe the data in question is from the U-Pace program which has been fully documented.[2][3]

The U-Pace instructional approach combines self-paced, mastery-based learning with instructor-initiated Amplified Assistance in an online environment.

The control group was the traditionally taught (read: large lecture classes) Intro to Psychology course.

From the EDUCAUSE Quarterly article on U-Pace, for disadvantaged students the number of A’s and B’s increased 163%. This is the closest data I can find to back up D2L’s claim of 170% increase.


There are three immediate problems here (ignoring the fact that I can’t find improvements of more than 170% – I’ll take 163%).

  1. First, the data claim is missing the context of “for underprepared students” who exhibited much higher gains than prepared students. That’s a great result for the U-Pace program, but it is also important context to include.
  2. The program is an instructional change, moving from large lecture classes to self-paced, mastery-learning approach. That is the intervention, not the use of the LMS. In fact, D2L was the LMS used in both the control group and the U-Pace treatment group.
  3. The program goes out of its way to call out the minimal technology needed to adopt the approach, and they even list Blackboard, Desire2Learn, and Moodle as examples of LMSs that work, given the following requirements:

U-Pace LMS requirements

This is an instructional approach that claims to be LMS neutral with D2L’s Brightspace used in both the control group and treatment group, yet D2L positions the results as proof that Brightspace gets results! It’s wonderful that Brightspace LMS worked during the test and did not get in the way, but that is a far cry from Brightspace “delivering results”.

The Pattern

We have to now add these two cases to the Lone Star College and LeaP examples. In all cases, there is a pattern.

  1. D2L makes a marketing claim implying their LMS Brightspace delivers results, referring to academic outcomes data without supporting data or references.
  2. I contact school or research group to learn more.
  3. Data is either misleading (treatment group is not LMS usage but instead instructional approach, adaptive learning technology, or student support software) or just plain wrong (with data taken down).
  4. In all cases, the results could have been presented honestly, showing the appropriate context, links for further reading, and explanation of the LMS role. But they were not presented honestly.
  5. e-Literate blog post almost writes itself.
  6. D2L moves on to make their next claim, with no explanations.

I understand that other ed tech vendors make marketing claims that cannot always be tied to reality, but these examples cross a line. They misuse and misrepresent academic outcomes data – whether public research-based or internal – and essentially take credit for their technology “delivering results”.

This is the misuse of someone else’s data for corporate gain. Institutional data. Student data. That is far different than using overly-positive descriptions of your own data or subjective observations. That is wrong.

The Offer

For D2L company officials, I have an offer.

  1. If you have answers or even corrections about these issues, please let us know through your own blog post or comments to this blog.
  2. If you find any mistakes in my analysis, I will write a correction post.
  3. We are happy to publish any reply you make here on e-Literate.
Notes:

  1. Their web page does not allow archiving with the Wayback Machine, but I captured screenshots in anticipation of this move.
  2. While I assume this claim derives from U-Pace, I am not sure. It is the closest example of real data that I could find, thanks to a helpful tip from UW-M staff. I’ll give D2L the benefit of the doubt despite their lack of reference.
  3. And really, D2L marketing staff should learn how to link to external sources. It’s good Internet practice.

The post D2L Again Misusing Academic Data For Brightspace Marketing Claims appeared first on e-Literate.

Set environment properties in SoapUI (freeware)

Darwin IT - Thu, 2015-07-02 04:26
Ever used SoapUI to test services on multiple environments? Then you probably ran into the chore of constantly changing the endpoints to the hosts of the particular environment: development, test, acceptance, production (although I expect you wouldn't use SoapUI against a prod-env). This is not that hard if you have only one service endpoint in the project. But what if you want to test against multiple services, or have to call a service on one system with the result of another during a test case? You can even have test cases that mock services called by your (BPEL/BPM) process and call back the process to have it proceed to a further stage. And then you can end up having multiple endpoints per environment.

You can set multiple endpoints on a request and toggle between them. But you'll have to do that for every request.

SoapUI, however, supports the use of properties in endpoints, so you can set up different host properties and URI properties on the project.
In this case I have one property for the service URI, the part of the URL after the host:port, and several ...Host properties, one for each separate environment, plus one for the actual host.

As said, you can have a property-based endpoint like this; I have one single endpoint defined based on:
http://${#Project#CSServiceHost}/${#Project#CSServiceURI}
Here you see that the endpoint is based on two properties: ${#Project#CSServiceHost} and ${#Project#CSServiceURI}. In those properties, '#Project#' refers to the level in SoapUI at which the properties are defined. You can also refer to #TestSuite#, #TestCase#, etc.
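(To make the expansion concrete, a quick example with hypothetical property values:)

CSServiceHost = dev-server:7101
CSServiceURI  = cs/CSService

so the endpoint http://${#Project#CSServiceHost}/${#Project#CSServiceURI} resolves to http://dev-server:7101/cs/CSService.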

Now you could manually copy and paste the host of the particular environment into the actual host property, but that is error-prone when dealing with multiple endpoints.
What I did was create a separate TestSuite called 'ProjectSettings'. In there I created a test case per environment: 'SetLocalHosts', 'SetDevHosts', etc. In each I created a PropertyTransfer that transfers the particular env-host property to the actual host property:

You can create a property transfer for each applicable host in your environment. You can enhance the test case with Groovy scripts to determine the properties at run-time. You could even call a generic TestCase from there.

Running the particular testcase before your tests will setup your SoapUI project for the target environment in one go.

Maybe I'll enhance this further in my projects, but for now I find this neat. However, it would have been nice if SoapUI supported different environments, with hostnames/URLs applicable per environment, and let you select a target environment at project level using a poplist.
Also it would be nice to have custom scripts (like macros) at project level that could be coupled to a button in the button bar, instead of how I do it above.

Streamline Oracle Development with Cloud Services

Angelo Santagata - Thu, 2015-07-02 04:24
Streamline Java Development with Cloud Services
On-Demand Webinar Replay
Learn to deliver Java applications to market faster. Reduce hardware and software costs for new development and testing environments. Improve DevOps efficiency. Build, test and run enterprise-grade applications in the Cloud and on premises.

Listen to this webinar replay with development expert James Governor, co-founder of RedMonk, and Daniel Pahng, President and CEO of mFrontiers, LLC, an ISV with hands-on experience developing enterprise mobility and Internet of Things (IoT) solutions, as they present this webcast on developing applications in the cloud. Listen today!

Table Recovery in #Oracle 12c

The Oracle Instructor - Thu, 2015-07-02 03:18

You can now restore single tables from backup! It is a simple command although it leads to much effort by RMAN. See it as an enhancement over a ‘normal’ Point In Time Recovery:

Point In Time Recovery

After a full restore from a sufficiently old backup, archived logs are applied forward toward the present until just before the logical error. Then a new incarnation comes up (with RESETLOGS) and the whole database is as it was at that time. But what if it is only a dropped table that needs to be recovered? Enter the 12c New Feature:

Table Recovery

Above is what RMAN does upon Table Recovery. The restore is done to the auxiliary destination, while the database keeps running as it is. The new incarnation exists only temporarily, just long enough to export the dropped table from it. Afterwards, it is removed. RMAN then imports the table back into the still-running database – unless you say otherwise with the NOTABLEIMPORT clause. So it is a huge effort for the system to go through, in spite of the simple RMAN command:

 

SQL> select count(*) from sales;

  COUNT(*)
----------
  10000000

SQL> select sysdate from dual;

SYSDATE
-------------------
2015-07-02 09:33:37

SQL> drop table sales purge;

Table dropped.

Oops – that was a mistake! And I can’t simply say flashback table sales to before drop because of the purge. RMAN to the rescue!
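(A quick sketch of why flashback is off the table: the PURGE clause bypasses the recycle bin, so the attempt would fail with something like this:)

SQL> flashback table sales to before drop;
flashback table sales to before drop
*
ERROR at line 1:
ORA-38305: object not in RECYCLE BIN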

[oracle@uhesse ~]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Thu Jul 2 09:34:35 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PRIMA (DBID=2113606181)

RMAN> list backup of database;

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
1       Full    142.13M    DISK        00:01:45     2015-07-01 17:50:32
        BP Key: 1   Status: AVAILABLE  Compressed: YES  Tag: TAG20150701T174847
        Piece Name: /u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp
  List of Datafiles in backup set 1
  File LV Type Ckp SCN    Ckp Time            Name
  ---- -- ---- ---------- ------------------- ----
  1       Full 532842     2015-07-01 17:48:47 /u01/app/oracle/oradata/prima/system01.dbf
  2       Full 532842     2015-07-01 17:48:47 /u01/app/oracle/oradata/prima/sysaux01.dbf
  3       Full 532842     2015-07-01 17:48:47 /u01/app/oracle/oradata/prima/undotbs01.dbf
  4       Full 532842     2015-07-01 17:48:47 /u01/app/oracle/oradata/prima/users01.dbf

RMAN> host 'mkdir /tmp/auxi';

host command complete

RMAN> recover table adam.sales until time '2015-07-02 09:33:00' auxiliary destination '/tmp/auxi';

Starting recover at 2015-07-02 09:35:54
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified Point-in-Time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1

Creating automatic instance, with SID='tDtf'

initialization parameters used for automatic instance:
db_name=PRIMA
db_unique_name=tDtf_pitr_PRIMA
compatible=12.1.0.2
db_block_size=8192
db_files=200
diagnostic_dest=/u01/app/oracle
_system_trig_enabled=FALSE
sga_target=1512M
processes=200
db_create_file_dest=/tmp/auxi
log_archive_dest_1='location=/tmp/auxi'
#No auxiliary parameter file used


starting up automatic instance PRIMA

Oracle instance started

Total System Global Area    1593835520 bytes

Fixed Size                     2924880 bytes
Variable Size                402656944 bytes
Database Buffers            1174405120 bytes
Redo Buffers                  13848576 bytes
Automatic instance created

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# restore the controlfile
restore clone controlfile;

# mount the controlfile
sql clone 'alter database mount clone database';

# archive current online log
sql 'alter system archive log current';
}
executing Memory Script

executing command: SET until clause

Starting restore at 2015-07-02 09:36:21
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=3 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u02/fra/PRIMA/backupset/2015_07_01/o1_mf_ncsnf_TAG20150701T174847_bs832pht_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fra/PRIMA/backupset/2015_07_01/o1_mf_ncsnf_TAG20150701T174847_bs832pht_.bkp tag=TAG20150701T174847
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/tmp/auxi/PRIMA/controlfile/o1_mf_bs9thps1_.ctl
Finished restore at 2015-07-02 09:36:23

sql statement: alter database mount clone database

sql statement: alter system archive log current

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile  1 to new;
set newname for clone datafile  3 to new;
set newname for clone datafile  2 to new;
set newname for clone tempfile  1 to new;
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  1, 3, 2;

switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /tmp/auxi/PRIMA/datafile/o1_mf_temp_%u_.tmp in control file

Starting restore at 2015-07-02 09:36:32
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /tmp/auxi/PRIMA/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /tmp/auxi/PRIMA/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00002 to /tmp/auxi/PRIMA/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp tag=TAG20150701T174847
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:36
Finished restore at 2015-07-02 09:37:08

datafile 1 switched to datafile copy
input datafile copy RECID=4 STAMP=883993028 file name=/tmp/auxi/PRIMA/datafile/o1_mf_system_bs9tj1fk_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=5 STAMP=883993028 file name=/tmp/auxi/PRIMA/datafile/o1_mf_undotbs1_bs9tj1hw_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=6 STAMP=883993028 file name=/tmp/auxi/PRIMA/datafile/o1_mf_sysaux_bs9tj1jd_.dbf

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  2 online";
# recover and open database read only
recover clone database tablespace  "SYSTEM", "UNDOTBS1", "SYSAUX";
sql clone 'alter database open read only';
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  1 online

sql statement: alter database datafile  3 online

sql statement: alter database datafile  2 online

Starting recover at 2015-07-02 09:37:09
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 13 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_13_bs836h1p_.arc
archived log for thread 1 with sequence 14 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_14_bs836lv2_.arc
archived log for thread 1 with sequence 15 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_15_bs9mog63_.arc
archived log for thread 1 with sequence 16 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_16_bs9mpsqo_.arc
archived log for thread 1 with sequence 17 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_17_bs9n281y_.arc
archived log for thread 1 with sequence 18 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_18_bs9n360t_.arc
archived log for thread 1 with sequence 19 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_19_bs9n3p5r_.arc
archived log for thread 1 with sequence 20 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_20_bs9n46od_.arc
archived log for thread 1 with sequence 21 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_21_bs9n4l4j_.arc
archived log for thread 1 with sequence 22 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_22_bs9n512c_.arc
archived log for thread 1 with sequence 23 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_23_bs9p5m15_.arc
archived log for thread 1 with sequence 24 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_24_bs9p6qn7_.arc
archived log for thread 1 with sequence 25 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_25_bs9plfkc_.arc
archived log for thread 1 with sequence 26 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_26_bs9pls8h_.arc
archived log for thread 1 with sequence 27 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_27_bs9pm0db_.arc
archived log for thread 1 with sequence 28 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_28_bs9pm70g_.arc
archived log for thread 1 with sequence 29 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_29_bs9pmk0c_.arc
archived log for thread 1 with sequence 30 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_30_bs9pmrrj_.arc
archived log for thread 1 with sequence 31 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_31_bs9sq00g_.arc
archived log for thread 1 with sequence 32 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_32_bs9sqzgd_.arc
archived log for thread 1 with sequence 33 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_33_bs9t4fq8_.arc
archived log for thread 1 with sequence 34 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_34_bs9t4vyr_.arc
archived log for thread 1 with sequence 35 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_35_bs9t593c_.arc
archived log for thread 1 with sequence 36 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_36_bs9t5htq_.arc
archived log for thread 1 with sequence 37 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_37_bs9t5q3h_.arc
archived log for thread 1 with sequence 38 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_38_bs9t5yqj_.arc
archived log for thread 1 with sequence 39 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_39_bs9tgttq_.arc
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_13_bs836h1p_.arc thread=1 sequence=13
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_14_bs836lv2_.arc thread=1 sequence=14
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_15_bs9mog63_.arc thread=1 sequence=15
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_16_bs9mpsqo_.arc thread=1 sequence=16
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_17_bs9n281y_.arc thread=1 sequence=17
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_18_bs9n360t_.arc thread=1 sequence=18
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_19_bs9n3p5r_.arc thread=1 sequence=19
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_20_bs9n46od_.arc thread=1 sequence=20
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_21_bs9n4l4j_.arc thread=1 sequence=21
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_22_bs9n512c_.arc thread=1 sequence=22
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_23_bs9p5m15_.arc thread=1 sequence=23
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_24_bs9p6qn7_.arc thread=1 sequence=24
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_25_bs9plfkc_.arc thread=1 sequence=25
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_26_bs9pls8h_.arc thread=1 sequence=26
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_27_bs9pm0db_.arc thread=1 sequence=27
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_28_bs9pm70g_.arc thread=1 sequence=28
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_29_bs9pmk0c_.arc thread=1 sequence=29
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_30_bs9pmrrj_.arc thread=1 sequence=30
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_31_bs9sq00g_.arc thread=1 sequence=31
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_32_bs9sqzgd_.arc thread=1 sequence=32
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_33_bs9t4fq8_.arc thread=1 sequence=33
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_34_bs9t4vyr_.arc thread=1 sequence=34
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_35_bs9t593c_.arc thread=1 sequence=35
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_36_bs9t5htq_.arc thread=1 sequence=36
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_37_bs9t5q3h_.arc thread=1 sequence=37
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_38_bs9t5yqj_.arc thread=1 sequence=38
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_39_bs9tgttq_.arc thread=1 sequence=39
media recovery complete, elapsed time: 00:01:00
Finished recover at 2015-07-02 09:38:11

sql statement: alter database open read only

contents of Memory Script:
{
   sql clone "create spfile from memory";
   shutdown clone immediate;
   startup clone nomount;
   sql clone "alter system set  control_files =
  ''/tmp/auxi/PRIMA/controlfile/o1_mf_bs9thps1_.ctl'' comment=
 ''RMAN set'' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
# mount database
sql clone 'alter database mount clone database';
}
executing Memory Script

sql statement: create spfile from memory

database closed
database dismounted
Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area    1593835520 bytes

Fixed Size                     2924880 bytes
Variable Size                419434160 bytes
Database Buffers            1157627904 bytes
Redo Buffers                  13848576 bytes

sql statement: alter system set  control_files =   ''/tmp/auxi/PRIMA/controlfile/o1_mf_bs9thps1_.ctl'' comment= ''RMAN set'' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area    1593835520 bytes

Fixed Size                     2924880 bytes
Variable Size                419434160 bytes
Database Buffers            1157627904 bytes
Redo Buffers                  13848576 bytes

sql statement: alter database mount clone database

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# set destinations for recovery set and auxiliary set datafiles
set newname for datafile  4 to new;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  4;

switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

Starting restore at 2015-07-02 09:39:11
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=12 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00004 to /tmp/auxi/TDTF_PITR_PRIMA/datafile/o1_mf_users_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp tag=TAG20150701T174847
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:35
Finished restore at 2015-07-02 09:39:47

datafile 4 switched to datafile copy
input datafile copy RECID=8 STAMP=883993187 file name=/tmp/auxi/TDTF_PITR_PRIMA/datafile/o1_mf_users_bs9to0k1_.dbf

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# online the datafiles restored or switched
sql clone "alter database datafile  4 online";
# recover and open resetlogs
recover clone database tablespace  "USERS", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  4 online

Starting recover at 2015-07-02 09:39:47
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 13 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_13_bs836h1p_.arc
archived log for thread 1 with sequence 14 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_14_bs836lv2_.arc
archived log for thread 1 with sequence 15 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_15_bs9mog63_.arc
archived log for thread 1 with sequence 16 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_16_bs9mpsqo_.arc
archived log for thread 1 with sequence 17 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_17_bs9n281y_.arc
archived log for thread 1 with sequence 18 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_18_bs9n360t_.arc
archived log for thread 1 with sequence 19 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_19_bs9n3p5r_.arc
archived log for thread 1 with sequence 20 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_20_bs9n46od_.arc
archived log for thread 1 with sequence 21 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_21_bs9n4l4j_.arc
archived log for thread 1 with sequence 22 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_22_bs9n512c_.arc
archived log for thread 1 with sequence 23 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_23_bs9p5m15_.arc
archived log for thread 1 with sequence 24 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_24_bs9p6qn7_.arc
archived log for thread 1 with sequence 25 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_25_bs9plfkc_.arc
archived log for thread 1 with sequence 26 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_26_bs9pls8h_.arc
archived log for thread 1 with sequence 27 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_27_bs9pm0db_.arc
archived log for thread 1 with sequence 28 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_28_bs9pm70g_.arc
archived log for thread 1 with sequence 29 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_29_bs9pmk0c_.arc
archived log for thread 1 with sequence 30 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_30_bs9pmrrj_.arc
archived log for thread 1 with sequence 31 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_31_bs9sq00g_.arc
archived log for thread 1 with sequence 32 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_32_bs9sqzgd_.arc
archived log for thread 1 with sequence 33 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_33_bs9t4fq8_.arc
archived log for thread 1 with sequence 34 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_34_bs9t4vyr_.arc
archived log for thread 1 with sequence 35 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_35_bs9t593c_.arc
archived log for thread 1 with sequence 36 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_36_bs9t5htq_.arc
archived log for thread 1 with sequence 37 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_37_bs9t5q3h_.arc
archived log for thread 1 with sequence 38 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_38_bs9t5yqj_.arc
archived log for thread 1 with sequence 39 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_39_bs9tgttq_.arc
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_13_bs836h1p_.arc thread=1 sequence=13
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_14_bs836lv2_.arc thread=1 sequence=14
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_15_bs9mog63_.arc thread=1 sequence=15
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_16_bs9mpsqo_.arc thread=1 sequence=16
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_17_bs9n281y_.arc thread=1 sequence=17
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_18_bs9n360t_.arc thread=1 sequence=18
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_19_bs9n3p5r_.arc thread=1 sequence=19
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_20_bs9n46od_.arc thread=1 sequence=20
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_21_bs9n4l4j_.arc thread=1 sequence=21
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_22_bs9n512c_.arc thread=1 sequence=22
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_23_bs9p5m15_.arc thread=1 sequence=23
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_24_bs9p6qn7_.arc thread=1 sequence=24
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_25_bs9plfkc_.arc thread=1 sequence=25
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_26_bs9pls8h_.arc thread=1 sequence=26
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_27_bs9pm0db_.arc thread=1 sequence=27
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_28_bs9pm70g_.arc thread=1 sequence=28
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_29_bs9pmk0c_.arc thread=1 sequence=29
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_30_bs9pmrrj_.arc thread=1 sequence=30
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_31_bs9sq00g_.arc thread=1 sequence=31
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_32_bs9sqzgd_.arc thread=1 sequence=32
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_33_bs9t4fq8_.arc thread=1 sequence=33
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_34_bs9t4vyr_.arc thread=1 sequence=34
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_35_bs9t593c_.arc thread=1 sequence=35
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_36_bs9t5htq_.arc thread=1 sequence=36
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_37_bs9t5q3h_.arc thread=1 sequence=37
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_38_bs9t5yqj_.arc thread=1 sequence=38
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_39_bs9tgttq_.arc thread=1 sequence=39
media recovery complete, elapsed time: 00:01:15
Finished recover at 2015-07-02 09:41:03

database opened

contents of Memory Script:
{
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/tmp/auxi''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/tmp/auxi''";
}
executing Memory Script

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/tmp/auxi''

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/tmp/auxi''

Performing export of tables...
   EXPDP> Starting "SYS"."TSPITR_EXP_tDtf_lwFD":
   EXPDP> Estimate in progress using BLOCKS method...
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   EXPDP> Total estimation using BLOCKS method: 600 MB
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   EXPDP> . . exported "ADAM"."SALES"                              510.9 MB 10000000 rows
   EXPDP> Master table "SYS"."TSPITR_EXP_tDtf_lwFD" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TSPITR_EXP_tDtf_lwFD is:
   EXPDP>   /tmp/auxi/tspitr_tDtf_59906.dmp
   EXPDP> Job "SYS"."TSPITR_EXP_tDtf_lwFD" successfully completed at Thu Jul 2 09:42:53 2015 elapsed 0 00:01:06
Export completed


contents of Memory Script:
{
# shutdown clone before import
shutdown clone abort
}
executing Memory Script

Oracle instance shut down

Performing import of tables...
   IMPDP> Master table "SYS"."TSPITR_IMP_tDtf_uink" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_tDtf_uink":
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   IMPDP> . . imported "ADAM"."SALES"                              510.9 MB 10000000 rows
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   IMPDP> Job "SYS"."TSPITR_IMP_tDtf_uink" successfully completed at Thu Jul 2 09:54:13 2015 elapsed 0 00:11:12
Import completed


Removing automatic instance
Automatic instance removed
auxiliary instance file /tmp/auxi/PRIMA/datafile/o1_mf_temp_bs9tm7pz_.tmp deleted
auxiliary instance file /tmp/auxi/TDTF_PITR_PRIMA/onlinelog/o1_mf_2_bs9trods_.log deleted
auxiliary instance file /tmp/auxi/TDTF_PITR_PRIMA/onlinelog/o1_mf_1_bs9trjw6_.log deleted
auxiliary instance file /tmp/auxi/TDTF_PITR_PRIMA/datafile/o1_mf_users_bs9to0k1_.dbf deleted
auxiliary instance file /tmp/auxi/PRIMA/datafile/o1_mf_sysaux_bs9tj1jd_.dbf deleted
auxiliary instance file /tmp/auxi/PRIMA/datafile/o1_mf_undotbs1_bs9tj1hw_.dbf deleted
auxiliary instance file /tmp/auxi/PRIMA/datafile/o1_mf_system_bs9tj1fk_.dbf deleted
auxiliary instance file /tmp/auxi/PRIMA/controlfile/o1_mf_bs9thps1_.ctl deleted
auxiliary instance file tspitr_tDtf_59906.dmp deleted
Finished recover at 2015-07-02 09:54:16

See how much work was done by RMAN here? But now, life is good again:

SQL> select count(*) from adam.sales;

  COUNT(*)
----------
  10000000

You say that you could have done that yourself even before 12c? Yes, you’re right: It’s not magic, it’s just more comfortable now ;-)
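As a closing aside, two variants of the command are worth knowing. Treat these as sketches rather than tested commands: NOTABLEIMPORT (mentioned above) skips the final import, leaving just the Data Pump dump file in the auxiliary destination; REMAP TABLE imports the recovered table under a new name (sales_recovered here is just an example name).

RMAN> recover table adam.sales until time '2015-07-02 09:33:00'
      auxiliary destination '/tmp/auxi' notableimport;

RMAN> recover table adam.sales until time '2015-07-02 09:33:00'
      auxiliary destination '/tmp/auxi' remap table adam.sales:sales_recovered;

The REMAP TABLE form is handy when you want to compare the recovered version side by side with what is currently in the database.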


Tagged: Backup & Recovery, PracticalGuide, RMAN
Categories: DBA Blogs

Introducing Formspider 1.9

Gerger Consulting - Thu, 2015-07-02 01:35
For the past year, we've been working hard on the new version of Formspider, the application development tool for Oracle PL/SQL developers. Join our special virtual event on July 7th and become one of the first people who'll find out what we have in store for you.

Whether you are an IT Manager trying to modernize your legacy software, an Oracle Forms Developer looking for a new development tool that is suitable to your skill set, a PL/SQL Developer searching for a great way to build web applications, or an APEX Developer who thinks that there must be a better solution, we'll have something for you.

See you on July 7th.

Kind Regards,
Yalim K. Gerger
Founder
Categories: Development

Happy Birthday to Me!

Tim Hall - Wed, 2015-07-01 23:40

Have you guessed what today is?

It’s amazing, finally reaching the age of 26 (+20).

Cheers

Tim…

PS. There’s another anniversary coming tomorrow. :)

Update: Just noticed this on Google.



Three Scenarios for Using Support Identifier Groups

Joshua Solomin - Wed, 2015-07-01 15:52

Support Identifier Groups are a way to manage and organize hardware and software assets in the My Oracle Support (MOS) application. While many customers are already utilizing this feature, Oracle Portal Services has noticed there are still large swaths of customers who have not set up any SI groups, or who have set up SI groups but haven't added any assets to the groups to activate them.

We've put together some quick examples to help Customer User Administrators, or CUAs, set up their Oracle support assets more functionally and logically.

Benefits of Support Identifier Groups (SIGs)
  • Simpler, easier management of your Support Identifiers, hardware, and software assets.
  • Logically organize by geography, asset, or role.
  • Establish defaults so that future hardware and software assets get automatically added to your chosen support identifier.
  • Improve service request (SR) visibility and simplify SR reporting.
  • Streamline access to relevant support information.
What's a Support Identifier?

If you're new to My Oracle Support, an SI is an automatically-generated record "tag" that links purchased Oracle hardware or software to support resources.

Large organizations might have dozens (or possibly hundreds) of SIs scattered across multiple lines of business and geographic areas. In order for a user (say, a database admin or HR manager) to receive support on Oracle products, they must be assigned to an active SI. An SI is "active" as long as it 1) has an asset assigned to it and 2) hasn't expired.

Setting up Groups

So how are SI groups different from a standard SI? From a functional standpoint they're identical; the difference is an SI "group" is one generated by a CUA, rather than one generated automatically by Oracle. Normally assets and users get assigned to whatever support identifier they happen to land in when a purchase is made. This can make it hard to keep track of where assets and assigned users reside—functionally, geographically, based on role, and so on.

By creating their own SI groups, CUAs can organize assets and users as they see fit.

To make the most of Support Identifier Groups, you will need to pre-plan how users and assets are best organized. Once that's defined, you can set up your groups, adding users and assets logically, the way you need them.

Expanded SI Group

In this scenario a group of CUAs might want to reorganize their current SIs to reflect specific projects or lines of business.

When to Use

Keep in mind that assets can reside in more than one SI at a time. The idea behind this scenario is to group assets according to specific projects or operations. An asset might be used for more than one project at a time; the goal is to organize assets so they are easier to track.

Consolidate SIs

In this scenario, the CUAs have a batch of SIs with assets assigned and scattered all over the place. They want to move the assets from their current SIs, and organize them into new SI groups consolidated by location.

When to Use

Location-based operations are obviously good candidates; grouping by location makes it easy to chart how and where assets are being used.

Consolidating SIs can also be useful if you have assets that are used exclusively by one group with little or no crossover between lines of business.

Note that when you choose to remove all active assets from a current SI, that SI gets deactivated automatically. Any users assigned to a deactivated SI would need to be moved to one of the new SI groupings.

Consolidating with a Default SIG

This scenario is similar to the previous consolidation scenario; the main difference is that one of the new SI groups is set up as the default for all future purchases.

Note that all new hardware or software assets will automatically be assigned to the default group going forward.

When to Use

This scenario is useful when you have a specific set of assets and users that are logically segregated from other operations, and you want to keep them separate. Often this might include assets used for specific operations, while the "default" group is for the primary workflow.

Bottom Line

When planned and managed properly, SI groups can help reduce time spent managing Oracle assets. Visit Document 1569482.2 for more information.

BEEZY: Social Network for SharePoint 2013

Yann Neuhaus - Wed, 2015-07-01 13:32




Social networking... everybody is "connected" these days: professional networks, private social networks... There are so many solutions around today. Which one should I use? What are the differences?
Regarding social networks for the workplace, we have already looked at YAMMER; what about BEEZY?

What is Beezy?

Beezy is a social network built inside SharePoint.
Beezy comes in two flavors: on premises behind the firewall on SharePoint Server 2010 and in the cloud on Office365.


Beezy Features

 

  • Collaboration tools: sharing files, events, tasks, images, videos, and links takes just one click. Yes, it does!
  • Groups: Beezy lets you create groups to structure corporate information; setup is user-friendly, and even if a group is shut down, its information is kept.
  • Microblogging: this is a good way to collaborate and build team spirit; you share ideas and get feedback in real time. As with Twitter, you can use tags like hashtags (#) and replies (@), and embed videos from YouTube!
  • Follows: knowledge management is also about effectively filtering information. Users are notified when a change is made to anything they are following, whether conversations or documents.
  • Profiles: a single employee profile brings together professional data and recent activity. You can also link your past activities with LinkedIn and synchronize employee data with Active Directory.

Here is a video link about Beezy.

Beezy or Yammer?

 

The biggest difference between the two tools is the integration.
Beezy is integrated into SharePoint, whereas Yammer gets only a link in the top menu and a web part that doesn't allow uploading files to the microblog.

Because Beezy works within the SharePoint framework, all of your permissions, storage, and compliance policies and procedures remain the same, unlike in a hybrid solution using Yammer/Office365, where the level of access is limited by comparison and requires additional management overhead.


Only good User Experience drives real adoption
As we have seen in other articles, only a good user experience can drive real adoption. The simpler, faster, and more intuitive the tools you put in place, the more readily your employees will jump in.

Conclusion


Beezy offers a complete collaboration tool integrated into SharePoint 2013 / Office 365, easily deployed on SharePoint Server 2013 and easy to use.
To make the right choice, take time to analyze your business needs, try solutions with small groups, gather feedback from users, and then make a decision.


Source: www.beezy.net



OTN Virtual Technology Summit - Spotlight on Middleware Track

OTN TechBlog - Wed, 2015-07-01 09:00
OTN Virtual Technology Summit - Spotlight on Middleware Track

It's All About WebLogic

The Middleware Track for the July 2015 edition of the Oracle Technology Network Virtual Technology Summit brings together three experts on Oracle Fusion Middleware to present how-to technical sessions on WebLogic Server's role in today's middleware architectures. The sessions in this track will focus on security and authentication, service monitoring and exploration, and on WebLogic 12c's new APIs and tools for application development. Other products and technologies covered in these sessions include Oracle SOA Suite, Service Bus, JMX, JAX-RS, JSON, WebSocket, and more.

Register Now!

Middleware Track Sessions:

Debugging Weblogic Authentication
By Maarten Smeets, Senior Oracle SOA / ADF Developer, AMIS
Enterprises often centrally manage login information and group memberships (identity). Many systems use this information to achieve Single Sign On (SSO) functionality, for example. Surprisingly, access to the Weblogic Server Console is often not centrally managed. This session will explain why centralizing management of these identities not only increases security, but can also reduce operational cost and even increase developer productivity. The session will demonstrate several methods for debugging authentication using an external LDAP server in order to lower the bar to apply this pattern. This technically-oriented presentation will be especially useful for people working in operations who are responsible for managing Weblogic Servers.

Real-Time Service Monitoring and Exploration
By Oracle ACE Associate Robert van Molken, Senior Oracle Integration Specialist, AMIS
There is a great deal of value in knowing which services are deployed and correctly running on an Oracle SOA Suite or Service Bus instance. This session will explain and demonstrate how to retrieve this data using JMX and the available Managed Beans on Weblogic. You will learn how the data can be retrieved using existing Java APIs, and how to explore dependencies between Service Bus and SOA Suite. You'll also learn how the retrieved data can be used to create a simple dashboard or even detailed reports.

New APIs and Tools for Application Development in WebLogic 12c
By Shukie Ganguly, Senior Technology Architect, Oracle
WebLogic Server 12.1.3 provides support for innovative APIs and productive Tools for application development, including APIs for JAX-RS 2.0, JSON Processing (JSR 353), WebSocket (JSR 356), and JPA 2.1. This session will provide an overview of each of these APIs, and then demonstrate how you can use these capabilities to simplify the development of server applications accessed by "rich" clients using lightweight web-based protocols such as REST and WebSocket.

OTN Wants You!

Become a member of the OTN Community: Register here to start participating in our online community. Share your expertise with other community members!

NEW REWARDS! If you attend this virtual technology summit and are a member of the Oracle Technology Network Community you will earn 150 points towards our new Rewards and Recognition program (use the same email for both). Read all about it in our FAQ: Oracle Community - Rewards & Recognition FAQ.

A week with Apple Watch: From Cynic to Believer

David Haimes - Wed, 2015-07-01 08:50

I had convinced myself the Apple Watch was an overpriced fitness band, that it wasn't for me, and I was set to get a Garmin to track my running instead. Then, out of the blue, I was given an Apple Watch. So you can certainly put me down as a cynic, but I like to think I am open-minded, so here are my thoughts after a week with the watch.

The experience of getting it set up was surprisingly frustrating: I had to upgrade my phone to iOS 8 before I could activate the watch, and that meant deleting things to free up a few GB of space (to upgrade my operating system, really?). So everything had to wait until after I got home and backed up my phone.

First I got a rather cool visual on my watch to scan with the phone; then it was paired, and I got a screen telling me the model I had bought. OK, so I still could not get the time from this watch, and I had had the thing all day. I'm getting a little impatient at this point.


After waiting about five minutes for it to sync, suddenly a load of my apps, including my email, texts, calendar, Twitter, fitness apps, and more, were available on my watch. This is about to get interesting.

The first thing I noticed is that it is really easy to read, at a glance, the notifications sent to your watch, such as calendar reminders, text messages, and Oracle Social Network updates (glad to see we are quick to the new platform with our own mobile apps). This is good for me: I get a lot of these alerts, and I found a glance at my wrist much nicer than pulling out my phone, unlocking it, and staring at it. This sounds like a very small thing, but it is these small improvements in frequent interactions that make for a great user experience. I also agree with Jeremy Ashley about the huge value in being able to retain eye contact; notifications on my watch are far less obtrusive, and a glance at my wrist is a great experience.

I wanted to try using it for some different things, so I decided to test text messages first: a quick SMS to respond to my wife's text 'ETA?' and let her know what time I was planning to get home.

My wife and I prefer very efficient communications.

So I tap once on that nice Reply button


I can now either pick from a set of pre-defined responses, which would be sent without any other interaction from me, or add a personal touch. This is my wife, after all, so I decide to tap the microphone icon and dictate a response. I speak my answer, see the sound wave at the bottom, and the text comes up perfectly the first time.


So now I tap Done and get a really nice option to either send the audio or just tap on the text and send that. This is a great feature for when the voice-to-text didn't work properly and I don't want to waste time correcting it or speaking it again.


After tapping on the text, I am done. The whole interaction was very fast and felt very natural. At this point I am really starting to like the Apple Watch. Over the next few days I try driving directions, Twitter, my calendar, a variety of fitness apps, and more, and pretty much across the board I find the interactions natural and quick; the fact that I have to pull out my phone less is a much bigger deal than I expected. I can glance down at my watch, see a text or meeting reminder, and carry on a conversation in a way that was not really possible when I had to pull my phone out. The one app I haven't yet mentioned is the time. I haven't worn a watch for over 10 years, and I have realized in the last week that it's much easier to glance at my wrist than to pull out my phone. Who knew?


Categories: APPS Blogs

Announcement: Singapore Oracle Sessions III

Doug Burns - Wed, 2015-07-01 08:01
Yes, it's that time again, although I decided we should delay it a little while when I realised we could take advantage of Lucas Jellema's visit to Singapore!

The date is set for 14th July so there's only a couple of weeks to go. Here is the agenda (SingaporeOracleSessionsIII.pdf) and a map (SOSMap.pdf) to help you get to the venue which is very handily placed near Bugis MRT. All that's required to register is to email me at dougburns at Yahoo.

Thanks to Hemant and Lucas for offering to present and to Vikki Lira of the OTN Oracle ACE team for agreeing to sponsor the event. As Hemant is an Oracle ACE and Lucas an Oracle ACE Director, the evening will have a truly ACE feel added to the usual Singapore vibe.

Can't wait!

P.S. Yes, I never did post a review of SOS II. That's how busy I've been lately :-(

Calling Fusion SOAP Services from Ruby

Angelo Santagata - Wed, 2015-07-01 07:55
Just completed some integration work with a partner of ours using the Ruby language. Given that a lot of startups like Ruby, I thought it would be useful to cut-n-paste the sample code here. This example creates a simple (minimal) opportunity using the SOAP API in Sales Cloud. That said, the code would be almost identical if you were querying HCM data. The approach we took here was to prototype the SOAP call using SOAPUI and then cut-n-paste the SOAP payload into the data variable. In a real industrialized solution I'd create the payloads in template form.

require 'net/http'
require 'uri'

def create_opportunity
  # Change yourhostname.com to your Fusion SOAP endpoint hostname
  uri = URI.parse("https://yourhostname.com/opptyMgmtOpportunities/OpportunityService")

  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = true
  http.read_timeout = 5
  http.open_timeout = 5
  path = uri.request_uri

  # Change the authorization header to contain a Base64-encoded username:password string
  headers = {
    'Content-Type'  => 'text/xml',
    'soapAction'    => 'http://xmlns.oracle.com/apps/sales/opptyMgmt/opportunities/opportunityService/createOpportunity',
    'authorization' => 'Basic bBase64EncodedCredentialHere='
  }

  # data contains the SOAP payload
  data = '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://xmlns.oracle.com/apps/sales/opptyMgmt/opportunities/opportunityService/types/" xmlns:opp="http://xmlns.oracle.com/apps/sales/opptyMgmt/opportunities/opportunityService/" xmlns:rev="http://xmlns.oracle.com/apps/sales/opptyMgmt/revenues/revenueService/" xmlns:not="http://xmlns.oracle.com/apps/crmCommon/notes/noteService" xmlns:not1="http://xmlns.oracle.com/apps/crmCommon/notes/flex/noteDff/" xmlns:rev1="http://xmlns.oracle.com/oracle/apps/sales/opptyMgmt/revenues/revenueService/" xmlns:act="http://xmlns.oracle.com/apps/crmCommon/activities/activitiesService/">
    <soapenv:Header/>
    <soapenv:Body>
      <typ:createOpportunity>
        <typ:opportunity>
          <opp:Name>Joel Test New1</opp:Name>
        </typ:opportunity>
      </typ:createOpportunity>
    </soapenv:Body>
  </soapenv:Envelope>'

  # The SOAP response XML is available in resp.body
  resp = http.post(path, data, headers)
end
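If you're wondering how to produce the Base64-encoded credential for the authorization header, Ruby's standard library can generate it. A minimal sketch (the username and password below are placeholders, not real credentials):

require 'base64'

# Basic auth expects Base64("username:password"); strict_encode64 avoids embedded newlines
credentials = Base64.strict_encode64('some.user:Welcome1')
headers['authorization'] = "Basic #{credentials}"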
!TIP!
A quick test in a SOAP testing tool like JDeveloper's HTTP Analyzer or SOAPUI is a MUST before executing this!
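On the "payloads in template form" remark above: a minimal sketch of that idea using ERB from Ruby's standard library (the template fragment and variable name are just illustrative, not from the original code):

require 'erb'

# A hypothetical fragment of the SOAP payload as an ERB template
template = ERB.new('<opp:Name><%= opportunity_name %></opp:Name>')
opportunity_name = 'Joel Test New1'
payload_fragment = template.result(binding)  # => "<opp:Name>Joel Test New1</opp:Name>"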

OSB11g what is the scope of the for-each loop variable?

Darwin IT - Wed, 2015-07-01 07:36
Earlier I wrote an article on the strange behaviour of the OSB11g For-each activity (Osb-11g: for each is index-variable an integer?). Today I found out some other peculiar behaviour.

I had to loop over a sequence of documents, each referring to an attachment. The first document processed fine, but at the second iteration the attachment couldn't be found. I had used SOAP with Attachments at first, and had since changed my service to use MTOM, so initially I thought it was a SoapUI problem. But adding debug alerts (I find alerts more comfortable than logs) showed me that the second attachment really is referred to in the message. So apparently it's not a SoapUI problem.

What happened? Well, I have the following for-each:
So I loop over a bunch of documents, and at each iteration I get the particular document in the $document variable. Then I get the attachment, transfer it into a base64-encoded string, and replace the content with that base64-encoded string... in the $document variable! Is that wrong? Well, I expected that at the next iteration the $document variable would be replaced by the new occurrence. But apparently changing the for-each variable changes its scope and makes it a 'normal' variable instead of the loop variable. And since I had replaced its contents, it no longer held the reference to the next attachment. The quick fix I applied was to copy the loop variable to a separate variable using an Assign and then make the changes on that variable. At each new iteration, the content of that variable is overwritten by the Assign from the new iteration variable, as sketched below.
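To make that quick fix concrete, here is roughly what the actions inside the For Each look like (the variable and element names here are mine, for illustration, not necessarily those of the original flow):

For Each: $document in $body/doc:documents/doc:document
    Assign:  $document to variable $documentWork           (copy the loop variable first!)
    Replace: ./doc:content in $documentWork with the base64-encoded string
    (all further changes happen on $documentWork, never on $document)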

But it's probably better to use the index variable and make the changes directly on the main variable, in my case $documents.

So, in conclusion: don't change the loop variable!

UPDATE: Oh, by the way, the reason that I changed a separate variable during the loop was that it apparently is not possible to do a Replace or Insert within a loop using the index variable. You'll get an error like: 'XQuery exception: line 34, column 11: {err}XP0008 [{bea-err}XP0008a]: Variable
 "$documentIndex" used but not declared for expression: declare namespace jca =
 'http://www.bea.com/wli/sb/transports/jca';... ' in the expression. So what I did was copy the document array from the body variable to a $documents variable and delete all the documents in the body variable (leaving the documents element). Then I loop over the $documents variable, create a new $documentNew variable from $document with the changed content, and insert that into the body variable again.

Oracle Database Vault 12c Paper by Pete Finnigan

Pete Finnigan - Wed, 2015-07-01 02:35

I wrote a paper about Oracle Database Vault in 12c for SANS last year, and it was published by SANS on their website in January 2015. I also prepared and delivered a webinar about this paper with SANS. The Paper....[Read More]

Posted by Pete On 30/06/15 At 05:38 PM

Categories: Security Blogs

Unique Oracle Security Trainings In York, England, September 2015

Pete Finnigan - Wed, 2015-07-01 02:35

I have just updated all of our Oracle Security training offerings on our company website. I have revamped all class pages and added two-page PDF flyers for each of our four training classes. I have also updated the list....[Read More]

Posted by Pete On 25/06/15 At 04:36 PM

Categories: Security Blogs