
Feed aggregator

Contribution by Angela Golla, Infogram Deputy Editor

Oracle Infogram - Mon, 2014-09-08 13:12
Contribution by Angela Golla, Infogram Deputy Editor

My Oracle Support Patch Conflict Checker Tool
A new My Oracle Support Conflict Checker tool is available from the My Oracle Support Patches & Updates Patch Search results page.

This tool enables you to upload an OPatch inventory and check the patches that you want to apply to your environment for conflicts. If no conflicts are found, you can download the patches. If conflicts are found, the tool looks for an existing resolution that you can download. If no resolution is found, you can request a solution and monitor your request in the Plans region. The details and a training video can be found in Note 1091294.1.

Apache Mesos and Marathon for UnifiedPush Server and WildFly

Matthias Wessendorf - Mon, 2014-09-08 10:55

After reading a bit about Apache Mesos, I wanted to play with it. If you don't know what Mesos is: it's a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks.

While reading up on Apache Mesos I ran into the Marathon framework, developed by the folks at Mesosphere. Marathon is a nice tool to manage tasks on Apache Mesos. The GitHub repo says:

Marathon is an Apache Mesos framework for long-running applications. Given that you have Mesos running as the kernel for your datacenter, Marathon is the init or upstart daemon.

Installation of Apache Mesos

The folks at Mesosphere did a great job writing up different installation guides. As a Mac user, I followed this installation guide. It walks you through installing the required components of the setup:

  • Apache Zookeeper
  • Apache Mesos
  • Mesosphere’s Marathon
Running WildFly and the AeroGear UnifiedPush Server on Apache Mesos

Once the above setup is done and your Apache Mesos system is running, it’s pretty simple to launch a WildFly server and deploy the UnifiedPush Server to it.

Download the following bundles and place them somewhere in your hosted infrastructure:

Now you need to save this JSON:

to a file and submit it to the Marathon server, using curl:

 curl -i -H 'Content-Type: application/json' -d @unifiedpush-server.json localhost:8080/v2/apps

After Apache Mesos is done downloading the artifacts from the uris section, it performs the steps chained in the cmd section. This is basically a set of shell commands that copies the UnifiedPush Server and its database file into WildFly, which uses a PORT provided by the cluster manager instead of the default 8080 HTTP port.

You are done; that's all you need. On the Marathon UI you now see the URL and the PORT of the WildFly server running the UnifiedPush Server:

Marathon Framework Web UI

 

Have fun with WildFly and the UnifiedPush Server on Apache Mesos!


21st Century Education Goes Digital with Oracle WebCenter

WebCenter Team - Mon, 2014-09-08 09:30

Learn how the Digital Campus with WebCenter can address top-of-mind issues for creating exceptional digital learning experiences, put content in context for the user, and optimize business processes.

The global education market is undergoing a fundamental transformation -- from the printed textbook and physical classroom to newer digital, online and mobile experiences. Today, students can learn anywhere, anytime, from anyone on any device, bridging administrative and academic systems into a single universal view.

Oracle WebCenter is at the center of innovation and engagement for any digital enterprise looking to empower exceptional experiences for students, faculty, administrators and researchers. It powerfully connects people, processes, and information with the most complete portfolio of portal, content management, Web experience management and collaboration technologies to enable student success.

Join this special event featuring the University of Pretoria, Fishbowl Solutions and Oracle, whose experts will illustrate successful design patterns and solution delivery for:
  • Student Portals. Create rich, interactive student experiences
  • Digital Repository. Deliver advanced content capture, tagging and sharing while securing enterprise data
  • Admissions. Leverage image capture and business process design to enable improved self-service
Attendees will benefit from the use-case insights and strategies of a world-renowned university, as well as a pre-built solution approach from Oracle and solutions partner Fishbowl, to enable a truly modern digital campus.


Audio information:

Dial-in Numbers: U.S. / Canada: 877-698-7943 (toll free); International: 706-679-0060 (chargeable)
Passcode: solutions2

Register Now: Sep 11, 2014, 10:00 AM PT | 01:00 PM ET

OOW - Focus On Support and Services for Business Analytics

Chris Warticki - Mon, 2014-09-08 08:00
Focus On Support and Services for Business Analytics

Tuesday, Sep 30, 2014 - Conference Sessions

  • Fast-Track Big Data Implementation with the Oracle Big Data Platform
    Suraj Krishnan, Director, Applications & Middleware, Oracle
    Jegannath Sundarapandian, Technical Lead, Oracle
    10:45 AM - 11:30 AM, Intercontinental - Union Square, CON7183

Wednesday, Oct 01, 2014 - Conference Sessions

  • Modernize Your Analytics Solutions
    Rob Reynolds, Senior Director, Oracle
    Hermann Tse, Oracle
    Gary Young, Senior Director, Big Data / Analytics, Oracle
    10:15 AM - 11:00 AM, Moscone West - 3016, CON5238

  • Oracle Analytics and Big Data: Unleash the Value
    Lisa Dearnley-Davison, EMEA Consulting Director for Big Data, Oracle
    Gary Young, Senior Director, Big Data / Analytics, Oracle
    2:00 PM - 2:45 PM, Intercontinental - Telegraph Hill, CON3811

Thursday, Oct 02, 2014 - Conference Sessions

  • Extreme Analytics with Oracle Exalytics
    Phil Scott, Senior Principal Instructor, Oracle
    9:30 AM - 10:15 AM, Moscone West - 3016, CON8594

  • Best Practices for Supporting Oracle Hyperion EPM and Business Intelligence Solutions
    Dave Valociek, Senior Director, Customer Support, Technology - EPM/BI, Oracle
    Mitra Veluri, Senior Principal Technical Support Engineer, Oracle
    12:00 PM - 12:45 PM, Moscone West - 3008, CON8309

My Oracle Support Monday Mix

Monday, Sep 29

Join us for a fun and relaxing happy hour at the annual My Oracle Support Monday Mix. This year's gathering is Monday, September 29 from 6:00 to 8:00 p.m. at the ThirstyBear Brewing Company, just a 3-minute walk from Moscone Center. Admission is free for Premier Support customers with your Oracle OpenWorld badge. Visit our web site for more details: http://www.oracle.com/goto/mondaymix
6:00 PM - 8:00 PM, ThirstyBear Brewing Company

Oracle Support Stars Bar & Mini Briefing Center

Monday, Sep 29 and Tuesday, Sep 30: 9:45 AM - 6:00 PM
Wednesday, Oct 01: 9:45 AM - 3:45 PM
Moscone West Exhibition Hall, 3461 and 3908

Ask the stars of Oracle Support your toughest questions, learn about proactive support tools and advanced support offerings, and win a prize at one of our 10-minute mini-briefings, where you are sure to leave with valuable tips and best practices based on our experience supporting Oracle customers around the globe.

To secure a seat in a session, please use Schedule Builder to add it to your Schedule.

Adding additional agents to OEM12c

DBASolved - Mon, 2014-09-08 07:52

One question I get asked a lot is "how can I add additional agent software to OEM 12c?" The answer is pretty easy: just download it and apply it to the software library. Now what does that mean? In this post, I'll explain how to download additional agents for later deployment to other platforms.

After logging into OEM 12c, go to Setup -> Extensibility -> Self Update (Image 1).

Image 1:

SelfUpdate_Menu.png


Once on the Self Update page (Image 2), there are a few things to notice. The first is that under Status, the Connection Mode is Online. This indicates that OEM has been configured and connected to My Oracle Support (MOS). Also under the Status area are the last refresh time, the last download time, and the last download type. Right under the Status section there is a menu bar with actions that can be performed on this page. Clicking the Check Updates button will check for new updates in all the Types listed. Since we want to focus on agents, click on the folder for Agent Software.

Image 2:

SelfUpdate_Page.png


After clicking on the Agent Software folder, we are taken to the Agent Software Updates page for Self Update (Image 3). On this page, it is clear that a lot of agent software is available. We can also see the Past Activities region, which shows what actions have been performed against a particular version of the agent.

Image 3:
AgentSoftwareUpdatePage.png


On the menu bar (Image 4), we can search the agent software either by description or by example. These search options take text search terms. If we know there is a new release, it can be found by simply entering text like '12.1.0.4'.

Image 4:
SelfUpdate_AgentUpdate_bar.png

 

As we can see in Image 5, searching for agents with version '12.1.0.4' gives us a list of available agents at that version. Notice the Status column of the table. Two statuses are listed here; another possible status is Downloading, which indicates that a new agent is currently being downloaded. The two statuses listed in Image 5 are Applied and Available.

Image 5:
AgentUpdateSearch.png


Let’s define the Agent Software Update Statuses a bit more.  They are as follows:

  1. Available = This version of the agent is available for the OS Platform and can be downloaded
  2. Download in progress = This version of the agent is being downloaded to the OMS
  3. Downloaded = This version of the agent has been downloaded to the OMS
  4. Applied = This version of the agent has been applied to the Software Library and ready to use for agent deployments

Now that we know what the Status column means, how can an agent be downloaded?

While on the Agent Software Updates page, select and highlight the OS platform that an agent is needed for. In this example, let's use "Microsoft Windows x64 (64-bit)" (Image 6). Notice the Status column and the Past Activities section. This agent is available for download. Download the agent by clicking the Download button in the menu bar.

Image 6:
AgentUpdate_Win64.png


After clicking the Download button, OEM will ask you when to run the job (Image 7).  Normally running it immediately is fine.

Image 7:
AgentDownloadJob.png


Once the Status is set to Downloaded, the agent software needs to be applied to the Software Library before it can be used (Image 8). Highlight the agent that was just downloaded and click the Apply button. This will apply the binaries to the Software Library. Also notice the Past Activities section; here we can clearly see what has been done with these agent binaries.

Image 8:
AgentSoftwareDownloaded.png


Once the Apply button has been clicked, OEM presents a message letting you know that the Apply operation will store the agent software in the Software Library (Image 9). Click OK when ready.

Image 9:
AgentUpdateApplyMsg.png


The agent software is finally applied to the Software Library and ready to use (Image 10).

Image 10:
AgentAppliedtoSWLib.png


With the agent now applied to the Software Library, it can be deployed, via push or pull, to Microsoft Windows hosts.

Note: In my experience, most deployments to Microsoft Windows hosts have to be done either with Cygwin or with a silent install. If you would like more information on the silent install approach, I wrote a post on it here.

Enjoy!!

 

about.me: http://about.me/dbasolved

 


Filed under: OEM
Categories: DBA Blogs

Resize your Oracle datafiles down to the minimum without ORA-03297

Yann Neuhaus - Mon, 2014-09-08 06:03

Your datafiles have grown in the past but now you want to reclaim as much space as possible, because you are short on filesystem space, or you want to move some files without moving empty blocks, or your backup size is too large. ALTER DATABASE DATAFILE ... RESIZE can reclaim the space at the end of the datafile, down to the latest allocated extent.

But if you try to get lower, you will get:

ORA-03297: file contains used data beyond requested RESIZE value

So, how do you find this minimum value, which is the datafile's high water mark?
You have the brute-force solution: try a value. If it passes, try a lower value; if it fails, try a higher one.
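
For example, the brute-force approach amounts to something like this (the file name and sizes below are hypothetical, just to illustrate the idea):

-- try a size and let ORA-03297 tell you when it is too low
alter database datafile '/u01/oradata/DB1/users01.dbf' resize 500M;   -- succeeds
alter database datafile '/u01/oradata/DB1/users01.dbf' resize 200M;   -- fails with ORA-03297
-- so the high water mark lies somewhere between 200M and 500M: keep narrowing it down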

Or there is the smart solution: find the datafile high water mark.

You can query DBA_EXTENTS to find that out. But have you tried it on a database with a lot of datafiles? It runs forever, because DBA_EXTENTS does a lot of joins that you don't need here. So my query reads SYS.X$KTFBUE directly, which is the underlying fixed table that gives extent allocation in Locally Managed Tablespaces.
Note that the query may still take a few minutes when you have a lot of tables, because the information is on disk, in each segment header, in the bitmaps used by LMT tablespaces, and you have to read all of them.

Here is my query:

set linesize 1000 pagesize 0 feedback off trimspool on
with
 hwm as (
  -- get highest block id from each datafiles ( from x$ktfbue as we don't need all joins from dba_extents )
  select /*+ materialize */ ktfbuesegtsn ts#,ktfbuefno relative_fno,max(ktfbuebno+ktfbueblks-1) hwm_blocks
  from sys.x$ktfbue group by ktfbuefno,ktfbuesegtsn
 ),
 hwmts as (
  -- join ts# with tablespace_name
  select name tablespace_name,relative_fno,hwm_blocks
  from hwm join v$tablespace using(ts#)
 ),
 hwmdf as (
  -- join with datafiles, put 5M minimum for datafiles with no extents
  select file_name,nvl(hwm_blocks*(bytes/blocks),5*1024*1024) hwm_bytes,bytes,autoextensible,maxbytes
  from hwmts right join dba_data_files using(tablespace_name,relative_fno)
 )
select
 case when autoextensible='YES' and maxbytes>=bytes
 then -- we generate resize statements only if autoextensible can grow back to current size
  '/* reclaim '||to_char(ceil((bytes-hwm_bytes)/1024/1024),999999)
   ||'M from '||to_char(ceil(bytes/1024/1024),999999)||'M */ '
   ||'alter database datafile '''||file_name||''' resize '||ceil(hwm_bytes/1024/1024)||'M;'
 else -- generate only a comment when autoextensible is off
  '/* reclaim '||to_char(ceil((bytes-hwm_bytes)/1024/1024),999999)
   ||'M from '||to_char(ceil(bytes/1024/1024),999999)
   ||'M after setting autoextensible maxsize higher than current size for file '
   || file_name||' */'
 end SQL
from hwmdf
where
 bytes-hwm_bytes>1024*1024 -- resize only if at least 1MB can be reclaimed
order by bytes-hwm_bytes desc
/

and here is a sample output:

/* reclaim    3986M from    5169M */ alter database datafile '/u01/oradata/DB1USV/datafile/o1_mf_undotbs1_o9pfojva_.dbf' resize 1183M;
/* reclaim    3275M from   15864M */ alter database datafile '/u01/oradata/DB1USV/datafile/o1_mf_apcpy_o5pfojni_.dbf' resize 12589M;
/* reclaim    2998M from    3655M */ alter database datafile '/u01/oradata/DB1USV/datafile/o1_mf_cpy_qt_oepfok3n_.dbf' resize 657M;
/* reclaim    2066M from    2250M */ alter database datafile '/u01/oradata/DB1USV/datafile/o1_mf_undotbs2_olpfokc9_.dbf' resize 185M;
/* reclaim     896M from    4000M */ alter database datafile '/u01/oradata/DB1USV/datafile/o1_mf_cpy_ocpfok3n_.dbf' resize 3105M;

You get directly the resize statements, with the reclaimable space in comments.

A few remarks about my query:

  • I generate the resize statements only for datafiles which are autoextensible. This is because I want to be sure that the datafiles can grow back to their original size if needed.
  • When datafile is not autoextensible, or maxsize is not higher than the current size, I only generate a comment.
  • When a datafile has no extents at all, I generate a resize to 5MB. I would like to find the minimum possible size (without getting ORA-3214), but my tests do not yet validate what is documented in MOS. If anyone has an idea, please share.
  • There is probably a cheaper way to get that high water mark, because the ALTER statement returns ORA-03297 much more quickly. The information is probably available in the datafile headers, without going to the segment headers, but I don't know if it is exposed in a safe way. If you have an idea, once again, please share.

Note that I've been using this query for quite a long time. I even think it was my first contribution to the Oracle community on the web, about 9 years ago, on the dba-village website. Since then my contributions have grown to forums, blogs, articles, presentations, ... and tweets. Sharing is probably addictive ;)

ASSM Truncate.

Jonathan Lewis - Mon, 2014-09-08 04:34

Here’s one that started off with a tweet from Kevin Closson, heading towards a finish that shows some interesting effects when you truncate large objects that are using ASSM. To demonstrate the problem I’ve set up a tablespace using system allocation of extents and automatic segment space management (ASSM).  It’s the ASSM that causes the problem, but it requires a mixture of circumstances to create a little surprise.


create
	tablespace test_8k_auto_assm
	datafile	-- OMF
	SIZE 1030M
	autoextend off
	blocksize 8k
	extent management local
	autoallocate
	segment space management auto
;

create table t1 (v1 varchar2(100)) pctfree 99 tablespace test_8k_auto_assm storage(initial 1G);

insert into t1 select user from dual;
commit;

alter system flush buffer_cache;

truncate table t1;

I’ve created a table with an initial definition of 1GB, which means that (in a clean tablespace) the autoallocate option will jump straight to extents of 64MB, with 256 table blocks mapped per bitmap block for a total of 32 bitmap blocks in each 64MB extent. Since I’m running this on 11.2.0.4 and haven’t included “segment creation immediate” in the definition I won’t actually see any extents until I insert the first row.

So here’s the big question – when I truncate this table (using the given command) how much work will Oracle have to do ?

Exchanging notes over Twitter (140 characters at a time) and working from a model of the initial state, it took a little time to understand what was (probably) happening and then produce this silly example, but here's the output from a snapshot of v$session_event for the session across the truncate:


Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
local write wait                                    490           0          83.26        .170          13
enq: RO - fast object reuse                           2           0         104.90      52.451         105
db file sequential read                              47           0           0.05        .001           0
db file parallel read                                 8           0           0.90        .112           0
SQL*Net message to client                            10           0           0.00        .000           0
SQL*Net message from client                          10           0           0.67        .067         153
events in waitclass Other                             2           0           0.04        .018         109

The statistic I want to highlight is the number recorded against “local write wait”: truncating a table of one row we wait for 490 blocks to be written! We also have 8 “db file parallel read”  waits which, according to a 10046 trace file, were reading hundreds of blocks. (I think the most significant time in this test – the RO enqueue wait – may have been waiting for the database writer to complete the work needed for an object checkpoint, but I’m not sure of that.)

The blocks written were the space management bitmap blocks for the extent(s) that remained after the truncate, even the ones that referenced extents above the high water mark for the table. Since we had set the table's initial storage to 1GB, we had a lot of bitmap blocks. At 32 per extent and 16 extents (64MB * 16 = 1GB) we might actually expect something closer to 512 blocks, but Oracle had formatted the last extent with only 8 space management blocks, and the first extent had an extra 2 to cater for the level 2 bitmap block and segment header block, giving: 32 * 15 + 8 + 2 = 490.

As you may have seen above, the impact on the test that Kevin was doing was quite dramatic – he had set the initial storage to 128GB (lots of bitmap blocks), partitioned the table (more bitmap blocks) and was running RAC (so the reads were running into waits for global cache grants).

I had assumed that this type of behaviour happened only with the "reuse storage" option of the truncate command: and I hadn't noticed before that it also appeared even if you didn't reuse storage – but that's probably because the effect applies only to the bit you keep, which may typically mean a relatively small first extent. It's possible, then, that in most cases this is an effect that isn't going to be particularly visible in production systems – but if it is, can you work around it? Fortunately another tweeter asked the question, "What happens if you 'drop all storage'?" Here's the result from adding that clause to my test case:


Event                                             Waits   Time_outs           Csec    Avg Csec    Max Csec
-----                                             -----   ---------           ----    --------    --------
enq: RO - fast object reuse                           1           0           0.08        .079           0
log file sync                                         1           0           0.03        .031           0
db file sequential read                              51           0           0.06        .001           0
SQL*Net message to client                            10           0           0.00        .000           0
SQL*Net message from client                          10           0           0.56        .056         123
events in waitclass Other                             3           0           0.87        .289         186


Looking good – if you don’t keep any extents you don’t need to make sure that their bitmaps are clean. (The “db file sequential read” waits are almost all about the data dictionary, following on from my “flush buffer cache”).

Footnote 1: the same effect appears in 12.1.0.2
Footnote 2: it’s interesting to note that the RO enqueue wait time seems to parallel the local write wait time: perhaps a hint that there’s some double counting going on. (To be investigated, one day).


12cR2

Laurent Schneider - Sun, 2014-09-07 23:49

#db12cR2 release announced for 2016, Doc ID 742060.1

— laurentsch (@laurentsch) September 5, 2014

Say What? Buzzfeed follows up on D2L story with solid reporting

Michael Feldstein - Sun, 2014-09-07 13:14

In a post last month I questioned the growth claims that D2L was pushing to the media based on their recent massive funding round. A key part of the article was pointing out the lack of real reporting from news media.

It is worth noting that not a single media outlet listed by EDUKWEST or quoted above (WSJ, Reuters, Bloomberg, re/code, edSurge, TheStar) challenged or even questioned D2L’s bold claims. It would help if more media outlets didn’t view their job as paraphrasing press releases.

I should give credit where it’s due: Education reporter Molly Hensley-Clancy at Buzzfeed has done some solid reporting with her article out today.

In response to detailed questions from BuzzFeed News about figures to back up its claims of record growth in higher education and internationally, the company released a statement to BuzzFeed News, saying “As a private company, D2L does not publicly disclose these details. The past year has been one of record growth for D2L, culminating in the recent $85 million round of financing.” A representative declined to make the company’s CEO, or any other executive, available for an interview related to the company’s growth.

The stonewalling didn’t come as a surprise to former employees with whom BuzzFeed News spoke.

“The picture they’re painting of growth is not accurate,” said one former employee, who left the company within the last year and asked to remain anonymous, citing his confidentiality agreement with the company. “If you look at actual metrics, they tell a different story. They’re very likely not seeing growth in higher education.”

Molly’s article included discussions with three former D2L employees, an interview with CSU Channel Islands CIO Michael Berman, and a D2L official response (in a manner of speaking). Who would have thought that Buzzfeed would be the source of valuable reporting that challenges the all-too-easy headlines provided through press releases?

Me, for one. If you follow the Buzzfeed education articles, you’ll notice a pattern of this type of reporting – mostly focused on the business of education. Consider the following articles:

In each case, Molly challenges public perceptions, digs up unique information through interviews and document research, and describes the findings in a hard-hitting but balanced article. Buzzfeed is becoming an important source for education news and a welcome addition.

The post Say What? Buzzfeed follows up on D2L story with solid reporting appeared first on e-Literate.

API Integration with Zapier (Gmail to Salesforce)

Kubilay Çilkara - Sun, 2014-09-07 11:42
Recently I attended a training session with General Assembly in London titled What and Why of APIs. It was a training session focusing on the usage of APIs and it was not technical at all. I find these types of training sessions very useful, as they describe the concepts and controlling ideas behind technologies rather than the hands-on, involved implementation details.

What grabbed my attention among the many different and very useful public and private API tools ('thingies') introduced in this training session was Zapier (www.zapier.com).

Zapier looked to me like a platform for integrating APIs with clicks rather than code, with declarative programming. It is a way of automating the internet. What you get when you sign up with them is the ability to use 'Zaps', or create your own Zaps. Zaps are integrations of endpoints, like connecting Foursquare to Facebook or Gmail to Salesforce and syncing them. One of the Zaps available does exactly that: it connects your Gmail emails to Salesforce using the Gmail and Salesforce APIs and lets you sync between them. Not only that, but Zapier Zaps also put triggers on the endpoints which allow you to sync only when certain conditions are true. For example, the Gmail to Salesforce Zap can push your email into a Salesforce Lead only when an email with a certain subject arrives in your Gmail inbox. This is what the Zapier platform looks like:


An individual Zap looks like this and is nothing more than a mapping of the Endpoints with some trigger actions and filters.


The environment is self-documenting and very easy to use. All you do is drag and drop Gmail fields and match them with the Salesforce Lead (or other custom object) fields. Then you configure the sync to happen only under certain conditions/filters. Really easy to set up. The free version runs the sync every 5 hours, which is good enough for me. The paid version runs the sync every 5 minutes.
There is even the capability to track historical runs and trigger a manual run via the Zap menu. See below the 'Run' command, which runs a Zap whenever you like.

In my case I used the tool to create a Zap to do exactly what I just described. My Zap creates a Salesforce Lead automatically in my Salesforce org whenever a 'special' email is sent to me. Great automation!
This is a taste of the 'platform cloud' tools out there for doing API-to-API and app-to-app integrations with clicks and not code. With tools like Zapier, all you really need is imagination!
More links:
Categories: DBA Blogs

Watch Oracle DB Elapsed Time and Wall Time With Parallel Query

Watch Oracle Elapsed Time and Wall Time With Parallel Query
In my recent postings I wrote that, when using Oracle Database parallel query, a SQL statement's wall time should be equal to its elapsed time divided by the number of parallel query slaves, plus some overhead.

That may seem correct, but is it really true? To check I ran an experiment and posted the results here. The results are both obvious and illuminating.

If you don't want to read but would rather just sit on the couch, have a beer and watch TV, you're in luck! I took a clip from my Tuning Oracle Using An AWR Report online video seminar and put it on YouTube. You can watch the video clip on YouTube HERE or simply click on the movie below.



The Math, For Review Purposes
In my previous recent postings I detailed the key time parameters: DB Time, DB CPU, non-idle wait time, elapsed time, parallelism, and effective parallelism. To save you some clicking, the key parameters and their relationships are shown below.

DB Time = DB CPU + NIWT

Elapsed Time = Sum of DB Time

Wall Time = ( Elapsed Time / Parallelism ) + Parallelism Overhead

Wall Time = Elapsed Time / Effective Parallelism
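
For reference, here is a minimal sketch (my own, not Craig's op_results collection script) of where per-session and per-statement numbers like these can be pulled from. Both views report time in microseconds; session 16 below is the monitored server process from the examples that follow, and &monitored_sql_id is a placeholder substitution variable.

-- per-session time model statistics, in seconds
select sid, stat_name, value/1e6 seconds
from   v$sess_time_model
where  sid = 16
and    stat_name in ('DB time','DB CPU');

-- per-statement elapsed and CPU time, in seconds (cumulative, including any PX slave work)
select sql_id, elapsed_time/1e6 elapsed_s, cpu_time/1e6 cpu_s
from   v$sql
where  sql_id = '&monitored_sql_id';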


Test Results: When Oracle Parallel Query was NOT involved.
If you want to see my notes, snippets, etc. they can be found in this text file HERE.

Here is the non-parallel SQL statement.

select /*+ FULL(big2) NOPARALLEL (big2) */ count(*)
into   i_var
from   big2 
where  rownum < 9000000

When the SQL statement was running, I was monitoring the session using my Realtime Session Sampler OSM tool, rss.sql. Since I knew the server process session ID, wanted to sample every second, and wanted to see everything just for this session, this is the rss.sql syntax:
SQL>@rss.sql 16 16 827 827 % % 1
For details on any OSM tool syntax, run the OSM menu script, osmi.sql. You can download my OSM Toolkit HERE.

The rss.sql tool output is written to a text file, which I was doing a "tail -f" on. Here is a very small snippet of the output. The columns are sample number, sample time, session SID, session serial#, Oracle username, CPU or WAIT, SQL_ID, OraPub wait category, wait event, [p1,p2,p3].


We can see the session is consuming CPU and waiting. When waiting, the wait event is "direct path read", which represents asynchronous (we hope) block read requests to the I/O subsystem that will NOT be buffered in the Oracle buffer cache.

Now for the timing results, which are shown in the table below. I took five samples. It's VERY important to know that the wait time (WAIT_TIME_S), DB CPU (DB_CPU_S), and DB Time (DB_TIME_S) values relate ONLY to server process SID 16. In blazing contrast, the wall time (WALL_S), elapsed time (EL_VSQL_S), and SQL statement CPU consumption (CPU_VSQL_S) relate to the entire SQL_ID statement execution.

Here are the "no parallel" experimental results.
SQL> select * from op_results;

SAMPLE_NO WALL_S EL_VSQL_S CPU_VSQL_S WAIT_TIME_S DB_CPU_S DB_TIME_S
---------- ---------- ---------- ---------- ----------- ---------- ----------
1 35.480252 35.470015 9.764407 24.97 9.428506 34.152294
2 35.670021 35.659748 9.778554 25.15 9.774984 35.541861
3 35.749926 35.739473 9.774375 25.12 9.31266 34.126285
4 35.868076 35.857752 9.772321 25.32 9.345398 34.273479
5 36.193062 36.18378 9.712962 25.46 9.548465 35.499693
Let's check the math. For simplicity and clarity, please allow me to round and use only sample 5.
DB_TIME_S = DB_CPU_S + WAIT_TIME_S
35.5 = 9.5 + 25.5 = 35.0
The DB Time is pretty close (35.5 vs 35.0). Close enough to demonstrate the time statistic relationships.
Elapsed Time (EL_VSQL_S) = DB_TIME_S
35.5 = 34.2
The Elapsed Time is off by around 4% (35.5 vs 34.2), but still close enough to demonstrate the time statistic relationships.
Wall Time (WALL_S) = Elapsed Time (EL_VSQL_S) / Effective Parallelism
35.5 = 35.5 / 1
Nice! The Wall Time results matched perfectly. (35.5 vs 35.5)

To summarize: in a non-parallel query (i.e., single server process) situation, the time math results are what we expected (and hoped for)!


Test Results: When Oracle Parallel Query WAS involved.
The only difference between the "non-parallel" SQL statement above and the SQL statement below is the parallel hint. Below is the "parallel" SQL statement.
select /*+  FULL(big2) PARALLEL(big2,3)  */ count(*) into i_var from big2 where rownum < 9000000
When the "parallel" SQL statement was running, because Oracle parallel query was involved (resulting in multiple related Oracle sessions), I needed to open up the session ID (and serial#) ranges in my rss.sql monitoring to include all sessions. I still sampled every second. Here is the rss.sql syntax:
SQL>@rss.sql 0 9999 0 9999 % % 1
The tool output is written to a text file, which I was doing a "tail -f" on. Here is a very small snippet of the output. I manually inserted the blank lines to make it easier to see the different sample periods.


There is only one SQL statement being run on this idle test system, and because there is no DML involved, we don't see much background process activity. If you look closely above, session 168 (see the third column) must be the log writer process because the wait event is "log file parallel write". I checked, and session 6 is a background process as well.

It's no surprise to typically see only four sessions involved: one session is the parallel query coordinator, plus the three parallel query slaves! Interestingly, the main server process session that I executed the query from is session number 16. It never showed up in any of my samples! I suspect it was "waiting" on an idle wait event, and I'm only showing processes consuming CPU or waiting on a non-idle wait event. Very cool.

Now for the timing results. I took five samples. Again, it's VERY important to know that the wait time (WAIT_TIME_S), DB CPU (DB_CPU_S), and DB Time (DB_TIME_S) values relate ONLY to the calling server process, which in this case is session 16. In blazing contrast, the wall time (WALL_S), elapsed time (EL_VSQL_S), and SQL statement CPU consumption (CPU_VSQL_S) relate to the entire SQL statement execution.

Here are the "parallel" experimental results.
 SQL>  select * from op_results;

SAMPLE_NO WALL_S EL_VSQL_S CPU_VSQL_S WAIT_TIME_S DB_CPU_S DB_TIME_S
---------- ---------- ---------- ---------- ----------- ---------- ----------
1 46.305951 132.174453 19.53818 .01 4.069579 4.664083
2 46.982111 132.797536 19.371063 .02 3.809439 4.959602
3 47.79761 134.338069 19.739735 .02 4.170921 4.555491
4 45.97324 131.809249 19.397557 .01 3.790226 4.159572
5 46.053922 131.765983 19.754143 .01 4.062703 4.461175
Let's check the math. For simplicity and clarity, please allow me to round and use sample 5.
DB_TIME_S = DB_CPU_S + WAIT_TIME_S
4.5 = 4.1 + 0
The DB Time shown above is kind of close... 10% off (4.5 vs 4.1). But there is certainly some timing error in my collection script. I take the position that this is close enough to demonstrate the time statistic relationships. Now look below.
Elapsed Time (EL_VSQL_S)  = DB_TIME_S
131.7 != 4.5
Whoa! What happened here? (131.7 vs 4.5) Actually, everything is OK (so far, anyway) because the DB Time relates to the session (Session ID 16), whereas the elapsed time is ALL the DB Time for ALL the processes involved in the SQL statement. Since parallel query is involved, resulting in four additional sessions (1 coordinator, 3 slaves), we would expect the elapsed time to be greater than the DB Time. Now let's look at the wall time.
Wall Time (WALL_S) = ( Elapsed Time (EL_VSQL_S) / Parallelism ) + overhead
46.1 = ( 131.8 / 3 ) + 2.2
Nice! Clearly the effective parallelism is a little less than 3, because there is some overhead (2.2). But the numbers make sense because:

1. The wall time is less than the elapsed time because parallel query is involved.

2. The wall time is close to the elapsed time divided by the parallelism. And we can even see the parallelism overhead.

So it looks like our time math is correct!


Reality And The AWR Or Statspack Report
This is really important. In the SQL Statement section of any AWR or Statspack Report, you will see the total elapsed time over the snapshot interval and perhaps the average SQL ID elapsed time per execution. So what is the wall time? What are users experiencing? The short answer is, we do NOT have enough information.

To know the wall time, we need to know the parallelism situation. If you are NOT using parallel query, then, based on the time math demonstrated above, the elapsed time per execution will be close to what the user is experiencing (unless there is an issue outside of Oracle). However, if parallelism is involved, you can expect the wall time (i.e., the user's experience) to be much less than the elapsed time per execution shown in the AWR or Statspack report.
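
As a rough illustration (my own sketch, not from the post), AWR's SQL-level history can at least tell you whether PX slaves were involved for a given SQL_ID: DBA_HIST_SQLSTAT records both executions and PX server executions per snapshot, and its time columns are in microseconds. The &begin_snap and &end_snap substitution variables are placeholders.

-- hedged sketch: elapsed seconds per execution and PX slave executions per execution
select sql_id,
       round(elapsed_time_delta/1e6/nullif(executions_delta,0),1) elapsed_s_per_exec,
       round(px_servers_execs_delta/nullif(executions_delta,0),1) px_slaves_per_exec
from   dba_hist_sqlstat
where  snap_id between &begin_snap and &end_snap
and    executions_delta > 0
order by elapsed_s_per_exec desc;

If px_slaves_per_exec is greater than zero, the elapsed time per execution will overstate the wall time the user actually sees.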

Another way of looking at this is: if a user reports that a query is taking 10 seconds, but the average elapsed time shows as 60 seconds, parallel query is probably involved. Also, as I mentioned above, never forget that the average value is not always the typical value. (More? Check out my video seminar entitled Using Skewed Data To Your Advantage HERE.)

Thanks for reading!

Craig.
https://resources.orapub.com/OraPub_Online_Training_About_Oracle_Database_Tuning_s/100.htm
You can watch all the online seminar introductions for free on YouTube!
If you enjoy my blog, subscribing will ensure you get a short, concise email about each new posting. Look for the form on this page.

 P.S. If you want me to respond to a comment or you have a question, please feel free to email me directly at craig@orapub .com.
Categories: DBA Blogs

RAC Database Backups

Hemant K Chitale - Sun, 2014-09-07 08:20
In 11gR2 Grid Infrastructure and RAC


UPDATE : 13-Sep-14 : How to run the RMAN Backup using server sessions concurrently on each node.  Please scroll down to the update.


In a RAC environment, the database backups can be executed from any one node or distributed across multiple nodes of the cluster.

In my two-node environment, I have backups configured to go to an FRA.  This is defined by the instance parameter "db_recovery_file_dest" (and "db_recovery_file_dest_size").  This can be a shared location -- e.g. an ASM DiskGroup or a ClusterFileSystem.  Therefore, the parameter should ideally be the same across all nodes so that backups may be executed from any or multiple nodes without changing the backup location.

Running the RMAN commands from node1 :
[root@node1 ~]# su - oracle
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Sep 7 21:56:46 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> show parameter db_recovery_file

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string +FRA
db_recovery_file_dest_size big integer 4000M
SQL>
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
-sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Sep 7 21:57:49 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: RACDB (DBID=762767011)

RMAN> list backup summary;

using target database control file instead of recovery catalog

List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
12 B F A DISK 26-NOV-11 1 1 YES TAG20111126T224849
13 B A A DISK 26-NOV-11 1 1 YES TAG20111126T230108
16 B A A DISK 16-JUN-14 1 1 YES TAG20140616T222340
18 B A A DISK 16-JUN-14 1 1 YES TAG20140616T222738
19 B F A DISK 16-JUN-14 1 1 NO TAG20140616T222742
20 B F A DISK 05-JUL-14 1 1 NO TAG20140705T173046
21 B F A DISK 16-AUG-14 1 1 NO TAG20140816T231412
22 B F A DISK 17-AUG-14 1 1 NO TAG20140817T002340

RMAN>
RMAN> backup as compressed backupset database plus archivelog delete input;


Starting backup at 07-SEP-14
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=111 RECID=77 STAMP=857685630
input archived log thread=2 sequence=37 RECID=76 STAMP=857685626
input archived log thread=2 sequence=38 RECID=79 STAMP=857685684
input archived log thread=1 sequence=112 RECID=78 STAMP=857685681
channel ORA_DISK_1: starting piece 1 at 07-SEP-14
channel ORA_DISK_1: finished piece 1 at 07-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_07/annnf0_tag20140907t220131_0.288.857685699 tag=TAG20140907T220131 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:09
channel ORA_DISK_1: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_1_seq_111.307.857685623 RECID=77 STAMP=857685630
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_37.309.857685623 RECID=76 STAMP=857685626
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_38.277.857685685 RECID=79 STAMP=857685684
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_1_seq_112.270.857685681 RECID=78 STAMP=857685681
Finished backup at 07-SEP-14

Starting backup at 07-SEP-14
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA1/racdb/datafile/system.257.765499365
input datafile file number=00002 name=+DATA2/racdb/datafile/sysaux.256.765502307
input datafile file number=00003 name=+DATA1/racdb/datafile/undotbs1.259.765500033
input datafile file number=00004 name=+DATA2/racdb/datafile/undotbs2.257.765503281
input datafile file number=00006 name=+DATA1/racdb/datafile/partition_test.265.809628399
input datafile file number=00007 name=+DATA1/racdb/datafile/hemant_tbs.266.852139375
input datafile file number=00008 name=+DATA3/racdb/datafile/new_tbs.256.855792859
input datafile file number=00005 name=+DATA1/racdb/datafile/users.261.765500215
channel ORA_DISK_1: starting piece 1 at 07-SEP-14
channel ORA_DISK_1: finished piece 1 at 07-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_07/nnndf0_tag20140907t220145_0.270.857685709 tag=TAG20140907T220145 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:06:15
Finished backup at 07-SEP-14

Starting backup at 07-SEP-14
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=113 RECID=81 STAMP=857686085
input archived log thread=2 sequence=39 RECID=80 STAMP=857686083
channel ORA_DISK_1: starting piece 1 at 07-SEP-14
channel ORA_DISK_1: finished piece 1 at 07-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_07/annnf0_tag20140907t220807_0.307.857686087 tag=TAG20140907T220807 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_1_seq_113.309.857686085 RECID=81 STAMP=857686085
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_39.277.857686083 RECID=80 STAMP=857686083
Finished backup at 07-SEP-14

Starting Control File and SPFILE Autobackup at 07-SEP-14
piece handle=+FRA/racdb/autobackup/2014_09_07/s_857686089.277.857686097 comment=NONE
Finished Control File and SPFILE Autobackup at 07-SEP-14

RMAN>

Note how the "PLUS ARCHIVELOG" specification also included archivelogs from both threads (instances) of the database.

Let's verify these details from the instance on node2 :

[root@node2 ~]# su - oracle
-sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Sep 7 22:11:00 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: RACDB (DBID=762767011)

RMAN>

RMAN> list backup of database completed after 'trunc(sysdate)-1';

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
24 Full 258.21M DISK 00:06:12 07-SEP-14
BP Key: 24 Status: AVAILABLE Compressed: YES Tag: TAG20140907T220145
Piece Name: +FRA/racdb/backupset/2014_09_07/nnndf0_tag20140907t220145_0.270.857685709
List of Datafiles in backup set 24
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 1160228 07-SEP-14 +DATA1/racdb/datafile/system.257.765499365
2 Full 1160228 07-SEP-14 +DATA2/racdb/datafile/sysaux.256.765502307
3 Full 1160228 07-SEP-14 +DATA1/racdb/datafile/undotbs1.259.765500033
4 Full 1160228 07-SEP-14 +DATA2/racdb/datafile/undotbs2.257.765503281
5 Full 1160228 07-SEP-14 +DATA1/racdb/datafile/users.261.765500215
6 Full 1160228 07-SEP-14 +DATA1/racdb/datafile/partition_test.265.809628399
7 Full 1160228 07-SEP-14 +DATA1/racdb/datafile/hemant_tbs.266.852139375
8 Full 1160228 07-SEP-14 +DATA3/racdb/datafile/new_tbs.256.855792859

RMAN>

Yes, today's backup is visible from node2 as it retrieves the information from the controlfile that is common across all the instances of the database.

How are the archivelogs configured ?

RMAN> exit


Recovery Manager complete.
-sh-3.2$
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Sep 7 22:15:51 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 39
Next log sequence to archive 40
Current log sequence 40
SQL>
SQL> show parameter db_recovery_file_dest

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string +FRA
db_recovery_file_dest_size big integer 4000M
SQL>

Both instances have the same destination configured for archivelogs and backups.
.
.
.
=======================================================
UPDATE : 13-Sep-14 :  Running the backup concurrently from both nodes 

There are two ways to have the RMAN Backup run from both nodes.
A.   Issue a separate RMAN BACKUP DATAFILE or BACKUP TABLESPACE command from each node, such that the two nodes have independent lists of Datafiles / Tablespaces (a rough sketch is shown below)

B.  Issue a BACKUP DATABASE command from one node but with two channels open, one against each node.

Here, method A is easy to do but difficult to control as you add Tablespaces and Datafiles.  So, I will demonstrate method B.
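
For comparison, Method A would look something like the following rough sketch, run separately on each node. The split of tablespaces between the nodes here is hypothetical and not from the original post; only the tablespace names themselves come from the datafiles shown earlier.

RMAN> # on node1: back up one half of the tablespaces
RMAN> backup as compressed backupset tablespace SYSTEM, UNDOTBS1, PARTITION_TEST, HEMANT_TBS;

RMAN> # on node2: back up the remaining tablespaces
RMAN> backup as compressed backupset tablespace SYSAUX, UNDOTBS2, USERS, NEW_TBS;

Any tablespace added later has to be added to one of the two lists by hand, which is exactly the maintenance issue that makes Method B easier to control.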

I begin by ensuring that :
a.  I have REMOTE_LOGIN_PASSWORDFILE configured so that I can make a SQLNet connection from node1 to node2  (RMAN requires the connect AS SYSDBA in 11g)
b.  I have a TNSNAMES.ORA entry configured to the instance on node2 (note that the service name is common across all [both] instances in the Cluster)

-sh-3.2$ hostname
node1.mydomain.com
-sh-3.2$ id
uid=800(oracle) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba),1021(dba)
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sat Sep 13 23:22:09 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> show parameter remote_login_passwordfile;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
remote_login_passwordfile string EXCLUSIVE
SQL> quit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
-sh-3.2$ cat $ORACLE_HOME/network/admin/tnsnames.ora
# tnsnames.ora.node1 Network Configuration File: /u01/app/oracle/rdbms/11.2.0/network/admin/tnsnames.ora.node1
# Generated by Oracle configuration tools.

RACDB_1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = RACDB)
)
)

RACDB_2 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node2)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = RACDB)
)
)

-sh-3.2$

Next, I start RMAN and allocate two Channels, one for each Instance (on each Node in the Cluster) and issue a BACKUP DATABASE that is automatically executed across both Channels.

-sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sat Sep 13 23:23:24 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: RACDB (DBID=762767011)

RMAN> run
2> {allocate channel ch1 device type disk connect 'sys/manager@RACDB_1';
3> allocate channel ch2 device type disk connect 'sys/manager@RACDB_2';
4> backup as compressed backupset database plus archivelog delete input;
5> }

using target database control file instead of recovery catalog
allocated channel: ch1
channel ch1: SID=61 instance=RACDB_1 device type=DISK

allocated channel: ch2
channel ch2: SID=61 instance=RACDB_2 device type=DISK


Starting backup at 13-SEP-14
current log archived
channel ch1: starting compressed archived log backup set
channel ch1: specifying archived log(s) in backup set
input archived log thread=2 sequence=40 RECID=82 STAMP=857687640
input archived log thread=1 sequence=114 RECID=84 STAMP=858204801
input archived log thread=2 sequence=41 RECID=83 STAMP=857687641
input archived log thread=1 sequence=115 RECID=86 STAMP=858208025
channel ch1: starting piece 1 at 13-SEP-14
channel ch2: starting compressed archived log backup set
channel ch2: specifying archived log(s) in backup set
input archived log thread=2 sequence=42 RECID=85 STAMP=858208000
input archived log thread=1 sequence=116 RECID=87 STAMP=858209078
input archived log thread=2 sequence=43 RECID=88 STAMP=858209079
channel ch2: starting piece 1 at 13-SEP-14
channel ch2: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t232445_0.279.858209109 tag=TAG20140913T232445 comment=NONE
channel ch2: backup set complete, elapsed time: 00:00:26
channel ch2: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_2_seq_42.296.858207997 RECID=85 STAMP=858208000
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_116.263.858209079 RECID=87 STAMP=858209078
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_2_seq_43.265.858209079 RECID=88 STAMP=858209079
channel ch1: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t232445_0.275.858209099 tag=TAG20140913T232445 comment=NONE
channel ch1: backup set complete, elapsed time: 00:00:56
channel ch1: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_40.309.857687641 RECID=82 STAMP=857687640
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_114.295.858204777 RECID=84 STAMP=858204801
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_41.293.857687641 RECID=83 STAMP=857687641
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_115.305.858208001 RECID=86 STAMP=858208025
Finished backup at 13-SEP-14

Starting backup at 13-SEP-14
channel ch1: starting compressed full datafile backup set
channel ch1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA1/racdb/datafile/system.257.765499365
input datafile file number=00004 name=+DATA2/racdb/datafile/undotbs2.257.765503281
input datafile file number=00007 name=+DATA1/racdb/datafile/hemant_tbs.266.852139375
input datafile file number=00008 name=+DATA3/racdb/datafile/new_tbs.256.855792859
channel ch1: starting piece 1 at 13-SEP-14
channel ch2: starting compressed full datafile backup set
channel ch2: specifying datafile(s) in backup set
input datafile file number=00002 name=+DATA2/racdb/datafile/sysaux.256.765502307
input datafile file number=00003 name=+DATA1/racdb/datafile/undotbs1.259.765500033
input datafile file number=00006 name=+DATA1/racdb/datafile/partition_test.265.809628399
input datafile file number=00005 name=+DATA1/racdb/datafile/users.261.765500215
channel ch2: starting piece 1 at 13-SEP-14
channel ch2: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/nnndf0_tag20140913t232557_0.293.858209175 tag=TAG20140913T232557 comment=NONE
channel ch2: backup set complete, elapsed time: 00:12:02
channel ch1: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/nnndf0_tag20140913t232557_0.305.858209163 tag=TAG20140913T232557 comment=NONE
channel ch1: backup set complete, elapsed time: 00:13:06
Finished backup at 13-SEP-14

Starting backup at 13-SEP-14
current log archived
channel ch1: starting compressed archived log backup set
channel ch1: specifying archived log(s) in backup set
input archived log thread=1 sequence=117 RECID=90 STAMP=858209954
channel ch1: starting piece 1 at 13-SEP-14
channel ch2: starting compressed archived log backup set
channel ch2: specifying archived log(s) in backup set
input archived log thread=2 sequence=44 RECID=89 STAMP=858209952
channel ch2: starting piece 1 at 13-SEP-14
channel ch1: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t233915_0.265.858209957 tag=TAG20140913T233915 comment=NONE
channel ch1: backup set complete, elapsed time: 00:00:03
channel ch1: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_117.309.858209953 RECID=90 STAMP=858209954
channel ch2: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t233915_0.263.858209957 tag=TAG20140913T233915 comment=NONE
channel ch2: backup set complete, elapsed time: 00:00:03
channel ch2: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_2_seq_44.295.858209951 RECID=89 STAMP=858209952
Finished backup at 13-SEP-14

Starting Control File and SPFILE Autobackup at 13-SEP-14
piece handle=+FRA/racdb/autobackup/2014_09_13/s_858209961.295.858209967 comment=NONE
Finished Control File and SPFILE Autobackup at 13-SEP-14
released channel: ch1
released channel: ch2

RMAN>

We can see that Channel ch1 was connected to Instance RACDB_1 and ch2 was connected to RACDB_2. Also, the messages indicate that both channels were running concurrently.
I also verified that the Channels did connect to each instance :

[root@node1 ~]# ps -ef |grep RACDB_1 |grep LOCAL=NO
oracle 11205 1 1 23:24 ? 00:00:00 oracleRACDB_1 (LOCAL=NO)
You have new mail in /var/spool/mail/root
[root@node1 ~]# ps -ef |grep RACDB_1 |grep LOCAL=NO
oracle 11205 1 3 23:24 ? 00:00:04 oracleRACDB_1 (LOCAL=NO)
[root@node1 ~]# ps -ef |grep RACDB_1 |grep LOCAL=NO
oracle 11205 1 4 23:24 ? 00:00:49 oracleRACDB_1 (LOCAL=NO)
[root@node1 ~]#
[root@node2 ~]# ps -ef |grep RACDB_2 | grep LOCAL=NO
oracle 6233 1 0 23:24 ? 00:00:00 oracleRACDB_2 (LOCAL=NO)
You have new mail in /var/spool/mail/root
[root@node2 ~]# ps -ef |grep RACDB_2 |grep LOCAL=NO
oracle 6233 1 0 23:24 ? 00:00:00 oracleRACDB_2 (LOCAL=NO)
[root@node2 ~]# ps -ef |grep RACDB_2 |grep LOCAL=NO
oracle 6233 1 2 23:24 ? 00:00:24 oracleRACDB_2 (LOCAL=NO)
[root@node2 ~]#

As soon as I closed the RMAN (client) session, the two server processes also terminated.

This method (Method B) allows me to run an RMAN client session from any node in the cluster and have RMAN server sessions running concurrently across all or some nodes of the cluster, provided I have not designated a single, specific node as my RMAN backups node.

Edit: I have demonstrated using ALLOCATE CHANNEL to run an ad hoc, interactive backup.  If you want to create a persistent script, you might want to use CONFIGURE CHANNEL and have the SYS password persisted in the configuration (saved in the controlfile) so that it does not appear in "plain text" in the script.
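
For reference, here is a minimal sketch of both approaches. It assumes TNS aliases named after each instance (RACDB_1, RACDB_2); the password and alias names are placeholders, not values from this environment, so adjust them before use.

# Ad hoc, interactive run (Method B): allocate one channel per instance inside a RUN block
run {
  allocate channel ch1 device type disk connect 'sys/MyPassword@RACDB_1';
  allocate channel ch2 device type disk connect 'sys/MyPassword@RACDB_2';
  backup as compressed backupset database;
  backup as compressed backupset archivelog all delete input;
}

# Persistent alternative: store the channel configuration (including credentials) in the
# controlfile, so the backup script itself does not need to carry the SYS password
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT 'sys/MyPassword@RACDB_1';
CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT 'sys/MyPassword@RACDB_2';

With the CONFIGURE settings in place, a plain BACKUP DATABASE command in a script will spread its work across both instances without any credentials appearing in the script itself.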

.
.
.

Categories: DBA Blogs

Calculating HTML ID for ADF UI Table Row

Andrejus Baranovski - Sun, 2014-09-07 07:22
Each row in an ADF UI table is assigned an ID; this is how rows are referenced in HTML. I had a blog post describing how to set focus for a newly inserted row - Improving ADF UI Table CRUD Functionality with Auto Focus. There I'm getting the ID for the selected row using the getClientRowKey method, which returns the row identifier used in HTML. A blog reader was trying to use the same method to get the ID for any row from the table, but it didn't work for him. The trick is how to construct the key properly, so that it can be used to retrieve the ID. I'm going to describe it in this quick sample application below.

The sample application UI is straightforward - there is a Get Cell Component Name button; when pressed, it calls a listener method and prints the ID for each row in the CompName column:


Row IDs are retrieved correctly, as you can see in the picture below. With access to the ID, you could set focus on any row cell you want, not only the cell from the current row as in the previous post. Printed row IDs:


You must use the getClientRowKey method to retrieve the row ID, and a proper row key must be supplied to get the correct ID. When you are working with the selected row key there are no issues - but if you want to get the ID for an arbitrary row key, there is one thing to keep in mind: you must wrap the row key into a collection (for example, an ArrayList). Use this wrapped key to retrieve the client row key, as in the sketch below:
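
For illustration only, here is a minimal sketch of the idea in a managed bean. The component binding getter (getCompTable), the listener name, and the variable names are my own assumptions and are not taken from the sample application:

import java.util.ArrayList;
import javax.faces.context.FacesContext;
import javax.faces.event.ActionEvent;
import oracle.adf.view.rich.component.rich.data.RichTable;
import oracle.jbo.Key;
import oracle.jbo.Row;
import oracle.jbo.uicli.binding.JUCtrlHierBinding;
import org.apache.myfaces.trinidad.model.CollectionModel;

public void getCellComponentName(ActionEvent actionEvent) {
    FacesContext facesContext = FacesContext.getCurrentInstance();
    RichTable table = this.getCompTable(); // af:table component binding - assumed name

    // Access the tree binding behind the table, to iterate over all rows
    CollectionModel model = (CollectionModel) table.getValue();
    JUCtrlHierBinding treeBinding = (JUCtrlHierBinding) model.getWrappedData();

    for (Row row : treeBinding.getViewObject().getAllRowsInRange()) {
        // Wrap the row key into a collection - the raw oracle.jbo.Key alone will not work
        ArrayList<Key> keyList = new ArrayList<Key>(1);
        keyList.add(row.getKey());

        // Resolve the client row key, which is the identifier used in the generated HTML row IDs
        String clientRowKey =
            table.getClientRowKeyManager().getClientRowKey(facesContext, table, keyList);
        System.out.println("Row ID: " + clientRowKey);
    }
}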


Download sample application - ADFTableFocusApp_v2.zip.

An idealized log management and analysis system — from whom?

DBMS2 - Sun, 2014-09-07 06:38

I’ve talked with many companies recently that believe they are:

  • Focused on building a great data management and analytic stack for log management …
  • … unlike all the other companies that might be saying the same thing :)
  • … and certainly unlike expensive, poorly-scalable Splunk …
  • … and also unlike less-focused vendors of analytic RDBMS (which are also expensive) and/or Hadoop distributions.

At best, I think such competitive claims are overwrought. Still, it’s a genuinely important subject and opportunity, so let’s consider what a great log management and analysis system might look like.

Much of this discussion could apply to machine-generated data in general. But right now I think more players are doing product management with an explicit conception either of log management or event-series analytics, so for this post I’ll share that focus too.

A short answer might be “Splunk, but with more analytic functionality and more scalable performance, at lower cost, plus numerous coupons for free pizza.” A more constructive and bottoms-up approach might start with: 

  • Agents for any kind of machine that admits streams of data.
  • Parsers that:
    • Immediately identify explicit name-value pairs in popular formats such as JSON or XML.
    • Also immediately extract a significant fraction of all implicit fields in text strings — timestamps for sure, but also a lot else. (Splunk is the current gold standard for such capabilities.)
    • Allow you to easily write rules for more such extractions.
  • Immediate indexing in line with everything the parsers do.
  • Easy import of log files, relational tables, and other relevant data structures.
  • Queries that can exploit all the indexes, at least up to the functionality level of SQL 2003 analytics (including windowing) and StreamSQL, of course with …
  • … blazing scalable performance.
  • Strong workload management and concurrent performance support. (Teradata is the gold standard for such capabilities in the analytic sphere.)
  • Various other mature-DBMS features, e.g. in backup, manageability, and uptime.

Further, there would be numerous styles of business intelligence interface, at least including:

  • Generic BI like we generally see for tabular data.
  • Constantly-changing displays of streaming data.
  • BI with an event-series orientation.
  • Strong alerting.
  • Mobile versions of everything.

And there would be good support for quick-turnaround, easily-operationalized predictive analytics, of the sort that’s fairly central to the visions for Kiji and Spark.

The data management part of that is particularly hard, in that:

  • Different architectures seem naturally well-suited for different parts of the problem.
  • Maturing a new data management product is always difficult, costly and slow.

My thoughts on strengths and weaknesses of some obvious log data management contenders start:

  • Oracle, IBM, and Microsoft have a lot of heft in all things database. But while each of those vendors has great resources and occasionally impressive pieces of new database engineering, none shows much evidence of framing, let alone solving, the problem in the right way(s).
  • SAP owns Sybase, HANA, several old CEP companies, and Business Objects. Add them to the Oracle/IBM/Microsoft list.
  • Teradata has a lot going for them. Their core analytic data management strengths are obvious. They’ve owned Aster for a while, and Aster innovated nPath quite some time ago. They recently added Hadapt, a leader in schema-on-need, as well as Revelytix, which has some good ideas in dataset management. Like most other DBMS vendors, however, Teradata doesn’t yet have much of a story for streaming data, and anyhow the most optimistic case for Teradata involves the difficult task of stitching together disparate data management technologies.
  • HP Vertica has a decent position as well. Probably more proven in general concurrent, scalable performance than others in their peer group (Netezza, Greenplum, et al.), Vertica also was relatively early in innovations relevant to log analysis, including a range of time series/event series features and its own schema-on-need effort. Vertica was also founded by people who were also streaming pioneers (there were heavily overlapping groups of academics behind StreamBase, Vertica and VoltDB), but it’s not clear how that background is reflected in present Vertica product.
  • Splunk, of course, has a complete stack. At the data acquisition and parsing layers, it’s second to none, and it has a considerable set of log-appropriate BI capabilities as well. And for data management it in effect is stitching together two different inverted-list data stores, plus Hadoop.
  • Hadoop distribution vendors such as Cloudera, MapR or Hortonworks typically bundle a range of relevant capabilities. HDFS (Hadoop Distributed File System) is the default place to dump entire logs. In most distros, Spark offers a new approach to streaming. Impala, Drill and so on offer query. Flume gathers the log data in the first place. But a lot of the cooler capabilities are immature or unproven, and in some cases that’s putting it mildly.

In the interest of length, I’ll omit discussion of smaller vendors, except to say that Platfora’s integrated-stack event series analytics story deserves attention, and I’m disappointed that I never hear about Sumo Logic. And I don’t know a lot about companies positioned as SIEM (Security Information and Event Management), especially now that SenSage has left the scene.

Categories: Other

Automatically Applying Get Posted Attribute Method for Row Refresh

Andrejus Baranovski - Sat, 2014-09-06 10:44
There is an out-of-the-box ADF BC method available to refresh the current row; see this post for details - Refreshing Single Row Without Full Rollback. There could be use cases where the refresh method is not sufficient (particularly for a row with dependent LOV's) - it may not reset data correctly. Also, an extra SQL query is sent to the DB to fetch the row data by key. Even though it works well most of the time, it is still good to know the alternative. I'm going to present an alternative row refresh approach here, using the getPostedAttribute method.

User could edit data in the current row:


Press Refresh button:


All attributes are refreshed and synchronised back to the original values, currently available in the database:


UI data is synchronised with the help of the Change Event Policy = PPR functionality enabled for the iterator in the bindings; we can see the synchronisation events executed in the log:


You should know that the ADF BC method getPostedAttribute is protected; this is why we need a wrapper method with public access in the EO Impl. The wrapper method allows the originally protected method to be invoked from a class other than the EO Impl:


The key logic resides in the refreshCurrentRow custom method, implemented in the VO Impl class. This method gets the full list of EO attributes and, for every attribute with an index greater than or equal to 0 (there could be accessors with a negative index), retrieves the posted attribute value. The current value is reset back to the posted value - this is how the attribute value is reset to the same value posted to the DB. The sample application is set to use DB pooling, which means it will always return the actual value committed to the DB and ignore any temporarily posted values (each request will get a different DB connection). A rough sketch of both pieces is shown below:
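
A minimal sketch of the wrapper and the refresh method follows. EmployeesImpl is a placeholder entity name and getPostedAttributeValue is my own wrapper name - neither is copied from the sample application:

// In the EO Impl class (e.g. EmployeesImpl) - public wrapper around the protected getPostedAttribute
public Object getPostedAttributeValue(int index) {
    return this.getPostedAttribute(index);
}

// In the VO Impl class - reset every attribute of the current row back to its posted value
// (requires: import oracle.jbo.server.ViewRowImpl;)
public void refreshCurrentRow() {
    ViewRowImpl viewRow = (ViewRowImpl) this.getCurrentRow();
    EmployeesImpl entity = (EmployeesImpl) viewRow.getEntity(0); // single-entity VO assumed

    for (String attrName : entity.getAttributeNames()) {
        int index = entity.getAttributeIndexOf(attrName);
        if (index >= 0) { // accessors can have a negative index - skip those
            entity.setAttribute(index, entity.getPostedAttributeValue(index));
        }
    }
}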


The row refresh method is exposed so that it is accessible from the bindings layer:


As mentioned above, the iterator in the bindings is set to use Change Event Policy = PPR; this automatically synchronises the data displayed on the UI with changes in ADF BC:


Keep in mind that, whether you use the standard row refresh method or the approach described in this post, the transaction will still remain dirty - only the data is reset. To clear the transaction and revert it to a non-dirty state, the user must still use the full Rollback operation.

Download sample application - CustomRowRefreshApp.zip.

First AZORA usergroup meeting October 23

Bobby Durrett's DBA Blog - Fri, 2014-09-05 17:52

Just got the invitation to the first AZORA (Arizona Oracle user group) meeting on October 23.  Here is the link: url

It’s 2 pm at Oracle’s office, 2355 E Camelback Rd Ste 950, Phoenix, AZ.

I’m looking forward to it!

– Bobby

Categories: DBA Blogs

Pythian at Oracle OpenWorld 2014

Pythian Group - Fri, 2014-09-05 14:41

Calling all Pythian fans, clients, and partners! It’s that time of year again with Oracle OpenWorld 2014 fast approaching! Pythian is excited to be participating once again with our rockstar team of experts in all things Oracle including Database 12c, Oracle Applications (EBS, GoldenGate) and engineered systems, MySQL, and more. We are thrilled to have multiple Pythian folks presenting sessions as listed below, with more attending in tow, including our newest friends & colleagues formerly of BlackbirdIT. Keep an eye out for our signature black “Love Your Data” t-shirts.

We’re also excited to again be co-hosting the Annual Bloggers Meetup with our good friends at the Oracle Technology Network. Keep your eyes peeled for a blog post from Alex Gorbachev, Pythian’s CTO, providing details including contest fun & reviews of past years of mayhem and madness.

It’s not Oracle OpenWorld without a conference within a conference. Cue Oaktable World and an action-packed agenda for all the hardcore techies out there. Catch up with Alex and Jeremiah on Tuesday.

Vasu Balla will also be attending the Oracle DIS Partner Council Meeting and the Oracle EBS ATG Customer Advisory Board, helping to share Pythian’s thought leadership.

Attention Pythian Partners & clients, if you’re attending please reach out to us for details on social happenings you won’t want to miss!

Pythian’s dynamic duo of Emilia (Partner Program Mgr/kutrovska@pythian.com/1 613 355 5038) & Vanessa (Dir. of BD/simmons@pythian.com/1 613 897 9444) is orchestrating this year’s efforts. We’ll be live-tweeting up-to-the-minute show action from @pythianpartners, which is the best way to get hold of any of the Pythian team.

See you there! #oow14 #pythianlife

Pythian Sessions at Oracle OpenWorld 2014

Thou Shalt Not Steal: Securing Your Infrastructure in the Age of Snowden
Presented by Paul Vallee (@paulvallee)
Sunday, Sep 28, 9:00 AM – 9:45 AM – Moscone South – 310

Session ID UGF9199: “In June 2013, Edward Snowden triggered the most costly insider security leak in history, forcing organizations to completely rethink how they secure their infrastructure. In this session, the founder of Pythian discusses how he supervises more than 200 database and system administrators as they perform work on some of the world’s most valuable and mission-critical data infrastructures.”

24/7 Availability with Oracle Database Application Continuity
Presented by Jeremiah Wilton (@oradebug) and Marc Fielding (@mfild)
Sunday, Sep 28, 9:00 AM – 9:45 AM – Moscone South – 309

Session ID UGF2563: “Oracle Real Application Clusters (Oracle RAC) enables databases to survive hardware failures that would otherwise cause downtime. Transparent application failover and fast application notification can handle many failure scenarios, but in-flight transactions still require complex application-level state tracking. With application continuity, Java applications can now handle failure scenarios transparently to applications, without data loss. In this session, see actual code and a live demonstration of application continuity during a simulated failure.”

Time to Upgrade to Oracle Database 12c
Presented by Michael Abbey (@MichaelAbbeyCAN)
Sunday, Sep 28, 9:00 AM – 9:45 AM – Moscone South – 307

Session ID UGF2870: “Oracle Database 12c has been out for more than a year now. There is a handful of off-the-shelf features of Oracle Database 12c that can serve the growing requirements of all database installations, regardless of the applications they support and the options for which an installation is licensed. This session zeros in on the baseline enhancements to the 12c release, concentrating on the likes of the Oracle Recovery Manager (Oracle RMAN) feature of Oracle Database; pluggable databases; and a handful of new opportunities to perform many resource-intensive operations by splitting work among multiple separate processes.”

Oracle RMAN in Oracle Database 12c: The Next Generation
Presented by René Antunez (@grantunez)
Sunday, Sep 28, 10:00 AM – 10:45 AM – Moscone South – 309

Session ID UGF1911: “The Oracle Recovery Manager (Oracle RMAN) feature of Oracle Database has evolved since being released, in Oracle8i Database. With the newest version of Oracle Database, 12c , Oracle RMAN has great new features that will enable you to reduce your downtime in case of a disaster. In this session, you will learn about the new features introduced in Oracle Database 12c and how you can take advantage of them from the first day you upgrade to this version.”

Experiences Using SQL Plan Baselines in Production
Presented by Nelson Calero (@ncalerouy)
Sunday, Sep 28, 12:00 PM – 12:45 PM – Moscone South – 250

Session ID UGF7945: “This session shows how to use the Oracle Database SQL Plan Baselines functionality, with examples from real-life usage in production (mostly Oracle Database 11g Release 2) and how to troubleshoot it. SQL Plan Baselines is a feature introduced in Oracle Database 11g to manage SQL execution plans to prevent performance regressions. The presentation explains concepts and presents examples, and you will encounter some edge cases.”

Getting Started with Database as a Service with Oracle Enterprise Manager 12c
Presented by René Antunez (@grantunez)
Sunday, Sep 28, 3:30 PM – 4:15 PM – Moscone South – 307

Session ID UGF1941: “With the newest version of Oracle Database 12c, with Oracle Multitenant, we are moving toward an era of provisioning databases to our clients faster than ever, even leaving out the DBA and enabling the developers and project leads to provision their own databases. This presentation gives you insight into how to get started with database as a service (DBaaS) and the latest version of Oracle Enterprise Manager, 12c, and get the benefit of this upcoming database era.”

Using the Oracle Multitenant Option to Efficiently Manage Development and Test Databases
Presented by Marc Fielding (@mfild) and Alex Gorbachev (@alexgorbachev)
Wednesday, Oct 1, 12:45 PM – 1:30 PM – Moscone South – 102

Session ID CON2560: “The capabilities of Oracle Multitenant for large-scale database as a service (DBaaS) environments are well known, but it provides important benefits for nonproduction environments as well. Developer productivity can be enhanced by providing individual developers with their own separate pluggable development databases, done cost-effectively by sharing the resources of a larger database instance. Data refreshes and data transfers are simple and fast. In this session, learn how to implement development and testing environments with Oracle Multitenant; integrate with snapshot-based storage; and automate the process of provisioning and refreshing environments while still maintaining high availability, performance, and cost-effectiveness.”

Oracle Database In-Memory: How Do I Choose Which Tables to Use It For?
Presented by Christo Kutrovsky (@kutrovsky)
Wednesday, Oct 1, 4:45 PM – 5:30 PM – Moscone South – 305

Session ID CON6558: “Oracle Database In-Memory is the most significant new feature in Oracle Database 12c. It has the ability to make problems disappear with a single switch. It’s as close as possible to the fast=true parameter everyone is looking for. Question is, How do you find which tables need this feature the most? How do you find the tables that would get the best benefit? How do you make sure you don’t make things worse by turning this feature on for the wrong table? This highly practical presentation covers techniques for finding good candidate tables for in-memory, verifying that there won’t be a negative impact, and monitoring the improvements afterward. It also reviews the critical inner workings of Oracle Database In-Memory that can help you better understand where it fits best.”

Customer Panel: Private Cloud Consolidation, Standardization, & Automization
Presented by Jeremiah Wilton (@oradebug)
Thursday, Oct 2, 12:00 PM – 12:45 PM – Moscone South – 301

Session ID CON10038: “Attend this session to hear a panel of distinguished customers discuss how they transformed their IT into agile private clouds by using consolidation, standardization, and automation. Each customer presents an overview of its project and key lessons learned. The panel is moderated by members of Oracle’s private cloud product management team.”

Achieving Zero Downtime During Oracle Application and System Migrations – Co-presented with Oracle
Presented by Gleb Otochkin (@sky_vst) and Luke Davies (@daviesluke)
Thursday, Oct 2, 10:45 AM – 11:30 AM – Moscone West – 3018

Session ID CON7655: “Business applications—whether mobile, on-premises, or in the cloud—are the lifeline of any organization. Don’t let even planned outage events such as application upgrades or database/OS migrations hinder customer sales and acquisitions or adversely affect your employees’ productivity. In this session, hear how organizations today are using Oracle GoldenGate for Oracle Applications such as Oracle E-Business Suite and the PeopleSoft, JD Edwards, Siebel, and Oracle ATG product families in achieving zero-downtime application upgrades and database, hardware, and OS migrations. You will also learn how to use Oracle Data Integration products for real-time, operational reporting without degrading application performance. That’s Oracle AppAdvantage, and you can have it too.”

Categories: DBA Blogs

Nobody Bunts With Two Strikes

Floyd Teter - Fri, 2014-09-05 13:14
Once upon a time, I coached a young women's fast pitch softball team.  Big adventure, as most of my coaching experience is with baseball, and I really enjoyed it.  One game, the opposing team's catcher was hitting with two outs and two strikes.  I shouted out to my team to stop covering the bunt - nobody bunts with two strikes (because a foul ball off a bunt attempt is strike three).  So my team's infield drew back.  Then the catcher bunted, laughing at me all the way as she jogged down to first base with a clean infield hit.  Yeah, I ate some serious humble pie.  And I learned to never bet on the past as an absolute limitation on possibilities for the present and future.

Today I know enterprise application developers who take the attitude that they've never had to worry about the user before, so why start now?  Hold that thought for a moment...

I've really enjoyed the unfolding story at Infor.  Their tag line is "Beautiful business software for your business processes."  Infor has baked the concept of beautiful design into their corporate culture, going so far as to invest in design firm Hook and Loop to drive design throughout the company.  Infor actually considers design a product and corporate differentiator.  Seems to be working for them.  $3B in annual revenue growing at a 40%+ clip is nothing to sneeze at.  And I suspect a bit of that success comes from the emphasis on User Experience design brought to Infor by CEO and Oracle alum Charles Phillips.

Oracle?  Yup.  The UX team at Oracle has proven that user experience design is a differentiating factor in the marketplace.  Simplified UI has played well with potential Fusion/Cloud customers.  So well, in fact, that the E-Business Suite is now adopting Simplified UI.  And the PeopleTools team seems to have enabled the adoption of many Simplified UI design patterns with the 8.54 release.  And that UX team continues to innovate with improved user experiences (which is much more than just UI) utilizing Fusion Middleware.

Oracle, Infor, Workday, SAP...they've all embraced the concept (admittedly, some more than others) that beautiful design sells while not-so-beautiful design is a competitive hindrance.

Now, let's consider that thought again.  "I've never had to worry about the user before, so why start now?"  Yeah, and nobody bunts with two strikes.

Thoughts? Opinions?  Find the comments.



OpenWorld 2014

Jim Marion - Fri, 2014-09-05 10:24

OpenWorld is only a couple of weeks away. As always, this promises to be an outstanding conference. Whether your focus is functional or technical, OpenWorld has a lot of PeopleSoft sessions. The Focus on PeopleSoft OpenWorld document contains a good list of PeopleSoft focused sessions and events. I look forward to seeing you at OpenWorld this year. Here are some of the places and times where you can find me:

Monday

Demo Grounds: PeopleSoft User Experience from 12:00 PM to 2:00 PM

Tuesday

Demo Grounds: PeopleSoft User Experience from 9:45 AM to 11:00 AM


Session ID: CON7568
Session Title: PeopleSoft PeopleTools Developer: Tips and Techniques
Venue / Room: Moscone West - 3004/3006
Date and Time: 9/30/14, 17:00 - 17:45

Wednesday

Demo Grounds: PeopleSoft User Experience from 9:45 AM to 12:00 PM

Thursday

Session ID: CON7537
Session Title: Connecting PeopleSoft HCM, Oracle Taleo, Oracle HCM Cloud, and More
Venue / Room: Palace - Twin Peaks South
Date and Time: 10/2/14, 9:30 - 10:15

Meet the Authors Book Signing at the OpenWorld Bookstore in Moscone South Upper Hall Lobby from 1:00 PM to 1:30 PM.

See you there!!