
DBA Blogs

Partner Webcast – Oracle PaaS: Oracle Mobile Cloud Service Development

Mobile computing has experienced explosive growth in the past decade, and this is just the beginning. At the heart of any organization's digital strategy, mobile is the primary screen and engagement...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Step by Step Jan 2016 PSU Patch Apply on 12c Grid and RDBMS Homes in Linux

Pakistan's First Oracle Blog - Tue, 2016-02-09 20:05

The following step-by-step action plan is for a single-instance database stored on ASM in 12.1.0.2 on Linux (OEL 6, 64-bit, in this case).






Step 1 (ETA 15 min): Update the OPatch utility

For the database home:
$ unzip p6880880_121010_LINUX.zip -d /u01/app/oracle/product/12.1.0/db_1
$ /u01/app/oracle/product/12.1.0/db_1/OPatch/opatch version

For the Grid home:
$ unzip p6880880_121010_LINUX.zip -d /u01/app/oracle/12.1.0.2/grid
$ /u01/app/oracle/12.1.0.2/grid/OPatch/opatch version

Step 2 (ETA 5 min): Create the ocm.rsp file

Note: Press the Enter/Return key without providing any input, then answer Yes.
$ export ORACLE_HOME=/u01/app/oracle/12.1.0.2/grid
$ $ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /stage/ocm.rsp

Step 3 (ETA 5 min): Validation of the Oracle inventory
Before beginning patch application, check the consistency of the inventory information for the GI home and each database home to be patched. Run the following command as the respective Oracle home owner to check consistency.
For database home:
$ /u01/app/oracle/product/12.1.0/db_1/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/product/12.1.0/db_1
For Grid home:
$ /u01/app/oracle/12.1.0.2/grid/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/12.1.0.2/grid
If this command succeeds, it lists the Oracle components that are installed in the home. Save the output so you have the status prior to the patch apply.

Step 4 (ETA 5 min): Stage the patch

$ mkdir /stage/PSUpatch
$ cp /stage/p22191349_121020_Linux-x86-64.zip /stage/PSUpatch

Check that the directory is empty:
$ cd /stage/PSUpatch
$ ls

Unzip the patch as the Grid home owner:
$ unzip p22191349_121020_Linux-x86-64.zip

Step 5 (ETA 10 min): One-off patch conflict detection and resolution
Run the following as the root user:
/u01/app/oracle/12.1.0.2/grid/OPatch/opatchauto apply /stage/PSUpatch/22191349 -analyze -ocmrf /stage/ocm.rsp
It will ask you to roll back identical patches, like this:
Analyzing patch(es) on "/u01/app/oracle/12.1.0.2/grid" ...
Patch "/stage/PSUpatch/22191349/21436941" is already installed on "/u01/app/oracle/12.1.0.2/grid". Please rollback the existing identical patch first.
Patch "/stage/PSUpatch/22191349/21948341" is already installed on "/u01/app/oracle/12.1.0.2/grid". Please rollback the existing identical patch first.
Patch "/stage/PSUpatch/22191349/21948344" is already installed on "/u01/app/oracle/12.1.0.2/grid". Please rollback the existing identical patch first.
Patch "/stage/PSUpatch/22191349/21948354" is already installed on "/u01/app/oracle/12.1.0.2/grid". Please rollback the existing identical patch first.
So first roll back the above four patches as the Grid home owner, from the Grid home:
opatch rollback -id 21948354 -local -oh /u01/app/oracle/12.1.0.2/grid (repeat for all four patches, or loop as sketched below)
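
If you prefer not to issue the rollback four times by hand, a small loop does the same thing (a sketch; run it as the Grid home owner and adjust the patch IDs to whatever the analyze step reports):

for p in 21436941 21948341 21948344 21948354
do
  /u01/app/oracle/12.1.0.2/grid/OPatch/opatch rollback -id $p -local -oh /u01/app/oracle/12.1.0.2/grid
done
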
Note: In some cases, oddly, I had to shut down the HAS services as the root user before the patch rollback, using:
/u01/app/oracle/12.1.0.2/grid/bin/crsctl stop has -f
After this, run the analyze again:
/u01/app/oracle/12.1.0.2/grid/OPatch/opatchauto apply /stage/PSUpatch/22191349 -analyze -ocmrf /stage/ocm.rsp
If the analyze command fails, run this as the root user:
$ORA_GI_HOME/crs/install/roothas.pl -postpatch
It will also start the HAS services again.
Then run the analyze command again as given above. It will show something like:
Analyzing patch(es) on "/u01/app/oracle/12.1.0.2/grid" ...
Patch "/stage/PSUpatch/22191349/21436941" successfully analyzed on "/u01/app/oracle/12.1.0.2/grid" for apply.
Patch "/stage/PSUpatch/22191349/21948341" successfully analyzed on "/u01/app/oracle/12.1.0.2/grid" for apply.
Patch "/stage/PSUpatch/22191349/21948344" successfully analyzed on "/u01/app/oracle/12.1.0.2/grid" for apply.
Patch "/stage/PSUpatch/22191349/21948354" successfully analyzed on "/u01/app/oracle/12.1.0.2/grid" for apply.
Now you are good to apply the patch. Proceed to the next step.

Step 6 (ETA 60 min): Apply the patch

Note: This should apply the patch to both the GI and RDBMS homes, but it is unreliable in that respect, so after it completes we need to check opatch lsinventory to make sure that it also applied the patches in the RDBMS home (a quick grep-based check is sketched after step 8).

As the root user, execute the following command:
# /u01/app/oracle/12.1.0.2/grid/OPatch/opatchauto apply /stage/PSUpatch/22191349 -ocmrf /stage/ocm.rsp
If it doesn't apply in the RDBMS home, then run:
/u01/app/oracle/product/12.1.0/db_1/OPatch/opatchauto apply /stage/PSUpatch/22191349 -oh /u01/app/oracle/product/12.1.0/db_1 -ocmrf /stage/ocm.rsp
Make sure the above applies both the OCW and PSU patches. You can verify that from opatch lsinventory. If only the OCW patch is present in the output and no PSU (which is likely the case), then issue the following from the Oracle home, as the Oracle database owner, after shutting down the database:
/u01/app/oracle/product/12.1.0/db_1/OPatch/opatch apply -oh /u01/app/oracle/product/12.1.0/db_1 -local /stage/PSUpatch/22191349/21948354

Step 7 (ETA 60 min): Loading modified SQL files into the database

% sqlplus /nolog
SQL> connect / as sysdba
SQL> startup
SQL> quit
% cd $ORACLE_HOME/OPatch
% ./datapatch -verbose

Step 8 (ETA 5 min): Check for the list of patches applied to the database

SQL> select action_time, patch_id, patch_uid, version, status, bundle_series, description from dba_registry_sqlpatch;
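
As a follow-up to the verification mentioned in step 6, here is a quick way to confirm that the PSU sub-patches landed in both homes (a sketch; grep for the sub-patch numbers reported by the analyze step):

$ /u01/app/oracle/12.1.0.2/grid/OPatch/opatch lsinventory -oh /u01/app/oracle/12.1.0.2/grid | grep -E "21948344|21948354"
$ /u01/app/oracle/product/12.1.0/db_1/OPatch/opatch lsinventory -oh /u01/app/oracle/product/12.1.0/db_1 | grep -E "21948344|21948354"
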
Categories: DBA Blogs

Resolving Update-Update Conflict in Peer-to-Peer Replication

Pythian Group - Mon, 2016-02-08 15:20

Recently I received a hand-off ticket about a replication issue. The system had been configured with Peer-to-Peer replication.

One of the subscribers was throwing an error that read: A conflict of type 'Update-Update' was detected at peer 4 between peer 100 (incoming), transaction id 0x000000000011a1c3 and peer 100 (on disk). While I was working on this issue and trying to resolve it, I noticed that it wasn't showing any records in the msrepl_errors table or the conflict_dbo_table.
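
For anyone wanting to run the same check, the queries look roughly like this (a sketch; the server names, publication database and conflict table name are placeholders, not the actual environment):

sqlcmd -S <distributor> -d distribution -Q "select time, error_code, error_text from dbo.MSrepl_errors order by time desc"
sqlcmd -S <subscriber> -d <publication_db> -Q "select top 10 * from dbo.conflict_dbo_<table>"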

Here again, the Error Logs help, as they have the complete details logged, which helps us identify the table name and the exact error with the record. If that hadn't been the case, I would have followed the replication troubleshooting method described in KB 3066750 to fix the issue.

At this time, with the information we had in hand, we reached out to the customer and resolved the issue by fixing it manually. I would like to mention that there are always two ways conflicts are handled in P2P replication:

1) Manually fix the conflict/data issue
2) Let the winner node have precedence over the data/conflict

In P2P replication, at the time of configuration we have an option to choose which node will have precedence and be declared the winner. This is decided by way of the originator_id: the highest originator_id wins. We have to decide this carefully because, once the setup is done and the originator_id is allotted, it cannot be altered later.

Here are a few reference articles that will help you understand this topic better:

https://msdn.microsoft.com/en-IN/library/bb934199%28v=sql.105%29.aspx
https://msdn.microsoft.com/en-IN/library/ms151191%28v=sql.105%29.aspx
http://blogs.msdn.com/b/change_sql_virtual_ip_from_dhcp_to_static_ip/archive/2015/11/04/conflicts-in-peer-to-peer-replication.aspx
http://blogs.msdn.com/b/repltalk/archive/2010/02/07/repltalk-start-here.aspx

 

Categories: DBA Blogs

New ORAchk 12.1.0.2.6 beta

Pythian Group - Mon, 2016-02-08 15:17

 

Oracle recently released the new 12.1.0.2.6 beta version of the ORAchk utility. If you are an Oracle DBA and still not familiar with the utility, I advise you to try it out. In short, the utility is a proactive tool that scans your system for known issues and provides an excellent report in HTML format. In addition, you get the Collection Manager to manage reports for multiple databases, a check for upgrade readiness, and other features. I strongly recommend trying the utility and using it regularly.
You can download the new version of ORAchk, the Health Check Catalog, and all related support files and guides from Oracle Support (Document 1268927.2). Simply unzip ORAchk to a directory and run orachk, preferably as root, since that allows it to execute all system-wide checks. Here is an example:

[oracle@bigdatalite u01]$ mkdir orachk
[oracle@bigdatalite u01]$ cd orachk/
[oracle@bigdatalite orachk]$ unzip ../distr/orachk121026.zip
Archive: ../distr/orachk121026.zip
inflating: CollectionManager_App.sql
inflating: sample_user_defined_checks.xml
creating: .cgrep/
................
[oracle@bigdatalite orachk]$ su -
[root@bigdatalite ~]# cd /u01/orachk/
[root@bigdatalite orachk]# ./orachk

At the end you get an HTML report and a zip file with the results of all executed checks:

Detailed report (html) – /u01/orachk/orachk_bigdatalite_orcl_012816_151905/orachk_bigdatalite_orcl_012816_151905.html

UPLOAD(if required) – /u01/orachk/orachk_bigdatalite_orcl_012816_151905.zip

The report is really good looking, split into different sections, and allows you to hide or show checks based on their status.

I compared the new 12.1.0.2.6 version against 12.1.0.2.5. The execution time for the new version was 3 minutes versus 8 minutes for the old one. The new report format is also far more usable; you don't need to jump back and forth, since the result for every check expands in place.
If you haven't used the utility so far, I highly recommend you download it and try it out.

Categories: DBA Blogs

Log Buffer #459: A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2016-02-08 14:40

This Log Buffer Edition arranges a few tips and tricks from the blogs of Oracle, SQL Server and MySQL.

Oracle:

Oracle® Solaris innovation is due in part to the UNIX® standard (1), the test suites (2) and the certification (3). By conforming to the standard, using the test suites and driving to certification, Oracle Solaris software engineers can rely on stable interfaces and an assurance that any regressions will be found quickly, given more than 50,000 test cases.

Building on the program established last year to provide evaluation copies of popular FOSS components to Solaris users, the Solaris team has announced the immediate availability of additional and newer software, ahead of official Solaris releases.

Tracing in Oracle Reports 12c.

Issues with Oracle Direct NFS.

An interesting observation came up on the Oracle-L list server a few days ago that demonstrated how clever the Oracle software is at minimising run-time work, and how easy it is to think you know what an execution plan means when you haven’t actually thought through the details – and the details might make a difference to performance.

SQL Server:

Manipulating Filetable Files Programmatically

Auto-suggesting foreign keys and data model archaeology

Create/write to an Excel 2007/2010 spreadsheet from an SSIS package.

Tabular vs Multidimensional models for SQL Server Analysis Services.

The PoSh DBA – Towards the Re-usable PowerShell Script.

MySQL:

MyRocks vs InnoDB with Linkbench over 7 days.

MySQL has been able to harness the potential of more powerful (CPU) and larger (RAM, disk space) hardware.

Setup a MongoDB replica/sharding set in seconds.

MySQL 5.7 makes secure connections easier with streamlined key generation for both MySQL Community and MySQL Enterprise, improves security by expanding support for TLSv1.1 and TLSv1.2, and helps administrators assess whether clients are connecting securely or not.

While EXPLAIN shows the selected query plan for a query, optimizer trace will show you WHY the particular plan was selected. From the trace you will be able to see what alternative plans were considered, the estimated costs of different plans, and what decisions were made during query optimization.

Categories: DBA Blogs

Oracle Big Data 2016 Implementation Boot Camp (19 Feb to 04 March)

February 2016 ...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Trace Files -- 11b : Using DBMS_SQLDIAG to trace the Optimization of an SQL Statement

Hemant K Chitale - Sun, 2016-02-07 07:46
My previous blog post covered using ALTER SYSTEM/SESSION to set tracing for a specific SQL_ID that has been determined in advance; the SQL may be executed in the future, after the ALTER SYSTEM/SESSION.

Here is a method for an SQL that has already been executed.

SQL> select count(*) from all_objects_many_list
2 where created > sysdate-365;

COUNT(*)
----------
25548

SQL> begin
2 dbms_sqldiag.dump_trace(p_sql_id=>'b086mzzp82x7w',
3 p_component=>'Optimizer',
4 p_file_id=>'OPT_TRACE_b086mzzp82x7w');
5 end;
6 /

PL/SQL procedure successfully completed.

SQL>
SQL> select value from v$diag_info where name = 'Default Trace File';

VALUE
------------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_2987_OPT_TRACE_b086mzzp82x7w.trc

SQL>


Let's review the trace file.

Registered qb: SEL$1 0x99b9000 (PARSER)
---------------------
QUERY BLOCK SIGNATURE
---------------------
signature (): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=4 objn=35014 hint_alias="ALL_OBJECTS_MANY_LIST"@"SEL$1"

SPM: statement not found in SMB
SPM: statement not a candidate for auto-capture

**************************
Automatic degree of parallelism (ADOP)
**************************
Automatic degree of parallelism is disabled: Parameter.

PM: Considering predicate move-around in query block SEL$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
OPTIMIZER INFORMATION

******************************************
----- Current SQL Statement for this session (sql_id=0b1t991khf449) -----
/* SQL Analyze(148,0) */ select count(*) from all_objects_many_list
where created > sysdate-365
----- PL/SQL Stack -----
----- PL/SQL Call Stack -----
object line object
handle number name
0x6be0d530 145 package body SYS.DBMS_SQLTUNE_INTERNAL
0x6be0d530 12098 package body SYS.DBMS_SQLTUNE_INTERNAL
0x6be6b738 1229 package body SYS.DBMS_SQLDIAG
0x6becaea8 2 anonymous block
*******************************************
Legend
The following abbreviations are used by optimizer trace.
CBQT - cost-based query transformation
JPPD - join predicate push-down
OJPPD - old-style (non-cost-based) JPPD
FPD - filter push-down
PM - predicate move-around
CVM - complex view merging
SPJ - select-project-join
SJC - set join conversion
SU - subquery unnesting
OBYE - order by elimination
OST - old style star transformation
ST - new (cbqt) star transformation
CNT - count(col) to count(*) transformation
JE - Join Elimination
JF - join factorization
CBY - connect by
SLP - select list pruning
DP - distinct placement
qb - query block
LB - leaf blocks
DK - distinct keys
LB/K - average number of leaf blocks per key
DB/K - average number of data blocks per key
CLUF - clustering factor
NDV - number of distinct values
Resp - response cost
Card - cardinality
Resc - resource cost
NL - nested loops (join)
SM - sort merge (join)
HA - hash (join)
CPUSPEED - CPU Speed
IOTFRSPEED - I/O transfer speed
IOSEEKTIM - I/O seek time
SREADTIM - average single block read time
MREADTIM - average multiblock read time
MBRC - average multiblock read count
MAXTHR - maximum I/O system throughput
SLAVETHR - average slave I/O throughput
dmeth - distribution method
1: no partitioning required
2: value partitioned
4: right is random (round-robin)
128: left is random (round-robin)
8: broadcast right and partition left
16: broadcast left and partition right
32: partition left using partitioning of right
64: partition right using partitioning of left
256: run the join in serial
0: invalid distribution method
sel - selectivity
ptn - partition
***************************************
PARAMETERS USED BY THE OPTIMIZER
********************************
*************************************
PARAMETERS WITH ALTERED VALUES
******************************
Compilation Environment Dump
is_recur_flags = 8
Bug Fix Control Environment


*************************************
PARAMETERS WITH DEFAULT VALUES
******************************
Compilation Environment Dump
optimizer_mode_hinted = false
optimizer_features_hinted = 0.0.0
parallel_execution_enabled = true
parallel_query_forced_dop = 0
parallel_dml_forced_dop = 0
parallel_ddl_forced_degree = 0
parallel_ddl_forced_instances = 0
_query_rewrite_fudge = 90
optimizer_features_enable = 11.2.0.4
_optimizer_search_limit = 5
.... continued to a long list of parameters ........
....................................................
Bug Fix Control Environment
fix 3834770 = 1
fix 3746511 = enabled
fix 4519016 = enabled
.... continued to a long list of bug fixes ........
...................................................
***************************************
PARAMETERS IN OPT_PARAM HINT
****************************
***************************************
Column Usage Monitoring is ON: tracking level = 1
***************************************

Considering Query Transformations on query block SEL$1 (#0)
**************************
Query transformations (QT)
**************************
JF: Checking validity of join factorization for query block SEL$1 (#0)
JF: Bypassed: not a UNION or UNION-ALL query block.
ST: not valid since star transformation parameter is FALSE
TE: Checking validity of table expansion for query block SEL$1 (#0)
TE: Bypassed: No partitioned table in query block.
CBQT bypassed for query block SEL$1 (#0): no complex view, sub-queries or UNION (ALL) queries.
CBQT: Validity checks failed for 0b1t991khf449.
CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
*************************
Common Subexpression elimination (CSE)
*************************
CSE: CSE not performed on query block SEL$1 (#0).
OBYE: Considering Order-by Elimination from view SEL$1 (#0)
***************************
Order-by elimination (OBYE)
***************************
OBYE: OBYE bypassed: no order by to eliminate.
CVM: Considering view merge in query block SEL$1 (#0)
OJE: Begin: find best directive for query block SEL$1 (#0)
OJE: End: finding best directive for query block SEL$1 (#0)
CNT: Considering count(col) to count(*) on query block SEL$1 (#0)
*************************
Count(col) to Count(*) (CNT)
*************************
CNT: COUNT() to COUNT(*) not done.
query block SEL$1 (#0) unchanged
Considering Query Transformations on query block SEL$1 (#0)
**************************
Query transformations (QT)
**************************
JF: Checking validity of join factorization for query block SEL$1 (#0)
JF: Bypassed: not a UNION or UNION-ALL query block.
ST: not valid since star transformation parameter is FALSE
TE: Checking validity of table expansion for query block SEL$1 (#0)
TE: Bypassed: No partitioned table in query block.
CBQT bypassed for query block SEL$1 (#0): no complex view, sub-queries or UNION (ALL) queries.
CBQT: Validity checks failed for 0b1t991khf449.
CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
*************************
Common Subexpression elimination (CSE)
*************************
CSE: CSE not performed on query block SEL$1 (#0).
SU: Considering subquery unnesting in query block SEL$1 (#0)
********************
Subquery Unnest (SU)
********************
SJC: Considering set-join conversion in query block SEL$1 (#0)
*************************
Set-Join Conversion (SJC)
*************************
SJC: not performed
PM: Considering predicate move-around in query block SEL$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
PM: PM bypassed: Outer query contains no views.
PM: PM bypassed: Outer query contains no views.
query block SEL$1 (#0) unchanged
FPD: Considering simple filter push in query block SEL$1 (#0)
"ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
try to generate transitive predicate from check constraints for query block SEL$1 (#0)
finally: "ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365

apadrv-start sqlid=398332482954924169
:
call(in-use=2008, alloc=16344), compile(in-use=56240, alloc=59416), execution(in-use=2456, alloc=4032)

*******************************************
Peeked values of the binds in SQL statement
*******************************************

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "COUNT(*)" FROM "HEMANT"."ALL_OBJECTS_MANY_LIST" "ALL_OBJECTS_MANY_LIST" WHERE "ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
kkoqbc: optimizing query block SEL$1 (#0)

:
call(in-use=2056, alloc=16344), compile(in-use=57320, alloc=59416), execution(in-use=2456, alloc=4032)

kkoqbc-subheap (create addr=0x7f4409c4fb18)
****************
QUERY BLOCK TEXT
****************
select count(*) from all_objects_many_list
where created > sysdate-365
---------------------
QUERY BLOCK SIGNATURE
---------------------
signature (optimizer): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=0 objn=35014 hint_alias="ALL_OBJECTS_MANY_LIST"@"SEL$1"

-----------------------------
SYSTEM STATISTICS INFORMATION
-----------------------------
Using NOWORKLOAD Stats
CPUSPEEDNW: 937 millions instructions/sec (default is 100)
IOTFRSPEED: 4096 bytes per millisecond (default is 4096)
IOSEEKTIM: 10 milliseconds (default is 10)
MBRC: NO VALUE blocks (default is 8)


And here is the actual information about how the Costing is done and Execution Plan determined.

***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
Table: ALL_OBJECTS_MANY_LIST Alias: ALL_OBJECTS_MANY_LIST
#Rows: 7197952 #Blks: 98279 AvgRowLen: 93.00 ChainCnt: 0.00
Index Stats::
Index: ALL_OBJ_M_L_CRTD_NDX Col#: 7
LVLS: 2 #LB: 19093 #DK: 1232 LB/K: 15.00 DB/K: 351.00 CLUF: 432893.00
Access path analysis for ALL_OBJECTS_MANY_LIST
***************************************
SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for ALL_OBJECTS_MANY_LIST[ALL_OBJECTS_MANY_LIST]
Column (#7): CREATED(
AvgLen: 8 NDV: 1232 Nulls: 0 Density: 0.000812 Min: 2455803 Max: 2457343
Table: ALL_OBJECTS_MANY_LIST Alias: ALL_OBJECTS_MANY_LIST
Card: Original: 7197952.000000 Rounded: 1313133 Computed: 1313133.42 Non Adjusted: 1313133.42
Access Path: TableScan
Cost: 27174.11 Resp: 27174.11 Degree: 0
Cost_io: 26619.00 Cost_cpu: 6242311042
Resp_io: 26619.00 Resp_cpu: 6242311042
Access Path: index (index (FFS))
Index: ALL_OBJ_M_L_CRTD_NDX
resc_io: 5173.00 resc_cpu: 4598699894
ix_sel: 0.000000 ix_sel_with_filters: 1.000000
Access Path: index (FFS)
Cost: 5581.95 Resp: 5581.95 Degree: 1
Cost_io: 5173.00 Cost_cpu: 4598699894
Resp_io: 5173.00 Resp_cpu: 4598699894
Access Path: index (IndexOnly)
Index: ALL_OBJ_M_L_CRTD_NDX
resc_io: 3486.00 resc_cpu: 287452140
ix_sel: 0.182432 ix_sel_with_filters: 0.182432
Cost: 3511.56 Resp: 3511.56 Degree: 1
Best:: AccessPath: IndexRange
Index: ALL_OBJ_M_L_CRTD_NDX
Cost: 3511.56 Degree: 1 Resp: 3511.56 Card: 1313133.42 Bytes: 0

***************************************


OPTIMIZER STATISTICS AND COMPUTATIONS
***************************************
GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]: ALL_OBJECTS_MANY_LIST[ALL_OBJECTS_MANY_LIST]#0
***********************
Best so far: Table#: 0 cost: 3511.5623 card: 1313133.4203 bytes: 10505064
***********************
(newjo-stop-1) k:0, spcnt:0, perm:1, maxperm:2000

*********************************
Number of join permutations tried: 1
*********************************
Enumerating distribution method (advanced)

Trying or-Expansion on query block SEL$1 (#0)
Transfer Optimizer annotations for query block SEL$1 (#0)
id=0 frofkks[i] (index start key) predicate="ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
Final cost for query block SEL$1 (#0) - All Rows Plan:
Best join order: 1
Cost: 3511.5623 Degree: 1 Card: 1313133.0000 Bytes: 10505064
Resc: 3511.5623 Resc_io: 3486.0000 Resc_cpu: 287452140
Resp: 3511.5623 Resp_io: 3486.0000 Resc_cpu: 287452140
kkoqbc-subheap (delete addr=0x7f4409c4fb18, in-use=26384, alloc=32840)
kkoqbc-end:
:
call(in-use=8664, alloc=49288), compile(in-use=59856, alloc=63560), execution(in-use=2456, alloc=4032)

kkoqbc: finish optimizing query block SEL$1 (#0)
apadrv-end
:
call(in-use=8664, alloc=49288), compile(in-use=60768, alloc=63560), execution(in-use=2456, alloc=4032)


Starting SQL statement dump

user_id=87 user_name=HEMANT module=SQL*Plus action=
sql_id=0b1t991khf449 plan_hash_value=1689651126 problem_type=3
----- Current SQL Statement for this session (sql_id=0b1t991khf449) -----
/* SQL Analyze(148,0) */ select count(*) from all_objects_many_list
where created > sysdate-365
----- PL/SQL Stack -----
----- PL/SQL Call Stack -----
object line object
handle number name
0x6be0d530 145 package body SYS.DBMS_SQLTUNE_INTERNAL
0x6be0d530 12098 package body SYS.DBMS_SQLTUNE_INTERNAL
0x6be6b738 1229 package body SYS.DBMS_SQLDIAG
0x6becaea8 2 anonymous block
sql_text_length=96
sql=/* SQL Analyze(148,0) */ select count(*) from all_objects_many_list
where created > sysdate-365
----- Explain Plan Dump -----
----- Plan Table -----

============
Plan Table
============
-------------------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
-------------------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 3512 | |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
| 2 | INDEX RANGE SCAN | ALL_OBJ_M_L_CRTD_NDX| 1282K | 10M | 3512 | 00:00:43 |
-------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - access("CREATED">SYSDATE@!-365)

Content of other_xml column
===========================
db_version : 11.2.0.4
parse_schema : HEMANT
plan_hash : 1689651126
plan_hash_2 : 1742296710
Outline Data:
/*+
BEGIN_OUTLINE_DATA

*** 2016-02-07 21:29:15.838
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
DB_VERSION('11.2.0.4')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
INDEX(@"SEL$1" "ALL_OBJECTS_MANY_LIST"@"SEL$1" ("ALL_OBJECTS_MANY_LIST"."CREATED"))
END_OUTLINE_DATA
*/

Optimizer state dump:
Compilation Environment Dump
optimizer_mode_hinted = false
optimizer_features_hinted = 0.0.0
parallel_execution_enabled = true
parallel_query_forced_dop = 0
parallel_dml_forced_dop = 0
parallel_ddl_forced_degree = 0
parallel_ddl_forced_instances = 0
_query_rewrite_fudge = 90
optimizer_features_enable = 11.2.0.4
_optimizer_search_limit = 5
...... long list of optimizer parameters ...........
.....................................................
Bug Fix Control Environment
fix 3834770 = 1
fix 3746511 = enabled
fix 4519016 = enabled
...... long list of Bug Fixes .......................
.....................................................

Query Block Registry:
SEL$1 0x99b9000 (PARSER) [FINAL]

:
call(in-use=11728, alloc=49288), compile(in-use=90704, alloc=155568), execution(in-use=6408, alloc=8088)

End of Optimizer State Dump
Dumping Hints
=============
====================== END SQL Statement Dump ======================


So, this is also a supported method. The DBMS_SQLDIAG package is available from 11.2.
Unfortunately, however, DUMP_TRACE is not documented! (See the 11.2 documentation on DBMS_SQLDIAG.)

Note: If the SQL statement and/or plan have already been aged out of or purged from the Shared Pool, a DUMP_TRACE would, obviously, not be able to print anything.
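
A quick way to confirm the statement is still in the Shared Pool before calling DUMP_TRACE (a simple check scripted with sqlplus; not part of the package itself):

sqlplus -s / as sysdba <<'EOF'
select sql_id, child_number, plan_hash_value
from   v$sql
where  sql_id = 'b086mzzp82x7w';
EOF
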
.
.
.

Categories: DBA Blogs

SQL On The Edge #8 – SQL Server Mobile Report Publisher

Pythian Group - Fri, 2016-02-05 14:07

One of the cool things about SQL Server is that it comes bundled with all the Business Intelligence services with the core database engine license. Reporting Services (which includes the Mobile Report Publisher), Analysis Services, and Integration Services are all integrated, and are ready to get going as soon as you install SQL Server. This has made it not only cost-efficient for many organizations to deploy BI, but it has also contributed to a wide adoption among the SQL Server customer base.

 

What is the Mobile Report Publisher?

Currently in preview, the Mobile Report Publisher is a new report and dashboard editor that publishes reports to Reporting Services, and it’s part of the bigger road map that Microsoft has for their Business Intelligence On-Premises story. We all know that in the cloud, Power BI has been getting a large amount of investment, but with on-premises there was a big gap that was getting wider and wider, until now.

With this upcoming SQL 2016 release, the Microsoft team is focusing on bringing Reporting Services into the future as a one-stop integrated solution for BI deployment so that cloud BI or a competitor’s product (*cough*Tableau*cough) are not the only modern alternatives.

This Reporting Services refactor is the biggest change made to the product since SQL Server 2005 was released over 10 years ago. Leveraging the best parts of the Datazen acquisition, the Microsoft team is looking to provide a cohesive BI story that integrates web-based and mobile reports, Report Builder reports, and Power View style of modern visuals.


How is it different?

You're probably used to working with SSRS Report Builder and are wondering what the idea is with the Mobile Report Publisher. The demo below will make it very clear, but let's just say that reports have now been split into two types:

1. Paginated reports: this is the ‘legacy’ style report that is built by Report Builder. It looks more flat, has the same controls as before, and is fully compatible with all the existing reports you have already deployed.

2. Mobile reports: Even though the name says ‘mobile’ these reports work just as well on desktop, tablet and mobile. If you’re familiar with web design, then the appropriate term would be that these reports are “responsive”. They can be done once, and will display nicely across devices. This is the new experience that the Mobile Report Publisher is targeting.

 

Where do we go from here?

Right now the current version is SQL Server 2016 CTP 3.2. We still have several versions that will be released before SQL 2016 goes RTM later this year. Currently, a lot of the planned functionality is not entirely there in the main SSRS interface, and you will be asked to switch to the "classic" view often.

The Mobile Report Publisher experience is also very much targeted towards creating a visualization, and there's no data modeling to be found. You pretty much need to have your datasets pre-made and in the format that the tool expects, or you won't get very far. Hopefully, at some point the team will add modeling capabilities like the robust experience we already have in the Power BI Desktop tool.

If you want to practice and get your feet wet with this release you can do it now, and for free by visiting SQL Server 2016 CTP 3.2. You can also preview the Mobile Report Publisher. Like I said before, this is a big departure so most people will want to train up to be ready when the new versions go live.

 

Demo

In the demo below I’m going to walk through the new SSRS interface preview in CTP 3.2, and then walk through creating a quick dashboard for both desktop and mobile consumption using the publisher. Let’s check it out!

Discover more about our expertise with SQL Server in the Cloud.

Categories: DBA Blogs

General troubleshooting lessons from recent Delphix issue

Bobby Durrett's DBA Blog - Fri, 2016-02-05 11:25

Delphix support helped me resolve an issue yesterday, and the experience gave me the idea of writing this post about several general computer-issue troubleshooting tips that I have learned over the years. Never mind that I ignored these lessons during this particular problem; this is more of a "do as I say" and not a "do as I do" story. Actually, sometimes I remember these lessons. I didn't do so well this week, but the several mistakes that I made resolving this recent Delphix issue motivate me to write this post and, if nothing else, remind myself of the lessons I've learned in the past about how to resolve a computer problem.

Don’t panic!

I’m reminded of the friendly advice on the cover of the Hitchhiker’s Guide to the Galaxy: “Don’t panic!”. So, yesterday it was 4:30 pm. I had rebooted the Delphix virtual machine and then in a panic had the Unix team reboot the HP Unix target server. But, still I could not bring up any of the Delphix VDBs.  We had people coming over to our house for dinner that night and I was starting to worry that I would be working on this issue all night. I ended up getting out of the office by 5:30 pm and had a great dinner with friends. What was I so stressed about? Even the times that I have been up all night it didn’t kill me. Usually the all night issues lead to me learning things anyway.

Trust support

The primary mistake that I made was to get my mind fixed on a solution to the problem instead of working with Delphix support and trusting them to guide us to the solution. We had a number of system issues due to a recent network issue and I got my mind set on the idea that my Delphix issue was due to some network hangup. I feel sorry for our network team because it seems like the first thought people have any time there is some issue is that it is a “network issue”. I should know better. How many times have I been working on issues when everyone says it is a “database issue” and I’m annoyed because I know that the issue is somewhere else and they are not believing me when I point to things outside the database. Anyway, I opened a case with Delphix on Monday when I couldn’t get a VDB to come down. It just hung for 5 minutes until it gave me an error. I assumed that it was a network hangup and got fixated on rebooting the Delphix VM. Ack! Ultimately, I ended up working with two helpful and capable people in Delphix support and they resolved the issue which was not what I thought at all. There are times to disagree with support and push for your own solution but I did this too early in this case and I was dead wrong.

Keep it simple

I’ve heard people refer to Occam’s razor which I translate in computer terms to mean “look for simple problems first”. Instead of fixing my mind on some vague network issue where the hardware is not working properly, how about assuming that all the hardware and software is working normally and then thinking about what problems might cause my symptoms? I can’t remember how many times this has bit me. There is almost always some simple explanation.  In this case I had made a change to a Unix shell script that runs when someone logs in as the oracle user. This caused Delphix to no longer be able to do anything with the VDBs on that server. Oops! It was a simple blunder, no big deal. But I’m kicking myself for not first thinking about a simple problem like a script change instead of focusing on something more exotic.
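
The usual fix for this class of problem, assuming the breakage came from the login script doing interactive work (banners, prompts, terminal settings) during the non-interactive sessions that Delphix opens as the oracle user, is to guard that code so it only runs for interactive logins. A rough sketch of the guard, not the exact script involved:

# oracle user's login script (.profile / .bash_profile)
case "$-" in
  *i*)
    # interactive logins only: banners, menus, prompts
    echo "Connected as $(whoami) on $(hostname)"
    ;;
esac
# anything that automation (Delphix, scp, monitoring) relies on stays outside the guard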

What changed?

I found myself saying the same dumb thing that I’ve heard people say to me all the time: nothing changed. In this case I said something like “this has worked fine for 3 years now and nothing has changed”. The long-suffering and patient Delphix support folks never called me on this, but I was dead wrong. Something had to have changed for something that was working to stop working. I should have spent time looking at the various parts of our Delphix setup to see if anything had changed before I contacted support. All I had to do was see the timestamp on our login script and I would see that something had recently changed.

Understand how it all works

I think my Delphix skills are a little rusty. We just started a new expansion project to add new database sources to Delphix. It has been a couple of years since I’ve done any heavy configuration and trouble shooting. But I used to have a better feel for how all the pieces fit together. I should have thought about what must have gone on behind the scenes when I asked Delphix to stop a VDB and it hung for 5 minutes. What steps was it doing? Where in the process could the breakdown be occurring? Delphix support did follow this type of reasoning to find the issue. They manually tried some of the steps that the Delphix software would do automatically until they found the problem. If I stopped to think about the pieces of the process I could have done the same. This has been a powerful approach to solving problems all through my career. I think about resolving PeopleSoft issues. It just helps to understand how things work. For example, if you understand how the PeopleSoft login process works you can debug login issues by checking each step of the process for possible issues. The same is true for Oracle logins from clients. In general, the more you understand all the pieces of a computer system, down to the transistors on the chips, the better chance you have of visualizing where the problem might be.

Well, I can’t think of any other pearls of wisdom from this experience but I thought I would write these down while it was on my mind. Plus, I go on call Monday morning so I need to keep these in mind as I resolve any upcoming issues. Thanks to Delphix support for their good work on this issue.

Categories: DBA Blogs

Network multicast support on Azure

Pythian Group - Thu, 2016-02-04 15:07

 

Today I would like to talk about multicast support on Azure and how to make it work. While it's not the most commonly required feature in a virtual environment, some applications do require multicast support on the network. The perfect example is Oracle RAC, where multicast is required starting from version 11.2.0.2; in Oracle RAC, multicast is used for the highly available IP (HAIP) on the interconnect. If you're thinking about building a training environment with Oracle RAC on Azure, you will need multicast support.

How can we check whether it works? First, you can check if it's supported by your kernel using the netstat utility.

[root@oradb5 ~]# netstat -g | grep mcast
lo 1 all-systems.mcast.net
eth0 1 all-systems.mcast.net
eth1 1 all-systems.mcast.net

You can see that all my interfaces are ready for multicast support. That's fine, but how can we check whether it actually works on our network? We can use either the iperf utility or a perl script created by Oracle. You can download the script from Oracle Support, if you have an account, from the note "How to Validate Network and Name Resolution Setup for the Clusterware and RAC (Doc ID 1054902.1)".
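
If you prefer iperf over the perl script, a multicast test looks roughly like this (a sketch assuming iperf version 2, which supports multicast; start the server side on one node and point the client at the same group address from the other node):

# receiving node: join the multicast group as a UDP server
iperf -s -u -B 230.0.1.0 -i 1

# sending node: push UDP traffic to the group with a small TTL
iperf -c 230.0.1.0 -u -T 3 -t 5 -i 1
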
Here's what I had: two A3-size Azure VMs running Oracle Linux 6, each with two network interfaces. The VM hostnames were oradb5 and oradb6. You can check out my blog on how to create an Azure VM with two network interfaces here. The second interface, eth1, is the one where we are going to enable multicast.

I ran the mcasttest.pl script and saw this:

[oracle@oradb5 mcasttest]$ ./mcasttest.pl -n oradb5,oradb6 -i eth1
########### Setup for node oradb5 ##########
Checking node access 'oradb5'
Checking node login 'oradb5'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb5'
Distributing mcast2 binary to node 'oradb5'
########### Setup for node oradb6 ##########
Checking node access 'oradb6'
Checking node login 'oradb6'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb6'
Distributing mcast2 binary to node 'oradb6'
########### testing Multicast on all nodes ##########

Test for Multicast address 230.0.1.0

Nov 24 15:05:23 | Multicast Failed for eth1 using address 230.0.1.0:42000

Test for Multicast address 224.0.0.251

Nov 24 15:05:53 | Multicast Failed for eth1 using address 224.0.0.251:42001
[oracle@oradb5 mcasttest]$

The output clearly tells us that we don't have multicast support for either the 230.0.1.0 or the 224.0.0.251 multicast address.

What does the Virtual Network FAQ for Azure tell us about it?
Here is the answer:

Do VNets support multicast or broadcast?
No. We do not support multicast or broadcast.
What protocols can I use within VNets?
You can use standard IP-based protocols within VNets. However, multicast, broadcast, IP-in-IP encapsulated packets and Generic Routing Encapsulation (GRE) packets are blocked within VNets. Standard protocols that work include:
* TCP
* UDP
* ICMP

So, we need a workaround. Luckily, we have one. Some time ago, while discussing RAC on Amazon AWS, I was pointed to an article written by my former colleague Jeremiah Wilton, in which he described how he worked around the same problem on Amazon. You can read the article here. I decided to give it a try and see if it works for Azure.

We are going to use n2n, a peer-to-peer VPN provided by ntop.
They have mentioned that development of the product has been put on hold, but the tool is still widely used and provides an acceptable solution to our problem. I used Stuart Buckell's article on how to set it up, and it worked for me.
We could just use an already precompiled package, but compiling the utility from source gives us the opportunity to disable encryption and compression, or change any other options.

Here is what I’ve done:
Installed the kernel headers to be able to compile n2n:

[root@oradb5 n2n_v2]# yum install kernel-headers
ol6_UEK_latest | 1.2 kB 00:00
ol6_u4_base | 1.4 kB 00:00
ol6_u4_base/primary | 2.7 MB 00:00
ol6_u4_base 8396/8396
Setting up Install Process
Resolving Dependencies
....

Installed subversion utility:

[root@oradb5 /]# yum install subversion.x86_64
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package subversion.x86_64 0:1.6.11-15.el6_7 will be installed
.............

Downloaded the sources using svn:

[root@oradb5 /]# svn co https://svn.ntop.org/svn/ntop/trunk/n2n
Error validating server certificate for 'https://svn.ntop.org:443':
- The certificate hostname does not match.
Certificate information:
- Hostname: shop.ntop.org
- Valid: from Sun, 15 Nov 2015 00:00:00 GMT until Wed, 14 Nov 2018 23:59:59 GMT
- Issuer: COMODO CA Limited, Salford, Greater Manchester, GB
- Fingerprint: fb:a6:ff:a7:58:f3:9d:54:24:45:e5:a0:c4:04:18:d5:58:91:e0:34
(R)eject, accept (t)emporarily or accept (p)ermanently? p
A n2n/n2n_v1
A n2n/n2n_v1/lzodefs.h
A n2n/n2n_v1/README
...............

Disabled encryption and compression using this article.
Changed directory to n2n/n2n_v2 and compiled it:

[root@oradb5 n2n_v2]# make
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c n2n.c
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c n2n_keyfile.c
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c wire.c
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c minilzo.c
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c twofish.c
..............................

Copied files to the both my servers (oradb5 and oradb6) to /usr/sbin directory:

[root@oradb5 n2n_v2]# cp supernode /usr/sbin/
[root@oradb5 n2n_v2]# cp edge /usr/sbin/

Start the supernode daemon on the first node. We only need it running on one machine, and it can even be a totally different machine. I am using port 1200 for it:

[root@oradb5 ~]# supernode -l 1200
[root@oradb5 ~]#

Started the edge on both servers. On oradb5 I am creating a sub-interface with IP 192.168.1.1 and providing some parameters:
-E – Accept multicast MAC addresses (default=drop).
-r – Enable packet forwarding through n2n community.
-c – n2n community name the edge belongs to.
-l – our supernode address:port.

[root@oradb5 ~]# edge -l 10.0.2.11:1200 -c RAC -a 192.168.1.1 -E -r

[root@oradb6 ~]# edge -l 10.0.2.11:1200 -c RAC -a 192.168.1.2 -E -r

So we get an edge0 interface on both nodes and can use it for the connections that require multicast:

[root@oradb5 ~]# ifconfig edge0
edge0 Link encap:Ethernet HWaddr 52:CD:8E:20:3D:E5
inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::50cd:8eff:fe20:3de5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:592 (592.0 b)

[root@oradb5 ~]#

On the second box:

[root@oradb6 ~]# ifconfig edge0
edge0 Link encap:Ethernet HWaddr 7E:B1:F1:41:7B:B7
inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::7cb1:f1ff:fe41:7bb7/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:362 (362.0 b)

[root@oradb6 ~]#

Now we can run our multicast test again for the edge0 interface and see how it works.

[oracle@oradb5 ~]$ cd mcasttest/
[oracle@oradb5 mcasttest]$ ./mcasttest.pl -n oradb5,oradb6 -i edge0
########### Setup for node oradb5 ##########
Checking node access 'oradb5'
Checking node login 'oradb5'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb5'
Distributing mcast2 binary to node 'oradb5'
########### Setup for node oradb6 ##########
Checking node access 'oradb6'
Checking node login 'oradb6'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb6'
Distributing mcast2 binary to node 'oradb6'
########### testing Multicast on all nodes ##########

Test for Multicast address 230.0.1.0

Nov 24 16:22:12 | Multicast Succeeded for edge0 using address 230.0.1.0:42000

Test for Multicast address 224.0.0.251

Nov 24 16:22:13 | Multicast Succeeded for edge0 using address 224.0.0.251:42001
[oracle@oradb5 mcasttest]$

As you can see, the test has completed successfully. So, the edge0 interface can be used now for any connections requiring multicast support.
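
Keep in mind that the supernode and edge processes will not survive a reboot on their own. One simple way to bring them back automatically (a sketch, assuming /etc/rc.local is used on these Oracle Linux 6 boxes; append & to either command if it stays in the foreground on your build):

# /etc/rc.local on oradb5 (runs the supernode and the local edge)
/usr/sbin/supernode -l 1200
/usr/sbin/edge -l 10.0.2.11:1200 -c RAC -a 192.168.1.1 -E -r

# /etc/rc.local on oradb6 (edge only)
/usr/sbin/edge -l 10.0.2.11:1200 -c RAC -a 192.168.1.2 -E -r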

In my next article I will show you how to create an Oracle RAC on Azure using the multicast interface we just created and shared storage.

Categories: DBA Blogs

Cloning 10.2.0.3 Oracle Home on fully patched 11.31 HP-UX hangs

Bobby Durrett's DBA Blog - Thu, 2016-02-04 11:38

I based this blog post on information that I learned from this Oracle Support document:

Runinstaller And Emctl Do Not Work After Upgrading HP-UX 11.31 To 11.31 Update3 (Sep 2008) (Doc ID 780102.1)

My situation was slightly different from what Oracle’s note describes so I thought it would be helpful to document what I found.

In my case I am cloning an Oracle 10.2.0.3 home onto a fully patched HP-UX 11.31 server. I have used this same clone process on Oracle 11.1, 11.2, and 12.1 Oracle homes with no issues. The symptom is that the 10.2 clone process just hangs with no helpful messages.

I searched Oracle's support site and the web for issues with 10.2 cloning and could not find anything that matched my symptoms. I then decided to give up on cloning and instead try to install the base 10.2.0.1 binaries and patch them to match the home that I was trying to clone. The 10.2.0.1 install also hung. But I know that the 10.2.0.1 install works, since we have used it on many other similar systems; those, however, were on HP-UX 11.23 and not the fully patched HP-UX 11.31. So, I searched for installer issues on 11.31 and 10.2 and found the Oracle document listed above.

Evidently there is some bug with the JDK that Oracle included in 10.2 so that it does not work with HP-UX 11.31 with the current patches. A later version of Java resolves the issue.

Now that I understood the issue I decided to go back to the clone and try to apply the recommendations from the Oracle note, even though it doesn’t mention cloning.

The Oracle note suggests adding the -jreLoc /opt/java1.4 option to the runInstaller command line. The only catch is that my HP system did not have /opt/java1.4; the oldest Java we have installed is in /opt/java1.5. So, I tried the clone with the -jreLoc /opt/java1.5 option and it got almost to the end of the clone before it hung on some emctl step. Then I realized that I needed to follow the steps in the Oracle note to rename the Oracle home's jdk directory and set up a link to the Java 1.5 directory. So, I did these steps to point to the correct jdk directory:

cd $ORACLE_HOME

mv jdk jdk.orig

ln -s /opt/java1.5 jdk
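
A quick sanity check that the symlinked JDK is the one the installer will pick up (not part of the note, just a habit):

$ORACLE_HOME/jdk/bin/java -version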

Then I ran the clone with this command line:

$ORACLE_HOME/oui/bin/runInstaller -jreLoc /opt/java1.5 -clone -silent ORACLE_HOME=$ORACLE_HOME ORACLE_BASE=$ORACLE_BASE ORACLE_HOME_NAME=$ORACLE_HOME_NAME

Success!

It wasn't that hard to apply the note to the clone situation, but I thought it was worth blogging. If someone googles runInstaller clone hang 10.2 11.31 and needs the solution, they will find it.

Of course, I may be the only person in the world cloning a 10.2 Oracle Home on an HP-UX Itanium 11.31 system, but it’s here if someone needs it.

Bobby

Categories: DBA Blogs

Recover from ORA-01172 & ORA-01151

DBASolved - Tue, 2016-01-19 07:48

This morning I was working on an Oracle Management Repository (OMR) for a test Enterprise Manager that is used by a few consultants I work with. When I logged into the box, I found that the OMR was down. When I went to start the database, I was greeted with ORA-01172 and ORA-01151.

These errors basically say:

ORA-01172 – recovery of thread % stuck at block % of file %
ORA-01151 – use media recovery to recover block, restore backup if needed

So how do I recover from this? The solution is simple; I just needed to perform the following steps:

1. Shutdown the database

SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.

2. Mount the database

SQL> startup mount;
ORACLE instance started.
Total System Global Area 1.0033E+10 bytes
Fixed Size 2934696 bytes
Variable Size 1677723736 bytes
Database Buffers 8321499136 bytes
Redo Buffers 30617600 bytes
Database mounted.

3. Recover the database

SQL> recover database;
Media recovery complete.

4. Open the database with “alter database”

SQL> alter database open;
Database altered.
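
To confirm that nothing is still flagged for media recovery after the database is open, a quick scripted check like this should return no rows (a sketch using sqlplus from the shell):

sqlplus -s / as sysdba <<'EOF'
select file#, online_status, error from v$recover_file;
EOF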

At this point, you should be able to access the database (OMR) and then have the EM environment up and running.

Enjoy!

about.me:http://about.me/dbasolved


Filed under: Database
Categories: DBA Blogs

Defining Resources in #GoldenGate Studio 12c

DBASolved - Fri, 2016-01-15 16:12

As I've been working with the beta of GoldenGate Studio 12c, I have tried to do simple things to see what will break and what is needed to make the process work. One of the things that I like about the studio is that, prior to creating any solutions, mappings or projects, you can define what databases and GoldenGate instances will be used during the design process. What I want to show you in this blog post is how to create the database resource and the GoldenGate instance resource.

Creating a Resource:

To create a database resource, after opening GoldenGate Studio, go to the Resources tab. On this tab, you will see that it is empty. This is because no resources have been created yet.

In the left hand corner of the Resources tab, you should see a folder with a small arrow next to it. When you click on the arrow, you are provided with a context menu that provides you with three options for resources (Databases, Global Mappings, and GoldenGate Instances).


Database Resources:

Now that you know how to select which resource you want to create, let's create a database resource. To do this, select the database resource from the context menu. This will open up a one-page wizard/dialog for you to fill out with the connection information for the database you want to use as a resource.

You will notice there are a few fields that need to be populated. Provide the relevant information needed to connect to the database. Once all the information has been provided, you can test the connection to validate that it works before clicking OK.

Once you click OK, the database resource will be added to the Resources tab under the Databases header.

Notice that the database is automatically connected to once it is created. This allows you to immediately start using the resource for mappings and global mappings.

GoldenGate Instance Resources:

The GoldenGate Instance resources are a little more complex to configure. This is due to the requirement that the GoldenGate environment has to have the GoldenGate Monitoring Agent (aka JAgent, 12.1.3.0) running. This is the same JAgent that is used with the OEM plug-in. If you need more information on how to install and configure the JAgent, you can find it here.

Now, to create a new GoldenGate Instance resource, you follow the same approach as you would to create a database resource; instead of selecting database; select GoldenGate Instance. This will open up the GoldenGate Instance wizard/dialog for you to fill out. Provide all the information requested.

In setting up the GoldenGate Instance, there are a few things that you need to provide. In my opinion, the names of the items requested in the GoldenGate Information section are misleading. To make this a bit easier, I’m providing an explanation of what each field means.

GoldenGate Version: This is the version of GoldenGate running with the JAgent
GoldenGate Database Type: The database which GoldenGate is running against. There are multiple options here.
GoldenGate Port: This is the port number of the manager process
Agent Username: This is the username that is defined in $GGAGENT_HOME/cfg/Config.properties
Agent Password: This is the password that is created and stored in the datastore for the JAgent
Agent Port: This is the JMX port number that is defined in $GGAGENT_HOME/cfg/Config.properties

After providing all the required information, you can then perform a test connection. If the connection is successful, then you can click “ok” to create the GoldenGate Instance resource. If the connection fails, then you need to confirm all your settings.

Once all the resources you need for designing your GoldenGate architecture are defined, you will see them under the Resources tab.

Now that you know how to create resources in GoldenGate Studio, it will help you in designing your replication flows.

Enjoy!

about.me:http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

John King in Cleveland CTOWN for September meeting

Grumpy old DBA - Mon, 2015-08-17 04:50
For our September 18 Friday afternoon quarterly meeting, the Northeast Ohio Oracle Users Group is lucky to have a great speaker, Oracle Ace Director John King, presenting. It's the usual networking and free lunch beginning at noon at the Rockside Oracle offices, followed by a quick business meeting and presentations starting at 1 pm.

Full details on NEOOUG Sept 2015 presentations

November meeting we have Daniel Morgan woo hoo!
Categories: DBA Blogs

Two great speakers coming up in fall for NEOOUG quarterly meetings

Grumpy old DBA - Sun, 2015-08-02 18:03
Full details will be out soon over at the NEOOUG website, but we have John King from Colorado speaking in September and Daniel Morgan speaking in November for our quarterly meetings.

These guys are both Oracle Ace Directors and great, dynamic speakers; it should be a great time.

Complete information, including bios and abstracts for the sessions, should be out soon.

Thanks John
Categories: DBA Blogs

Sorry for the radio silence but this post will make up for it ( or maybe not ha ha ) makes me grumpy

Grumpy old DBA - Mon, 2015-07-13 17:07
It's been a busy year; apologies for the lack of posts. I have learned a couple of good Oracle-related things and probably forgotten more, but most of the stuff I work on is now somewhat confidential, which doesn't make it easy to blog about.

Last week at the Federal Reserve Bank of Cleveland we had a very important visitor. It was Janet Yellen, the chairwoman of, well, everything: the chair of the Federal Reserve.

She was delivering a speech at the well-known Cleveland City Club but stopped by our bank. My area was involved in doing two brief demos of what we are working on, and she sat on the other side of the table across from me. I did not have a speaking part (just eye candy, I guess, ha ha), but the people in my area who did speak were also in the pictures. Maybe next time?

I am kind of thinking about a presentation on SQL plan baselines as something to work on this winter. There are a number of good ones out there already, but the critical part of using a baseline is figuring out where to get one from (and quickly, ha ha).


Categories: DBA Blogs