Feed aggregator

How to Clean Up GoldenGate Integrated Extract if the Delete Extract fails?

VitalSoftTech - Thu, 2016-06-16 15:13
If the Extract process was not unregistered from the database, cleaning up the GoldenGate Integrated Extract using DBMS_CAPTURE_ADM.DROP_CAPTURE will be necessary.
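A minimal sketch of that cleanup, run as a suitably privileged user. The capture name OGG$CAP_EXT1 below is a hypothetical example; query DBA_CAPTURE first to find the real one:

```sql
-- list capture processes left behind by the failed DELETE EXTRACT
SELECT capture_name, status FROM dba_capture;

-- stop the orphaned capture if it is still running, then drop it
-- (the name 'OGG$CAP_EXT1' is a hypothetical example)
EXEC DBMS_CAPTURE_ADM.STOP_CAPTURE('OGG$CAP_EXT1');
EXEC DBMS_CAPTURE_ADM.DROP_CAPTURE('OGG$CAP_EXT1');
```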
Categories: DBA Blogs

Graph frequently executed SQL by FORCE_MATCHING_SIGNATURE

Bobby Durrett's DBA Blog - Thu, 2016-06-16 15:10

I made a new graph in my PythonDBAGraphs program. Here is an example with real data but the database name blanked out:

[Graph image: sql_matching_group_of_signatures_blog]

My graphs are all sized for 1920 x 1080 monitors so I can see all the detail in the lines using my entire screen. The idea for this graph is to show how the performance of the queries that matter to the users changes as we add more load and data to this production database. I knew that this database had many queries with literals in their where clauses. I decided to pick a group of SQL by FORCE_MATCHING_SIGNATURE and to graph the average elapsed run time against the total number of executions.

I used this query to list all the SQL by signature:

column FORCE_MATCHING_SIGNATURE format 99999999999999999999

select FORCE_MATCHING_SIGNATURE,
sum(ELAPSED_TIME_DELTA)/1000000 total_seconds,
sum(executions_delta) total_executions,
count(distinct sql_id) number_sqlids,
count(distinct snap_id) number_hours,
min(PARSING_SCHEMA_NAME)
from DBA_HIST_SQLSTAT
group by FORCE_MATCHING_SIGNATURE
order by number_hours desc;

This is an edited version of the output – cut down to fit the page:

FORCE_MATCHING_SIGNATURE TOTAL_SECONDS TOTAL_EXECUTIONS NUMBER_HOURS
------------------------ ------------- ---------------- ------------
    14038313233049026256     22621.203         68687024         1019
    18385146879684525921    18020.9776        157888956         1013
     2974462313782736551    22875.4743           673687          993
    12492389898598272683    6203.78985         66412941          992
    14164303807833460050    4390.32324           198997          980
    10252833433610975622    6166.07675           306373          979
    17697983043057986874    17391.0907         25914398          974
    15459941437096211273    9869.31961          7752698          967
     2690518030862682918    15308.8561          5083672          952
     1852474737868084795    50095.5382          3906220          948
     6256114255890028779    380.095915          4543306          947
    16226347765919129545    9199.14289           215756          946
    13558933806438570935    394.913411          4121336          945
    12227994223267192558    369.784714          3970052          945
    18298186003132032869    296.887075          3527130          945
    17898820371160082776    184.125159          3527322          944
    10790121820101128903    2474.15195          4923888          943
     2308739084210563004    265.395538          3839998          941
    13580764457377834041    2807.68503         62923457          934
    12635549236735416450    1023.42959           702076          918
    17930064579773119626    2423.03972         61576984          914
    14879486686694324607     33.253284            17969          899
     9212708781170196788     7292.5267           126641          899
      357347690345658614    6321.51612           182371          899
    15436428048766097389     11986.082           334125          886
     5089204714765300123    6858.98913           190700          851
    11165399311873161545    4864.60469         45897756          837
    12042794039346605265    11223.0792           179064          835
    15927676903549361476    505.624771          3717196          832
     9120348263769454156    12953.0746           230090          828
    10517599934976061598     311.61394          3751259          813
     6987137087681155918    540.565595          3504784          809
    11181311136166944889      5018.309         59540417          808
      187803040686893225    3199.87327         12788206          800

I picked the ones that had executed in 800 or more hours. Our AWR has about 1000 hours of history so 800 hours represents about 80% of the AWR snapshots. I ended up pulling one of these queries out because it was a select for update and sometimes gets hung on row locks and skews the graph. So, the graph above has that one pulled out.

I based the graph above on this query:

select
sn.END_INTERVAL_TIME,
sum(ss.executions_delta) total_executions,
sum(ELAPSED_TIME_DELTA)/((sum(executions_delta)+1))
from DBA_HIST_SQLSTAT ss,DBA_HIST_SNAPSHOT sn
where ss.snap_id=sn.snap_id
and ss.INSTANCE_NUMBER=sn.INSTANCE_NUMBER
and ss.FORCE_MATCHING_SIGNATURE in
(
14038313233049026256,
18385146879684525921,
2974462313782736551,
12492389898598272683,
14164303807833460050,
10252833433610975622,
17697983043057986874,
15459941437096211273,
2690518030862682918,
6256114255890028779,
16226347765919129545,
13558933806438570935,
12227994223267192558,
18298186003132032869,
17898820371160082776,
10790121820101128903,
2308739084210563004,
13580764457377834041,
12635549236735416450,
17930064579773119626,
14879486686694324607,
9212708781170196788,
357347690345658614,
15436428048766097389,
5089204714765300123,
11165399311873161545,
12042794039346605265,
15927676903549361476,
9120348263769454156,
10517599934976061598,
6987137087681155918,
11181311136166944889,
187803040686893225
)
group by sn.END_INTERVAL_TIME
order by sn.END_INTERVAL_TIME;

Only time will tell if this really is a helpful way to check system performance as the load grows, but I thought it was worth sharing what I had done. Some part of this might be helpful to others.

Bobby

Categories: DBA Blogs

Q4 FY16 SAAS AND PAAS REVENUES WERE UP 66%, AND UP 68% IN CONSTANT CURRENCY

Oracle Press Releases - Thu, 2016-06-16 15:00
Press Release
Q4 FY16 SAAS AND PAAS REVENUES WERE UP 66%, AND UP 68% IN CONSTANT CURRENCY
Q1 FY17 SaaS and PaaS Revenues Expected to be Up 75% to 80%

Redwood Shores, Calif.—Jun 16, 2016

Oracle Corporation (NYSE: ORCL) today announced fiscal 2016 Q4 results. Total Q4 Revenues were $10.6 billion, down 1% in U.S. dollars and flat in constant currency. Cloud plus On-Premise Software Revenues were $8.4 billion, flat in U.S. dollars and up 2% in constant currency. Cloud software as a service (SaaS) and platform as a service (PaaS) revenues were $690 million, up 66% in U.S. dollars and up 68% in constant currency. Total Cloud revenues, including infrastructure as a service (IaaS), were $859 million, up 49% in U.S. dollars and up 51% in constant currency. Operating Income was $4.0 billion, and Operating Margin was 37%. Non-GAAP Operating Income was $4.8 billion, and the non-GAAP Operating Margin was 45%. Net Income was $2.8 billion while non-GAAP Net Income was $3.4 billion. Earnings Per Share was $0.66, while non-GAAP Earnings Per Share was $0.81. Without the impact of the U.S. dollar strengthening compared to foreign currencies, Oracle’s reported GAAP Earnings Per Share would have been 2 cents higher and non-GAAP Earnings Per Share would have been 1 cent higher.

Short-term deferred revenues were $7.7 billion, up 6% in U.S. dollars and up 7% in constant currency compared with a year ago. Operating cash flow on a trailing twelve-month basis was $13.6 billion.

For fiscal 2016, Total Revenues were $37.0 billion, down 3% in U.S. dollars and up 2% in constant currency. Cloud plus On-Premise Software Revenues were $29.0 billion, down 2% in U.S. dollars and up 3% in constant currency. Cloud SaaS and PaaS revenues were $2.2 billion, up 49% in U.S. dollars and up 52% in constant currency. Total Cloud revenues, including IaaS, were $2.9 billion, up 36% in U.S. dollars and up 40% in constant currency. Operating Income was $12.6 billion, and Operating Margin was 34%. Non-GAAP Operating Income was $15.8 billion and non-GAAP Operating Margin was 43%. Net Income was $8.9 billion while non-GAAP Net Income was $11.2 billion. Earnings Per Share was $2.07, while non-GAAP Earnings Per Share was $2.61. Without the impact of the U.S. dollar strengthening compared to foreign currencies, Oracle’s reported GAAP and non-GAAP Earnings Per Share would have been 17 cents higher.

“Fourth quarter SaaS and PaaS revenue growth accelerated to 68% in constant currency, significantly higher than my guidance,” said Oracle CEO, Safra Catz. “SaaS and PaaS gross margins continued to improve throughout the year, exiting FY16 at 56%. Bookings in Q4 were also very strong enabling us to raise our guidance for Q1 SaaS and PaaS revenue growth, which we now expect to be between 75% and 80%.”

“We added more than 1,600 new SaaS customers and more than 2,000 new PaaS customers in Q4,” said Oracle CEO, Mark Hurd. “In Fusion ERP alone, we added more than 800 new cloud customers. Today, Oracle has nearly 2,600 Fusion ERP customers in the Oracle Public Cloud – that’s ten-times more cloud ERP customers than Workday.”

“We expect that the SaaS and PaaS hyper-growth we experienced in FY16 will continue on for the next few years,” said Oracle Executive Chairman and CTO, Larry Ellison. “That gives us a fighting chance to be the first cloud company to reach $10 billion in SaaS and PaaS revenue. We’re also very excited about the availability of version 2 of Oracle’s Infrastructure as a Service (IaaS)—which will enable us to speed up the growth of our IaaS business, which customers want to buy in conjunction with our SaaS and PaaS.”

The Board of Directors also declared a quarterly cash dividend of $0.15 per share of outstanding common stock. This dividend will be paid to stockholders of record as of the close of business on July 6, 2016, with a payment date of July 27, 2016.

Q4 Fiscal 2016 Earnings Conference Call and Webcast

Oracle will hold a conference call and webcast today to discuss these results at 2:00 p.m. Pacific. You may listen to the call by dialing (816) 287-5563, Passcode: 425392. To access the live webcast of this event, please visit the Oracle Investor Relations website at http://www.oracle.com/investor. In addition, Oracle’s Q4 results and fiscal 2016 financial tables are available on the Oracle Investor Relations website.

A replay of the conference call will also be available by dialing (855) 859-2056 or (404) 537-3406, Pass Code: 28515799.

Contact Info
Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE: ORCL), visit www.oracle.com or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

"Safe Harbor" Statement

Statements in this press release relating to Oracle's future plans, expectations, beliefs, intentions and prospects, including statements regarding our future guidance, and expectations of future SaaS, PaaS and IaaS growth, are "forward-looking statements" and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. We presently consider the following to be among the important factors that could cause actual results to differ materially from expectations: (1) Our cloud computing strategy, including our Oracle Cloud SaaS, PaaS, IaaS and data as a service offerings, may not be successful. (2) If we are unable to develop new or sufficiently differentiated products and services, or to enhance and improve our products and support services in a timely manner or to position and/or price our products and services to meet market demand, customers may not buy new software licenses, cloud software subscriptions or hardware systems products or purchase or renew support contracts. (3) If the security measures for our software, hardware, services or Oracle Cloud offerings are compromised or if such offerings contain significant coding, manufacturing or configuration errors, we may experience reputational harm, legal claims and financial exposure. (4) We may fail to achieve our financial forecasts due to such factors as delays or size reductions in transactions, fewer large transactions in a particular quarter, fluctuations in currency exchange rates, delays in delivery of new products or releases or a decline in our renewal rates for support contracts. (5) Our international sales and operations subject us to additional risks that can adversely affect our operating results, including risks relating to foreign currency gains and losses. (6) Economic, geopolitical and market conditions, including the continued slow economic recovery in the U.S. 
and other parts of the world, can adversely affect our business, results of operations and financial condition, including our revenue growth and profitability, which in turn could adversely affect our stock price. (7) We have an active acquisition program and our acquisitions may not be successful, may involve unanticipated costs or other integration issues or may disrupt our existing operations. A detailed discussion of these factors and other risks that affect our business is contained in our SEC filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading "Risk Factors." Copies of these filings are available online from the SEC or by contacting Oracle Corporation's Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of June 16, 2016. Oracle undertakes no duty to update any statement in light of new information or future events. 

Patch analysis fails at step "Target Status" and at "Build Target Patch" for EM12cR5 GI+DB APR2016 Patches

Arun Bavera - Thu, 2016-06-16 14:10
Issue: Patch analysis fails at step "Target Status" and at "Build Target Patch" for EM12cR5 GI+DB APR2016 Patches
Requirement:
Patch to be applied for DB+GI for 12.1.0.2 using EM Patch Automation
Description: APR2016
DB PSU: 22291127 (12.1.0.2.160419)
GI PSU: 22646084 (12.1.0.2.160419), includes 22291127
Proactive Bundle Patch: 22899531 - 1.7GB
From <https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=417353674774707&id=1454618.1>
Reference Notes:
DB Patches:
12.1.0.2 Patch Set Updates - List of Fixes in each PSU (Doc ID 1924126.1)
12.1.0.2 Patch Set - Availability and Known Issues (Doc ID 1683799.1)
Quick Reference to Patch Numbers for Database PSU, SPU(CPU), Bundle Patches and Patchsets (Doc ID 1454618.1)
From <http://oraforms.blogspot.com/2015/09/em12c-12105-upgrade-tasks.html>
EM12c Patches:
Applying Enterprise Manager 12c Recommended Patches (Doc ID 1664074.1)
From <http://oraforms.blogspot.com/2015/09/em12c-12105-upgrade-tasks.html>
Enterprise Manager 12.1.0.5.0 (PS4) Master Bundle Patch List (Doc ID 2038446.1)
Enterprise Manager Patches required for setting up Provisioning, Patching and Cloning (Deployment Procedures) (Doc ID 427577.1)


System Details:
EM 12.1.0.5, DB Plugin 12.1.0.8

OMS Side Patches:
  OMS PSU APR Patch: 22570344
  OMS System Side Bundle Patch: 23218275
  Patch: BUG 21364690

Agent Side Patches:
  Agent-Side 12.1.0.5.160531 (May 31, 2016): Patch 23124338
  Agent-Side Monitoring 12.1.0.8.160531 (May 31, 2016): Patch 23217639
  Agent-Side Discovery 12.1.0.8.5 (November 30, 2015): Patch 22135601


Steps Taken:
 
1. We started with GRID Infrastructure PSU APR2016 Patch in the lab cluster 22646084 (12.1.0.2.160419).
During the EM patch analysis we encountered a mismatch between the target configuration info collected in the EM repository (from the agent) and that collected by the "oplan" tool of OPatch.
A similar bug (in that issue the EM collection and the oplan collection differed in lowercase vs. uppercase hostnames; the customer had an uppercase short hostname in /etc/hosts):
BUG 20323468 - Run Configuration Compare fails with - invalid instances monitored by EM -> BUG 21258331 - CONFIGURATIONCOMPARE SHOULD BE CASE INSENSITIVE REGARDING THE HOSTNAME
In our case the discrepancies were between target names using the FQDN and the short hostname.


2. We then tried DB PSU 22291127 (12.1.0.2.160419), which failed at the 'Target Status Check' step. We determined this to be a concurrency issue: we were able to overcome it by removing the additional ClusterDBService, after which patch analysis succeeded for the DB-side patch. We then created one more ClusterDBService to test this DB PSU again. If we retry this step one node at a time, we can avoid the concurrency issue.
Errors:
Another datapatch session is currently running.
Waiting for that session to complete before continuing...
Queryable inventory could not determine the current opatch status.
Execute 'select dbms_sqlpatch.verify_queryable_inventory from dual' and/or check the invocation log /ora01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_54049_2016_06_16_14_37_26/sqlpatch_invocation.log for the complete error.
Prereq check failed, exiting without installing any patches.
Error Codes: ORA-20009: Job Load_opatch_inventory_1 execution failed
We have a partial workaround, but to fully automate we need fixes for:
a. BUG 22204310 - CANNOT INVOKE DATAPATCH -PREREQ IN A CONCURRENT FASHION 
b. Bug 19178275 - fixes concurrent access to QI when multiple DBs access it in a shared OH  - is not backported yet
c. Bug 20879709 - concurrency with dbms_qopatch apis with fixed table implementation  -  is not backported yet

3. We went back to troubleshooting the GI PSU and found the root cause: two of the nodes were referring to the wrong Oracle Central Inventory, which is why the EM patch analysis was failing during the configuration-compare step. We fixed this.
The GI PSU patch plan then failed at the last step, with related errors in EMOMS.log:
2016-06-15 15:46:35,246 [RJob Step 250083] ERROR dpGenerator.MultiPatchProcessor logp.251 - 22646084{, root=/%emd_emstagedir%/p22646084_600000000009300_226_0/oracle, oracle=/%emd_emstagedir%/p22646084_600000000009300_226_0/oracle} 2
2016-06-15 15:48:26,137 [RJob Step 250083] ERROR dpIntg.BuildPrereqProcAndConfig logp.251 - Exception while building the prerequisite procedure
oracle.oplan.sdk.oplan.OPlanInternalException: Unsupported Action Type:oracle.opatch.system.automation.jaxb.CommandAction subtype:preRACPatchApplyshell
Error: oracle.opatch.system.automation.jaxb.CommandAction subtype:preRACPatchApplyshell



Analysis:
The first hit you get on this issue is Bug 21919097 - The Prerequisite Checks Required by OPlan Fail With Unsupported Action Type, closed as not a bug because the underlying issue is in BUG 20950255 - UNABLE TO APPLY APR PSU VIA EM - FAILS AT ANALYSIS AND UNABLE TO PROCEED. That is fixed in EM DB Plugin 12.1.0.8, which we had already applied.

Another fix required for this issue is in:
Patch 21364690: Tracking bug to deliver one-off patch for oplan bugs 20978207 and 20995318 DB Plugin (OMS) 12.1.0.8.0
Action: Fix available at: https://support.oracle.com/epmos/faces/PatchDetail?requestId=19551356&_afrLoop=418222758860502&patchId=21364690&_afrWindowMode=0&_adf.ctrl-state=n3gg18298_1124 – We need to apply this patch

Summary:
We have two issues here:
1. Concurrency issue with the "Target Status Check" step. We can check with support whether it is related and get fixes for BUG 22204310, Bug 19178275 and Bug 20879709.
2. GI PSU failing at the last-but-one step, "building the prerequisite procedure". We applied the fix for 21364690 and it fixed the issue.

Categories: Development

SQLcl and Query Change Notification

Kris Rice - Thu, 2016-06-16 12:23
The database has had Query Change Notification for a while, but using it required a custom program, such as the one Tim outlines on his blog: https://oracle-base.com/articles/10g/dbms_change_notification_10gR2. Since SQLcl has Nashorn, it can now be integrated with a few lines of JavaScript to get notified when changes happen. The script is below and posted. The catch is QCN only works on Varchars and

Services -- 2 : Starting and Connecting to Services (non-RAC)

Hemant K Chitale - Thu, 2016-06-16 10:22
Continuing with the 12.1.0.2 non-RAC MultiTenant environment and two services demonstrated earlier,

I have restarted the environment today :

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 22:57:17

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$

[oracle@ora12102 Desktop]$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 22:57:31 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1644167168 bytes
Fixed Size 2925024 bytes
Variable Size 1040191008 bytes
Database Buffers 587202560 bytes
Redo Buffers 13848576 bytes
Database mounted.
Database opened.
SQL> alter pluggable database open;
alter pluggable database open
*
ERROR at line 1:
ORA-65000: missing or invalid pluggable database name


SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL>

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:00:11

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=9213))
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$


The two manually created services (NEW_APP1 and NEW_APP2) don't start up automatically.

SQL> exec dbms_service.start_service('NEW_APP1');
BEGIN dbms_service.start_service('NEW_APP1'); END;

*
ERROR at line 1:
ORA-44773: Cannot perform requested service operation.
ORA-06512: at "SYS.DBMS_SERVICE_ERR", line 63
ORA-06512: at "SYS.DBMS_SERVICE", line 395
ORA-06512: at line 1


SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$ sqlplus sys/oracle@PDB1 as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:03:05 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> exec dbms_service.start_service('NEW_APP1');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.start_service('NEW_APP2');

PL/SQL procedure successfully completed.

SQL>


[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:04:36

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=9213))
Service "NEW_APP1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "NEW_APP2" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$


When I attempted to start the custom service (which, per the previous post, was created in PDB1) while still connected to CDB$ROOT, the command failed. I had to connect to PDB1 to start the service.

SQL> create tablespace hemant ;                           

Tablespace created.

SQL> create user hemant identified by hemant default tablespace hemant quota unlimited on hemant;

User created.

SQL> grant create session to hemant;

Grant succeeded.

SQL>
[oracle@ora12102 Desktop]$ tnsping NEW_APP1

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:09:00

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = NEW_APP1)))
OK (0 msec)
[oracle@ora12102 Desktop]$ tnsping NEW_APP2

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:09:05

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = NEW_APP2)))
OK (0 msec)
[oracle@ora12102 Desktop]$
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@NEW_APP1

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:09:29 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
3
SQL> show con_name

CON_NAME
------------------------------
PDB1

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@NEW_APP2

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:09:48 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:09:29 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
3
SQL> show conn_name
SP2-0158: unknown SHOW option "conn_name"
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL>


The newly created user HEMANT belongs to the Database. The user is not tied to a Service and can use either Service (whichever is running) to connect to the Database. But the two Services have two different TNSNAMES.ORA entries, differing by the SERVICE_NAME specification.

Thus, the DBA could configure some application servers to use one service name and other application servers to use another service name.
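For instance, the aliases for NEW_APP1 and NEW_APP2 could look like this in TNSNAMES.ORA. This is a sketch reconstructed from the tnsping output earlier in the post, not the actual file:

```
NEW_APP1 =
  (DESCRIPTION =
    (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = NEW_APP1))
  )

NEW_APP2 =
  (DESCRIPTION =
    (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = NEW_APP2))
  )
```

Only the SERVICE_NAME differs; the host and port are identical.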

An example of such a configuration is where the first set of application servers could be for Finance Applications called "FINANCE" and the second set of servers could be for HR Applications (in the *same* database) called "HR".  Here I create the two services but start only the FINANCE service.

[oracle@ora12102 Desktop]$ sqlplus system/oracle@pdb1 

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:13:55 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> exec dbms_service.create_service('FINANCE','FINANCE');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.start_service('FINANCE');

PL/SQL procedure successfully completed.

SQL> exec dbms_service.create_service('HR','HR');

PL/SQL procedure successfully completed.

SQL>

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:15:45

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=9213))
Service "FINANCE" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "NEW_APP1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "NEW_APP2" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$


Now the user should be able to connect to FINANCE, but not to HR.

[oracle@ora12102 Desktop]$ tnsping FINANCE

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:18:16

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = FINANCE)))
OK (0 msec)
[oracle@ora12102 Desktop]$ tnsping HR

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-JUN-2016 23:18:18

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = HR)))
OK (0 msec)
[oracle@ora12102 Desktop]$
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@FINANCE

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:18:57 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Jun 16 2016 23:09:48 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@HR

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jun 16 23:19:23 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor


Enter user-name:


The user could connect to FINANCE but the request to HR returned ORA-12514.
(Notice how "tnsping HR" succeeds but the connection does not? That is because tnsping only tests whether the listener is running; it does not test whether the database instance and the service are also running.)

So, using multiple services, the DBA can "provision" the same database to multiple applications. The entry point is the SERVICE_NAME, not the USERNAME. Users and services are independent of each other.
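As a sketch, an additional service could be provisioned with the standard DBMS_SERVICE package (the service name NEW_APP3 is made up for illustration; on 12c, run this inside the PDB as a DBA):

```sql
-- NEW_APP3 is a hypothetical service name
BEGIN
  DBMS_SERVICE.CREATE_SERVICE(service_name => 'NEW_APP3',
                              network_name => 'NEW_APP3');
  DBMS_SERVICE.START_SERVICE('NEW_APP3');
END;
/
-- Force immediate registration with the listener
ALTER SYSTEM REGISTER;
```

Once registered, lsnrctl status would show NEW_APP3 alongside the services listed above, and a TNS alias pointing at SERVICE_NAME=NEW_APP3 could connect to it.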

.
.
.
Categories: DBA Blogs

ORA-00020: maximum number of processes exceeded

Learn DB Concepts with me... - Thu, 2016-06-16 09:45


ORA-00020: maximum number of processes



This error occurs when the total number of processes connected to the Oracle database exceeds the limit set by the PROCESSES initialization parameter. The simplest way to overcome this error is to raise that limit to a value greater than the current one. Here is how to do it:
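Once you can connect, current usage can be compared against the limit via the standard v$resource_limit view, which lets you see this problem coming before logins start failing:

```sql
-- Run as a privileged user; shows how close the instance is to its limits
SELECT resource_name, current_utilization, max_utilization, limit_value
FROM   v$resource_limit
WHERE  resource_name IN ('processes', 'sessions');
```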
 



oracle@LINUX201:[~] $ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jun 13 10:20:26 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR:
ORA-00020: maximum number of processes (500) exceeded




oracle@LINUX201:[~] $ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jun 13 10:23:42 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR:
ORA-00020: maximum number of processes (500) exceeded


Enter user-name:

Disconnected from ORACLE

I wasn't able to get into the Oracle database to kill some database sessions, so I tried killing a few session processes at the OS level to make my way into the DB.

oracle@LINUX201:[~] $ ps -ef|grep oracle
.
.
.
.
.
.
.

oracle   64373     1  0 Jun08 ?        00:00:03 oracleQPDEV (LOCAL=NO)
oracle   64540     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   64554     1  0 Jun08 ?        00:00:01 oracleQPDEV (LOCAL=NO)
oracle   64633     1  0 Jun08 ?        00:00:03 oracleQPDEV (LOCAL=NO)
oracle   64637     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
.
.
.
oracle   65186     1  0 Jun08 ?        00:00:04 oracleQPDEV (LOCAL=NO)
oracle   65190     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   65192     1  0 Jun08 ?        00:00:01 oracleQPDEV (LOCAL=NO)
oracle   65202     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   65206     1  0 Jun08 ?        00:00:02 oracleQPDEV (LOCAL=NO)
root     65407 65381  0 May16 pts/2    00:00:00 sudo -u oracle -i
oracle   65408 65407  0 May16 pts/2    00:00:00 -bash
oracle   65458 65408  0 May16 pts/2    00:00:00 sqlplus
oracle   65459 65458  0 May16 ?        00:00:00 oracleQPDEV (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle   65518     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   65520     1  0 Jun08 ?        00:00:02 oracleQPDEV (LOCAL=NO)
oracle   65534     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)

oracle@LINUX201:[~] $ kill -9 64785
oracle@LINUX201:[~] $ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jun 13 10:26:25 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR:
ORA-00020: maximum number of processes (500) exceeded


Enter user-name: ^C

Killing a few processes on Linux:

oracle@LINUX201:[~] $ kill -9 65192 65085 64785 64777 64655 64653 64637


oracle@LINUX201:[~] $ ps -ef | egrep '65192|65085|64785|64777|64655|64653|64637'

.
.
.
.
oracle   50258     1  0 Jun07 ?        00:00:04 oracleQPDEV (LOCAL=NO)
oracle   50264     1  0 Jun07 ?        00:00:03 oracleQPDEV (LOCAL=NO)
oracle   50268     1  0 Jun07 ?        00:00:02 oracleQPDEV (LOCAL=NO)
.
.
.
oracle   64554     1  0 Jun08 ?        00:00:01 oracleQPDEV (LOCAL=NO)
oracle   64633     1  0 Jun08 ?        00:00:03 oracleQPDEV (LOCAL=NO)
oracle   65186     1  0 Jun08 ?        00:00:04 oracleQPDEV (LOCAL=NO)
oracle   65190     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   65202     1  0 Jun08 ?        00:00:00 oracleQPDEV (LOCAL=NO)
oracle   65206     1  0 Jun08 ?        00:00:02 oracleQPDEV (LOCAL=NO)
.
.
.

oracle@LINUX201:[~] $ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.4.0 Production on Mon Jun 13 10:30:07 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show parameter process;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
aq_tm_processes                      integer     1
cell_offload_processing              boolean     TRUE
db_writer_processes                  integer     1
gcs_server_processes                 integer     0
global_txn_processes                 integer     1
job_queue_processes                  integer     1000
log_archive_max_processes            integer     4
processes                            integer     500
processor_group_name                 string


Now reset the max processes to a greater value:

SQL> alter system set processes=1200 scope=spfile;

System altered.

SQL>  show parameter process;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
aq_tm_processes                      integer     1
cell_offload_processing              boolean     TRUE
db_writer_processes                  integer     1
gcs_server_processes                 integer     0
global_txn_processes                 integer     1
job_queue_processes                  integer     1000
log_archive_max_processes            integer     4
processes                            integer     500
processor_group_name                 string
SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
QPDEV     READ WRITE

This will need a restart to take effect:


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup;
ORACLE instance started.

Total System Global Area 3206836224 bytes
Fixed Size                  2257520 bytes
Variable Size            1275071888 bytes
Database Buffers         1912602624 bytes
Redo Buffers               16904192 bytes
Database mounted.
Database opened.

SQL> show parameter process

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
aq_tm_processes                      integer     1
cell_offload_processing              boolean     TRUE
db_writer_processes                  integer     1
gcs_server_processes                 integer     0
global_txn_processes                 integer     1
job_queue_processes                  integer     1000
log_archive_max_processes            integer     4
processes                            integer     1200
processor_group_name                 string

SQL>
Categories: DBA Blogs

Oracle Partner Community - EPM BI Big Data Bulletin June 2016

When you prototype and pilot a Big Data Hadoop solution, you may start on the cloud or a cluster of commodity servers.  However, I have heard from many partners that the transition to production...

We share our skills to maximize your revenue!
Categories: DBA Blogs

database option - backup and restore

Pat Shuff - Thu, 2016-06-16 02:07
Backup and recovery abilities are arguably the most critical skills required of a database administrator. Recovery Manager (RMAN) is Oracle’s standard backup and recovery tool; every Oracle DBA should be familiar with using RMAN. Some DBAs use alternate tools, since RMAN is an Oracle-specific tool for backing up database data. Alternatives include Veritas Backup Exec, Commvault Simpana, Legato NetWorker, EMC and NetApp tools, and other packages. I am not going to list books and software packages in this blog. A search on Safari Books for rman returns 9 books published in 2016, 16 in 2015, and 20 in 2014. There are also a ton of blogs, so I suggest you go to your favorite blog and search for rman or backup. There are hundreds of different ways to back up and restore a database, as well as to optimize how much space the database takes up in cold storage.

The important things to consider when you look at backup and recovery are

  • full or incremental backups
  • backing up to disk, tape, or cloud
  • replicating data to another site with disk mirroring, Data Guard, or Golden Gate
  • hot backup or cold backup along with middleware and file system backup synchronization
  • recovery point objective and recovery time objective
  • compression, deduplication, and encryption of data at rest
  • backup windows and restore windows

It is important to note that when you purchase DBaaS, independent of edition, backups are done for you based on the options you select at creation. When you create a database you can opt for no backup, local backup, or both local and cloud backups. The no-backup option suits development and test instances: we might just want a sandbox to play in and not care about keeping the data, since if we lose it for any reason we can recreate it from our production system. When you select local backups you get incremental backups daily at 2am and a full backup Sunday morning at 2am. This gives you a seven-day recovery window, letting you restore back to your last full backup plus daily incrementals. These backups go to the /u03 file system on the instance that you create. Nothing is copied off the instance, so a complete failure of the system could result in data loss. The combined option does the same daily incrementals and Sunday 2am full backup to /u03, but prior to the full backup the backup file is also copied to the Cloud Object Storage Container that you created before creating the database. When you create the database you specify how many days of backups to retain. If, for example, you ask for 90 days of retention you get 13 full backups copied to object storage. If you have a Java Cloud Service connected to this database, the Java configuration and war files are copied to this same area. The backup is done for you automatically and can be reconfigured and run manually using the cloud-specific commands. Refer to the Using Oracle Database Cloud - Database as a Service documentation (chapter 6) to understand how to back up using the cloud commands, keeping backups in sync with the automated utilities rather than running rman manually.

You can generate a backup before you make a change to the database with the following command

sudo /var/opt/oracle/bkup_api/bkup_api bkup_start
This command must be executed from the opc account, not the oracle account, because it needs root privileges to store data and update the logs associated with cron jobs. To restore the database from the last backup you execute

sudo dbaascli orec --args -latest

You can list the database backups that exist with

sudo dbaascli orec --args -list

To restore from a specific backup you execute

sudo dbaascli orec --args -pitr backup-tag

where backup-tag is the name listed with the -list command.

The dbaascli command shuts down the database, extracts and restores configuration files, prepares for recovery, performs the recovery, and restarts the database instance afterwards. You can use the rman utility to restore individual tables or tablespaces as usual, but these higher-level tools manage the rman repository for you and handle scheduled backups, full backups, and restores.
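For instance, restoring a single tablespace through rman follows the usual pattern (a sketch only; the tablespace name users is illustrative):

```
RMAN> RUN {
  SQL 'ALTER TABLESPACE users OFFLINE IMMEDIATE';
  RESTORE TABLESPACE users;
  RECOVER TABLESPACE users;
  SQL 'ALTER TABLESPACE users ONLINE';
}
```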

One of the nice features of RMAN is that you can duplicate a database to an alternate location and restore from an alternate location. For example, you can take a backup of your on premise database and write to the Oracle Object Cloud Service. You can then create a database in the cloud using the DBaaS and load the data into this database from your backup in cloud object storage. You can also provision an on premise database, execute replication commands to configure the second database as a Data Guard replica, initiate the backup of your on premise instance, initiate the restore on your cloud instance, and configure Data Guard to maintain the log shipping, and kick off the Data Guard service on both instances. We will cover this in an upcoming blog.
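The backup to Oracle cloud object storage is a normal rman backup over an sbt channel using the Oracle Database Cloud Backup Module; a sketch, with illustrative file paths:

```
RMAN> RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE sbt
    PARMS 'SBT_LIBRARY=/u01/app/oracle/lib/libopc.so,
           ENV=(OPC_PFILE=/u01/app/oracle/opc_pfile.ora)';
  BACKUP DATABASE PLUS ARCHIVELOG;
}
```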

Using DBaaS does get rid of the discussion of snap mirror backups. This is where you take the tablespace .dbf files, shut down the database, copy these files to another database, and ingest the dbf files into the second database. Many users like this method for backup and recovery because it works for MySQL and Postgres. It is not a recommended backup and restore mechanism for an Oracle database. Synchronization issues can result if changes are being made and data is partitioned across multiple dbf files. Tools like EMC or NetApp mirroring software work well with VMware to clone data between data centers. This technology does not work to the cloud, and data replication is more difficult if the source and target are from different hardware vendors. The Oracle database also has problems ingesting this data at times and can fail to start from mirrored instances if file consistency is not maintained between the system.dbf, user.dbf, and other tablespace files.

Data compression is also a tricky issue. The Oracle database has a compression option that it uses to minimize disk usage in the tablespace files. If you take an already-compressed dbf file and try to compress it again, you typically just consume excess processor cycles, and restoring the doubly compressed data can take up to eight times as long. It is important to look not only at your backup window but at your restore window: it might take an hour to back up and compress your data but up to 8 hours to restore it. One of the key performance benefits that Oracle brings to the table is using ZFS storage as a back end for a database. Since the database knows what the ZFS storage appliance can and can't do, it can offload the compression processing and algorithms to the storage appliance. Unfortunately, this does not work for other storage vendors. The cloud database instance uses this to its advantage, so data is automatically compressed when copies are made to object storage. The compression does not take processing power from the database engine but is offloaded to the storage engine.

You can customize the backup definition and parameters. All of the files are located in /var/opt/oracle/ocde/assistants/bkup/ and the bkup.cfg file defines how backup operates. It is important to note that there is a different command line set of tools if your database is a RAC configuration since you have to reconfigure files on both systems that comprise your cluster but they both use this directory and file to configure the backup process, timing, and procedures.

In summary, database backup is part of the database as a service at all levels. The foundation tool is RMAN. Command line api tools let you back up and restore without having to learn RMAN. If you want to reconfigure and drill down into the depths and bowels of RMAN you can do so. The backups are automatically encrypted using your wallet, which we talked about a couple of days ago. You can also use Enterprise Manager to run RMAN for your cloud service just as you do with your on premise database instances. The important thing to remember is that you don't need to learn RMAN unless you want to. If you deploy a database on infrastructure as a service you have to learn RMAN because it is not automated or configured for you. If you use Amazon RDS, backups are done by cloning the disk to another zone. You cannot set up Data Guard replication to the RDS instance because you don't have file system access to copy the change logs from your on premise database to the RDS storage. You can do this with EC2 and S3, but you lose the automated configurations and services that you get with database as a service. You will also need to purchase Advanced Security to do secure backups to IaaS instances from all cloud vendors, but not with DBaaS or PaaS from Oracle.

Links for 2016-06-15 [del.icio.us]

Categories: DBA Blogs

ORACLE AND SNAPCHAT MEASURE IMPACT OF SNAP ADS ON SALES IN STORES

Oracle Press Releases - Wed, 2016-06-15 18:38
Press Release
ORACLE AND SNAPCHAT MEASURE IMPACT OF SNAP ADS ON SALES IN STORES
New Measurement Research for Snapchat Advertisers Finds 92% of Ad Campaigns Saw Positive Offline Sales Increases

Redwood Shores, Calif.—Jun 15, 2016

Oracle and Snapchat today announced a partnership to measure the impact of digital advertising in the physical world. By teaming together, Oracle Data Cloud and Snapchat measure incremental store sales resulting from marketing campaigns run on Snapchat. By analyzing the impact of campaigns on in-store sales, Oracle Data Cloud will help consumer packaged goods (CPG) advertisers quantify and improve their ongoing marketing efforts on Snapchat.
 
Oracle Data Cloud also released new research analyzing the in-store sales lift from 12 CPG campaigns run on Snapchat for leading cosmetic, personal care, cleaning, packaged food, and beverage brands. Among the findings:
 
  • 92 percent of the Snapchat ad campaigns drove a positive lift in in-store sales
  • Snapchat ad campaigns exceeded Oracle Data Cloud norms on all key metrics, with sales lift being the primary driver
 
“We are delighted to be working with one of the fastest-growing platforms in history to help Snapchat advertisers understand and improve the effectiveness of their campaigns,” said Eric Roza, Senior Vice President of the Oracle Data Cloud. “By combining offline sales data with cutting-edge analytics, we can help Snapchat advertisers maximize their return on investment and drive higher profits.”
 
“We’ve been listening closely to advertisers and delivering the data they need,” said Clement Xue, Global Head of Revenue Operations at Snapchat. “This research shows that Snapchat is more effective at driving sales for advertisers, and we’re delighted to offer this measurement solution going forward.”
 
Oracle Data Cloud is the largest global data-as-a-service solution, with access to more than $3 trillion dollars in consumer transaction data, 2 billion global consumer profiles, and 1,500+ data partners. Oracle Data Cloud integrates that data with more than 200 major media companies, including publisher exchanges, ad networks, DSPs, DMPs, and agency trading desks.
 
Contact Info
Erik Kingham
Oracle
650-506-8298
erik.kingham@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 


5 Days Expert Oracle Security Training In Paris - 20th June 2016

Pete Finnigan - Wed, 2016-06-15 17:35

I will be teaching 5 days on my Oracle security classes in Paris from 20th June to 24th June with Oracle University at their offices and training suite. Details of the Oracle Security Event and how to register on Oracles....[Read More]

Posted by Pete On 06/06/16 At 09:59 AM

Categories: Security Blogs

Amis Conference June 2nd and 3rd

Pete Finnigan - Wed, 2016-06-15 17:35

I will be at the Amis conference next Friday in Leiden not far from Amsterdam in Holland. The conference is held over two days, June 2nd and 3rd But I will be there just on the Friday due to other....[Read More]

Posted by Pete On 26/05/16 At 11:28 AM

Categories: Security Blogs

Are Zero Days or Bugs Fixed by CPU The Worst?

Pete Finnigan - Wed, 2016-06-15 17:35

I spoke yesterday about compartmentalising Oracle Security and one element that comes out of this is the need to consider what you are trying to achieve; secure actual data and also secure the platform. In general applying security patches will....[Read More]

Posted by Pete On 25/05/16 At 12:51 PM

Categories: Security Blogs

Compartmentalised Oracle Security

Pete Finnigan - Wed, 2016-06-15 17:35

I have been teaching security classes about Oracle Security for many years and they are very popular and I teach many classes per year around the world; mostly in the UK and EEC but I also venture to the Middle....[Read More]

Posted by Pete On 24/05/16 At 12:43 PM

Categories: Security Blogs

New Oracle Security Paper on Non-Production and Delphix

Pete Finnigan - Wed, 2016-06-15 17:35

I was asked by Delphix earlier this year to review their product with a particular focus on Oracle security of course. I wrote two papers; the first about Data Masking and Delphix and the second about securing data in non-production....[Read More]

Posted by Pete On 23/05/16 At 11:23 AM

Categories: Security Blogs

Oracle Security And Delphix Paper and Video Available

Pete Finnigan - Wed, 2016-06-15 17:35

I did a webinar with Delphix on 30th March 2016 on USA time. This was a very good session with some great questions at the end from the attendees. I did a talk on Oracle Security in general, securing non-production....[Read More]

Posted by Pete On 01/04/16 At 03:43 PM

Categories: Security Blogs

3 Days of Oracle Security Training In York, UK

Pete Finnigan - Wed, 2016-06-15 17:35

I have just updated the public Oracle Security training dates on our Oracle Security training page to remove the public trainings that have already taken place this year and to add a new training in York for 2016. After the....[Read More]

Posted by Pete On 31/03/16 At 01:53 PM

Categories: Security Blogs

Oracle Data Masking and Secure Test Databases

Pete Finnigan - Wed, 2016-06-15 17:35

My daily work is helping my customers secure their Oracle databases. I do this in many ways from performing detailed security audits of key databases to helping in design of secure lock down policies to creating audit trails to teaching....[Read More]

Posted by Pete On 14/03/16 At 08:45 AM

Categories: Security Blogs

Using Jupyter Notebooks with Big Data Discovery 1.2

Rittman Mead Consulting - Wed, 2016-06-15 10:00

New in Big Data Discovery 1.2 is the addition of BDD Shell, an integration point with Python. This exposes the datasets and BDD functionality in a Python and PySpark environment, opening up huge possibilities for advanced data science work on BDD datasets. With the ability to push data modified in this environment back to Hive, and thus into BDD, this is important functionality that will make BDD even more useful for navigating and exploring big data.

Whilst BDD Shell is command-line based, there’s also the option to run Jupyter Notebooks (previously IPython Notebooks), a web-based interactive “Notebook”. This lets you build up scripts exploring and manipulating the data within BDD, using both Python and Spark. The big advantage of this over the command-line interface is that a Notebook enables you to modify and re-run commands, and then once correct retain them as a fully functioning script for future use.

The Big Data Lite virtual machine is produced by Oracle for demo and development purposes, and hosts all the components that you’d find on the Big Data Appliance, all configured and integrated for use. Version 4.5 was released recently, which included BDD 1.2.

For information on how to set up BDD Shell and Jupyter Notebooks, see this previous post. For the purpose of this article I’m running Jupyter on port 18888 so as not to clash with Hue:

cd /u01/bdd/v1.2.0/BDD-1.2.0.31.813/bdd-shell
/u01/anaconda2/bin/jupyter-notebook --port 18888

Important points to note:

  • It’s important that you run this from the bdd-shell folder, otherwise the BDD shell won’t initialise properly
  • Jupyter by default only listens locally, so you need to use a web browser local to the server, or use port-forwarding if you want to access Jupyter from your local web browser.

Go to http://localhost:18888 in your web browser, and from the New menu select a Python 2 notebook:

You should then see an empty notebook, ready for use:

The ‘cell’ (grey box after the In [ ]:) is where you enter code to run – type in execfile('ipython/00-bdd-shell-init.py') and press shift-Enter. This will execute it – if you don’t press shift you just get a newline. Whilst it’s executing you’ll notice the line prefix changes from [ ] to [*], and in the terminal window from which you launched Jupyter you’ll see some output related to the BDD Shell starting

WARNING: User-defined SPARK_HOME (/usr/lib/spark) overrides detected (/usr/lib/spark/).
WARNING: Running spark-class from user-defined location.
spark.driver.cores is set but does not apply in client mode.

Now back in the Notebook, enter the following – use Enter, not Shift-enter, between lines:

dss = bc.datasets()
dss.count

Now press shift-enter to execute it. This uses the pre-defined bc BDD context to get the datasets object, and return a count from it.

BDD_Shell_example_04

By clicking the + button on the toolbar, using the up and down arrows on the toolbar, and the Code/Markdown dropdown, it’s possible to insert “cells” which are not code but instead commentary on what the code is. This way you can produce fully documented, but executable, code objects.

From the File menu give the notebook a name, and then Close and Halt, which destroys the Jupyter process (‘kernel’) that was executing the BDD Shell session. Back at the Jupyter main page, you’ll note that an ipynb file has been created, which holds the notebook definition and can be downloaded, sent to colleagues, uploaded to blogs to share, saved in source control, and so on. Here’s the file for the notebook above – note that it’s hosted on gist, which automagically previews it as a Notebook, so click on Raw to see the actual code behind it.

The fantastically powerful thing about the Notebooks is that you can modify and re-run steps as you go — but you never lose the history of how you got somewhere. Most people will be familiar with learning or exploring a tool and its capabilities and eventually getting it to work – but with no idea how they got there. Even for experienced users of a tool, being able to prove how to replicate a final result is important for (a) showing the evidence for how they got there and (b) enabling others to take that work and build on it.

With an existing notebook file, whether a saved one you created or one that someone sent you, you can reopen it in Jupyter and re-execute it, in order to replicate the results previously seen. This is an important tenet of [data] science in general – show your workings – and it’s great that Big Data Discovery supports this option. Obviously, showing the count of datasets is not so interesting or important to replicate. The real point here is being able to take datasets that you’ve got in BDD and have already done some joining and wrangling on, taking advantage of the GUI, and then dive deep into the data science and analytics world of things like Spark MLLib, Pandas, and so on. As a simple example, I can use a couple of python libraries (installed by default with Anaconda) to plot a correlation matrix for one of my BDD datasets:

jupyter_corr_matrix
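The correlation step itself is plain pandas; a minimal sketch, with a fabricated DataFrame standing in for the BDD dataset used in the notebook:

```python
import pandas as pd

# Toy stand-in for a BDD dataset pulled into pandas
df = pd.DataFrame({
    "speed":    [30, 45, 60, 75, 90],
    "injuries": [1, 2, 2, 4, 5],
    "vehicles": [2, 2, 3, 3, 4],
})

corr = df.corr()  # Pearson correlation matrix, one row/column per numeric field
print(corr.round(2))
```

Plotting corr as a heatmap (for example with matplotlib or seaborn, both bundled with Anaconda) then gives the kind of matrix shown above.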

As well as producing visualisations or calculations within BDD shell, the real power comes in being able to push the modified data back into Hive, and thus continue to work with it within BDD.

With Jupyter Notebooks not only can you share the raw notebooks for someone else to execute, you can export the results to HTML, PDF, and so on. Here’s the notebook I started above, developed out further and exported to HTML – note how you can see not only the results, but exactly the code that I ran in order to get them. In this I took the dataset from BDD, added a column into it using a pandas windowing function, and then saved it back to a new Hive table:
(you can view the page natively here, and the ipynb here)
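The windowing step described above is ordinary pandas; a sketch (column names invented) of adding a rolling-average column of the kind the notebook writes back to Hive:

```python
import pandas as pd

df = pd.DataFrame({"accidents": [3, 5, 2, 8, 6, 4]})

# Add a 3-row rolling average; min_periods=1 keeps the first rows defined
df["accidents_3row_avg"] = df["accidents"].rolling(window=3, min_periods=1).mean()
print(df)
```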


Once the data’s been written back to Hive from the Python processing, I ran BDD’s data_processing_CLI to add the new table back into BDD

/u01/bdd/v1.2.0/BDD-1.2.0.31.813/dataprocessing/edp_cli/data_processing_CLI --table updated_accident_data

And once that’s run, I can then continue working with the data in BDD:

This workflow enables a continual loop of data wrangling, enrichment, advanced processing, and visualisation – all using the most appropriate tools for the job.

You can also use BDD Shell/Jupyter as another route for loading data into BDD. Whilst you can import CSV and XLS files into BDD directly through the web GUI, there are limitations – such as an XLS workbook with multiple sheets has to be imported one sheet at a time. I had a XLS file with over 40 sheets of reference data in it, which was not going to be time-efficient to load one at a time into BDD.

Pandas supports a lot of different input types – including Excel files. So by using Pandas to pull the data in and then converting it to a Spark dataframe, I can write it to Hive, from where it can be imported to BDD. As before, the beauty of the Notebook approach is that I could develop and refine the code, and then simply share the Notebook here
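A sketch of that multi-sheet load (the filename and helper function are hypothetical): pd.read_excel with sheet_name=None returns a dict of DataFrames, one per sheet, which can be stacked in one pass rather than imported sheet by sheet:

```python
import pandas as pd

def combine_sheets(sheets):
    """Stack a {sheet_name: DataFrame} dict, as returned by
    pd.read_excel(path, sheet_name=None), tagging rows with their sheet."""
    return pd.concat(
        (frame.assign(sheet=name) for name, frame in sheets.items()),
        ignore_index=True,
    )

# Real workflow: sheets = pd.read_excel("reference_data.xlsx", sheet_name=None)
sheets = {
    "regions": pd.DataFrame({"code": [1, 2]}),
    "roads":   pd.DataFrame({"code": [3]}),
}
combined = combine_sheets(sheets)
print(combined)
```

The combined pandas frame can then be converted to a Spark dataframe and written to Hive, as described above.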


The post Using Jupyter Notebooks with Big Data Discovery 1.2 appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing


Subscribe to Oracle FAQ aggregator