Feed aggregator

Are analytic RDBMS and data warehouse appliances obsolete?

Curt Monash - Sun, 2016-08-28 20:28

I used to spend most of my time — blogging and consulting alike — on data warehouse appliances and analytic DBMS. Now I’m barely involved with them. The most obvious reason is that there have been drastic changes in industry structure:

Simply reciting all that, however, begs the question of whether one should still care about analytic RDBMS at all.

My answer, in a nutshell, is:

Analytic RDBMS — whether on premises in software, in the form of data warehouse appliances, or in the cloud — are still great for hard-core business intelligence, where “hard-core” can refer to ad-hoc query complexity, reporting/dashboard concurrency, or both. But they aren’t good for much else.

To see why, let’s start by asking: “With what do you want to integrate your analytic SQL processing?”

  • If you want to integrate with relational OLTP (OnLine Transaction Processing), your OLTP RDBMS vendor surely has a story worth listening to. Memory-centric offerings such as MemSQL and SAP HANA are also pitched that way.
  • If you want to integrate with your SAP apps in particular, HANA is the obvious choice.
  • If you want to integrate with other work you do in the Amazon cloud, Redshift is worth a look.

Beyond those cases, a big issue is integration with … well, with data integration. Analytic RDBMS got a lot of their workloads from ELT or ETLT, which stand for Extract/(Transform)/Load/Transform. I.e., you’d load data into an efficient analytic RDBMS and then do your transformations, vs. the “traditional” (for about 10-15 years of tradition) approach of doing your transformations in your ETL (Extract/Transform/Load) engine. But in bigger installations, Hadoop often snatches away that part of the workload, even if the rest of the processing remains on a dedicated analytic RDBMS platform such as Teradata’s.
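
To make the pattern concrete, here is a schematic ELT step in Oracle-flavored SQL; every name below is illustrative rather than taken from any particular product:

-- Step 1: land the raw extract untransformed (illustrative names throughout).
create table stg_orders_raw (
  order_id varchar2(20),
  amount   varchar2(20),
  ts       varchar2(30)
);

-- (bulk load into stg_orders_raw happens here, e.g. via an external table or loader)

-- Step 2: transform inside the analytic RDBMS, where set-based SQL is efficient.
-- fact_orders is assumed to already exist.
insert into fact_orders (order_id, amount, order_date)
select to_number(order_id),
       to_number(amount),
       to_timestamp(ts, 'YYYY-MM-DD"T"HH24:MI:SS')
from stg_orders_raw;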

And suppose you want to integrate with more advanced analytics — e.g. statistics, other predictive modeling/machine learning, or graph analytics? Well — and this both surprised and disappointed me — analytic platforms in the RDBMS sense didn’t work out very well. Early Hadoop had its own problems too. But Spark is doing just fine, and seems poised to win.

My technical observations around these trends include:

  • Advanced analytics commonly require flexible, iterative processing.
  • Spark is much better at such processing than earlier Hadoop …
  • … which in turn is better than anything that’s been built into an analytic RDBMS.
  • Open source/open standards and the associated skill sets come into play too. Highly vendor-proprietary DBMS-tied analytic stacks don’t have enough advantages over open ones.
  • Notwithstanding the foregoing, RDBMS-based platforms can still win if a big part of the task lies in fancy SQL.

And finally, if a task is “partly relational”, then Hadoop or Spark often fit both parts.

  • They don’t force you into using SQL for everything, nor into putting all your data into relational schemas, and that flexibility can be a huge relief.
  • Even so, almost everybody who uses those systems uses some SQL, at least for initial data extraction. Those systems are also plenty good enough at SQL for joining data to reference tables, and all that other SQL stuff you’d never want to give up.

But suppose you just want to do business intelligence, which is still almost always done over relational data structures? Analytic RDBMS offer these trade-offs:

  • They generally still provide the best performance or performance/concurrency combination, for the cost, although YMMV (Your Mileage May Vary).
  • One has to load the data in and immediately structure it relationally, which can be an annoying contrast to Hadoop alternatives (where database administration can be just-in-time) or to OLTP integration (with less or no re-loading).
  • Other integrations, as noted above, can also be weak.

Suppose all that is a good match for your situation. Then you should surely continue using an analytic RDBMS, if you already have one, and perhaps even acquire one if you don’t. But for many other use cases, analytic RDBMS are no longer the best way to go.

Finally, how does the cloud affect all this? Mainly, it brings one more analytic RDBMS competitor into the mix, namely Amazon Redshift. Redshift is a simple system for doing analytic SQL over data that was in or headed to the Amazon cloud anyway. It seems to be quite successful.

Bottom line: Analytic RDBMS are no longer in their youthful prime, but they are healthy contributors in middle age. Mainly, they’re still best-of-breed for supporting demanding BI.

Categories: Other

Presenting at Oracle Open World 2016

Galo Balda's Blog - Sun, 2016-08-28 18:51

Just a short post to announce that I’ll be presenting a User Group Forum session. Thanks ODTUG!

SQLcl: A Modern Command Line Interface to the Oracle Database [UGF5641]

“In this session learn about the new Java-based command line interface that takes advantage of Oracle SQL Developer’s scripting engine. It delivers a modern command line interface that is backward compatible with SQL*Plus but also introduces new commands and features that have been missing for a long time. In this session, explore the new inline editing, query history, aliasing, output formatting, DDL generation, and scripting options that set SQLcl apart from its predecessor.”

Sunday, Sep 18, 8:00 a.m. – 8:45 a.m. | Moscone South—302

See you there!


Filed under: Open World, SQLcl Tagged: Open World, SQLcl
Categories: DBA Blogs

Oracle Database Cloud (DBaaS) Performance Consistency - Part 3

Randolf Geist - Sun, 2016-08-28 16:43
This is the third part of this series, comparing the performance consistency of the DBaaS cloud offering with a dedicated physical host. This time, instead of burning CPU using logical I/O only (see part 2), the test harness executes a SQL statement that mostly has to perform physical I/O. To be more precise, this round is a read-only test, so it doesn't include any modification to data (except the logging and other background system-generated write activity, like maintaining ASH/AWR data etc.).

In order to maximize the physical I/O part, the database instance was configured with a minimum sized buffer cache (16k block size in this case) and the following script was executed concurrently in as many sessions as CPUs were available:

declare
  n number;
begin
  loop
    select /*+
              leading(t_o)
              use_nl(t_i)
              index(t_o)
              index(t_i)
          */
          sum(t_i.n)
          into n
    from
          t_o
        , t_i&tabname t_i
    where
          t_o.id_fk = t_i.id;
    insert into timings(testtype, thread_id, ts) values ('&testtype', &thread_id, systimestamp);
    commit;
  end loop;
end;
/
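
Since every loop iteration appends a row to the timings table, the per-iteration runtimes behind the deviation graphs shown further below can be derived afterwards. A minimal sketch, assuming the timings table exactly as used above:

-- Runtime of each iteration = gap between consecutive timestamps per thread.
-- Deviation from the median can then be computed over these intervals.
select testtype,
       thread_id,
       ts - lag(ts) over (partition by testtype, thread_id order by ts) as run_duration
from   timings;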

The outer table T_O was created like this:

create table t_o (id primary key, id_fk)
organization index
as
select
       rownum as id
     , case mod(rownum, 2) + 1
       when 1
       then mod(rownum, &tab_size / 20)
       else &tab_size / 10 - mod(rownum, &tab_size / 20) + 1
       end as id_fk
from dual
connect by level <= &tab_size;

Each thread got its dedicated inner table of the Nested Loop join, created like this:

create table t_i' || i || ' (id not null, n, filler)
pctfree 99 pctused 1
tablespace &tbs
as
select cast(rownum as integer) as id,
cast(rownum as number) as n,
cast(rpad('x', 200) as varchar2(200)) as filler
from dual
connect by level <= &tab_size / 10;

create index t_i' || i || '_idx on t_i' || i || ' (id, filler) pctfree 99 tablespace &tbs;
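
The '|| i ||' fragments show that these statements were generated per thread. The surrounding loop isn't included in the post, so the following wrapper is an assumption (as is the &thread_count substitution variable):

begin
  -- Create one dedicated inner table plus index per concurrent session.
  for i in 1 .. &thread_count loop
    execute immediate
      'create table t_i' || i || ' (id not null, n, filler)
       pctfree 99 pctused 1
       tablespace &tbs
       as
       select cast(rownum as integer) as id
            , cast(rownum as number) as n
            , cast(rpad(''x'', 200) as varchar2(200)) as filler
       from dual
       connect by level <= &tab_size / 10';
    execute immediate
      'create index t_i' || i || '_idx on t_i' || i || ' (id, filler) pctfree 99 tablespace &tbs';
  end loop;
end;
/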


Due to the way the data was crafted and thanks to the Nested Loop join batching performed by Oracle this ensured that the sessions were performing "db file parallel read" I/O as much and hard as possible, so were submitting multiple I/O requests using a single call and taking advantage of asynchronous I/O where possible.

When running with 8 threads on those 8 CPUs on the DBaaS service this resulted in approx. 30,000 IOPS for those 16K blocks (approx. 480 MB/sec throughput; 30,000 × 16 KB ≈ 480 MB). Note that I got these 30,000 IOPS for 16K blocks also on a different instance with just 4 CPUs (2 OCPUs) and 4 threads, so it looks like this is a common 16K-block IOPS limit for a DBaaS instance, independent of the CPU count and without further tweaking (I didn't attempt to optimize this in any way but used the service as provided / configured by the Oracle Cloud). It looks like this is more a throughput limitation at 480 MB/sec than an IOPS limitation, because when using a 4K blocksize I was able to get 60,000 IOPS (median), but the IOPS rate fluctuated much more (between 20,000 and 90,000), whereas the 30,000 IOPS using the 16K blocksize was pretty much stable.

On the physical host I got approx. half of these IOPS (roughly 15,500) for those 16K blocks (approx. 250 MB/sec throughput), which is very likely the throughput limit for the rather outdated hardware configuration, albeit one using a rather modern Samsung SSD drive. It also meant that running at 4 threads I got a comparable number of IOPS per thread as in the DBaaS setup. Using a 4K blocksize the physical host maxed out at a (very stable) 52,000 IOPS.

The overall results look like this:

DBaaS:

[Graph: deviation from overall median runtime, DBaaS]

Physical host:

[Graph: deviation from overall median runtime, physical host]

Again the graphs are supposed to show how many of the runs deviated by how much from the overall median runtime. Similar to the previous, logical-I/O-only test, this time the DBaaS service shows a significantly different profile, with an even larger spread of deviation of up to almost 9 percent from the median runtime, whereas the physical host only shows significant deviation up to 2.5 percent.

Again, like in the previous test, the physical host shows more extreme outliers than the DBaaS service.

The same graph on a per-day basis this time shows significant differences between the days for the DBaaS service:

[Graph: per-day deviation, DBaaS]

The physical host shows a very common pattern, except for the first day:

[Graph: per-day deviation, physical host]

Looking at the individual performance of each thread, the DBaaS service shows a significant variation in performance per day:

[Graph: per-thread runtimes, DBaaS]

The physical host shows a pretty consistent performance pattern, but interestingly the different threads show different, yet consistent, runtimes:

[Graph: per-thread runtimes, physical host]

The next test round will include physical writes.

ORACLE_MAINTAINED Set From “_ORACLE_SCRIPT” parameter

Michael Dinh - Sun, 2016-08-28 09:26

A huge thank you to Pete Finnigan for his blog post.

Hacking Oracle 12c COMMON Users

Not only is the implementation of ORACLE_MAINTAINED not ideal, it’s a possible security risk.

My tip to you is to always exit the session before performing any object creation.

From my pain, as you can see, it looks like “_ORACLE_SCRIPT” was not reset when new users were created.

See also ORACLE_MAINTAINED Objects Don’t Export.

Learned quite a few new things this weekend and can now lay this to rest.
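
Before the demo, a quick way to double-check what a session is currently set to: hidden parameters aren't exposed in V$PARAMETER, but as SYS they can be read from the X$ views. A sketch of that common technique:

-- Current value of the hidden parameter "_ORACLE_SCRIPT" (run as SYS).
select i.ksppinm  as name,
       v.ksppstvl as value
from   x$ksppi  i
join   x$ksppcv v on v.indx = i.indx
where  i.ksppinm = '_oracle_script';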

test:(SYS@test):PRIMARY> alter session set "_ORACLE_SCRIPT"=false;

Session altered.

test:(SYS@test):PRIMARY> create user newdemo identified by demo;

User created.

test:(SYS@test):PRIMARY> select username,oracle_maintained from dba_users order by 2;

USERNAME             O
-------------------- -
TEST1                N
MDINH                N
TESTING              N
DEMO                 N
NEWDEMO              N
GSMUSER              Y
AUDSYS               Y
ANONYMOUS            Y
DBSNMP               Y
XDB                  Y
APPQOSSYS            Y
GSMADMIN_INTERNAL    Y
SYSBACKUP            Y
OUTLN                Y
SYSDG                Y
SYSKM                Y
SYSTEM               Y
XS$NULL              Y
OJVMSYS              Y
ORACLE_OCM           Y
DIP                  Y
SYS                  Y
GSMCATUSER           Y

23 rows selected.

test:(SYS@test):PRIMARY> alter session set "_ORACLE_SCRIPT"=true;

Session altered.

test:(SYS@test):PRIMARY> create user ndemo identified by demo;

User created.

test:(SYS@test):PRIMARY> select username,oracle_maintained from dba_users order by 2;

USERNAME             O
-------------------- -
DEMO                 N
MDINH                N
TEST1                N
NEWDEMO              N
TESTING              N
GSMUSER              Y
SYSKM                Y
XS$NULL              Y
OJVMSYS              Y
APPQOSSYS            Y
ORACLE_OCM           Y
XDB                  Y
DBSNMP               Y
SYSDG                Y
DIP                  Y
OUTLN                Y
ANONYMOUS            Y
SYSBACKUP            Y
AUDSYS               Y
NDEMO                Y
GSMADMIN_INTERNAL    Y
GSMCATUSER           Y
SYS                  Y
SYSTEM               Y

24 rows selected.

test:(SYS@test):PRIMARY>

Multiple instances of processes resulting in deadlock

Tom Kyte - Sun, 2016-08-28 06:06
Hi Tom, I have gone through the deadlock questions posted on your website. Thanks for the answers you have shared. However I have a scenario where multiple instances of a process are running and causing a deadlock. 1. Design of proc...
Categories: DBA Blogs

How to Insert Random PKs from Child Tables Into Parent Table's FKs

Tom Kyte - Sun, 2016-08-28 06:06
How can I randomly generate and insert an "x" number of rows into a parent table (x being independent of the size of the parent and children tables), where it (the parent table) might have multiple FK columns from two or more child tables? For ins...
Categories: DBA Blogs

SQL questions involving combinations

Tom Kyte - Sun, 2016-08-28 06:06
Dear Tom... I have two questions that are somewhat related to each other in concept: 1) Consider the following table which represents a state history per each person identified by PERSON_ID: SQL> CREATE TABLE status_history 2 (per...
Categories: DBA Blogs

Oracle JET Hybrid Receiving Mobile Cloud Service Notification

Andrejus Baranovski - Sun, 2016-08-28 01:50
Oracle JET Hybrid enabled with the Cordova Push plugin can receive notifications sent from Oracle Mobile Cloud Service (MCS). You should read how to set up the infrastructure in this article - Your first Push notification based Oracle JET Hybrid application!. I will focus on the implementation steps and provide a working sample app for download.

The sample JET Hybrid application provides login functionality. During login we register the device ID with the MCS service, which allows the device to receive notifications:

After login is completed, the default dashboard page is displayed. At this point the application is ready to receive notifications (even if the application is closed or the mobile device screen is locked):

Notifications from MCS can be sent programmatically from an API implementation. I will show this in the next posts. For now we can use the MCS UI to test whether notifications are working. The Mobile Backend provides functionality to send notifications. I can target a notification at a specific user:

If the mobile device registration was successful during login in JET Hybrid, MCS will send the notification and you should see a confirmation message (if there is an error, it means the registration from Oracle JET Hybrid was unsuccessful):

In a few seconds you should see the notification received and displayed on the notifications screen:

Select the notification and it will navigate directly to the app and pass the notification message. This allows the app to parse it and take a certain action, for example refresh data, etc.:

To verify that the registration with MCS was done correctly during login, you can check the logs in the Mobile Backend. There must be a REST POST /register call logged, just after GET /login:

Sending a notification would not work if there is no Client defined in the MCS Backend. You can read more about this in the article mentioned above. Make sure to specify all keys correctly, pointing to Google Cloud Messaging (if you are targeting Android):

Let's take a look at the implementation on the JET Hybrid side. The login success callback invokes a method to register the device to receive push notifications from MCS:

Make sure the Cordova Push plugin is installed into your app, otherwise PushNotification will be undefined. It gets a registration ID from Google and makes another call to handshake with MCS, as in the sketch below:
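
The code itself was shown as screenshots in the original post; since those images are missing here, the following is a minimal sketch of the registration flow using the phonegap-plugin-push API (the senderID value and the registerWithMCS wrapper name are assumptions, not from the post):

// Initialize the push plugin; without it, PushNotification is undefined.
var push = PushNotification.init({
    android: { senderID: "YOUR_GCM_SENDER_ID" } // assumption: your GCM project number
});

push.on('registration', function (data) {
    // data.registrationId is the device token issued by Google;
    // pass it to MCS so this device can be targeted.
    registerWithMCS(data.registrationId); // hypothetical wrapper around the MCS SDK call
});

push.on('notification', function (data) {
    // data.message carries the payload sent from MCS.
    console.log('Notification received: ' + data.message);
});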

I'm invoking a wrapper method, which in turn has access to the MCS SDK module in my JET app:

The MCS SDK method registerForNotifications is invoked, where a REST POST call is made to register the device with MCS and allow it to receive notifications:

Download the sample application (this contains only the JET implementation JS/HTML; you should copy it into the root src folder of your JET Hybrid app) - jet_mcs_notifications.zip.

ORACLE_MAINTAINED Objects Don’t Export

Michael Dinh - Sun, 2016-08-28 00:03

What a disaster this is with 12c full database export and ORACLE_MAINTAINED objects.

Somehow, Oracle thinks MDINH is an ORACLE_MAINTAINED user, and so user MDINH does not get exported.

———

Why Can an Object Not Be Exported? Expdp of SYSTEM User’s Table Returns ORA-39166 or ORA-31655 (Doc ID 2114233.1)

There are certain system generated schemas that are not exportable using exp or expdp because they contain Oracle-managed data and metadata. SYS, MDSYS, and ORDSYS are some examples.

———

From my blog post To CDB or NOT To CDB (ORACLE_MAINTAINED Column), DEMO was an ORACLE_MAINTAINED user as well.

For testing purposes, I dropped and recreated the user to test the export, which removed the ORACLE_MAINTAINED flag.

Still, this does not explain how MDINH came to be an ORACLE_MAINTAINED user.

Has anyone experienced this, and do you have a solution or workaround?

test:(SYS@test):PRIMARY> select username,oracle_maintained from dba_users order by 1;

USERNAME             O
-------------------- -
ANONYMOUS            Y
APPQOSSYS            Y
AUDSYS               Y
DBSNMP               Y
DEMO                 N
DIP                  Y
GSMADMIN_INTERNAL    Y
GSMCATUSER           Y
GSMUSER              Y
MDINH                Y
OJVMSYS              Y
ORACLE_OCM           Y
OUTLN                Y
SYS                  Y
SYSBACKUP            Y
SYSDG                Y
SYSKM                Y
SYSTEM               Y
TEST1                N
TESTING              N
XDB                  Y
XS$NULL              Y

22 rows selected.

test:(SYS@test):PRIMARY>

++++++++++

$ expdp parfile=expdp_full.par

Export: Release 12.1.0.2.0 - Production on Sat Aug 27 21:38:15 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/******** parfile=expdp_full.par
Startup took 1 seconds
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
     Estimated 1 TABLE_DATA objects in 2 seconds
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
     Estimated 16 TABLE_DATA objects in 0 seconds
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
     Estimated 9 TABLE_DATA objects in 0 seconds
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
     Estimated 0 TABLE_DATA objects in 1 seconds
Total estimation using BLOCKS method: 2.125 MB
Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
     Completed 1 MARKER objects in 0 seconds
Processing object type DATABASE_EXPORT/PRE_INSTANCE_IMPCALLOUT/MARKER
     Completed 1 MARKER objects in 0 seconds
Processing object type DATABASE_EXPORT/TABLESPACE
     Completed 4 TABLESPACE objects in 0 seconds
Processing object type DATABASE_EXPORT/PROFILE
     Completed 1 PROFILE objects in 0 seconds
Processing object type DATABASE_EXPORT/SYS_USER/USER
     Completed 1 USER objects in 0 seconds

*** There are 4 users but only 3 are exported.
Processing object type DATABASE_EXPORT/SCHEMA/USER
     Completed 3 USER objects in 0 seconds

Processing object type DATABASE_EXPORT/RADM_FPTM
     Completed 1 RADM_FPTM objects in 0 seconds
Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
     Completed 6 PROC_SYSTEM_GRANT objects in 0 seconds
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
     Completed 2 SYSTEM_GRANT objects in 0 seconds
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
     Completed 4 ROLE_GRANT objects in 0 seconds
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
     Completed 6 DEFAULT_ROLE objects in 0 seconds
Processing object type DATABASE_EXPORT/SCHEMA/ON_USER_GRANT
     Completed 7 ON_USER_GRANT objects in 0 seconds
Processing object type DATABASE_EXPORT/RESOURCE_COST
     Completed 1 RESOURCE_COST objects in 0 seconds
Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
     Completed 1 TRUSTED_DB_LINK objects in 1 seconds
Processing object type DATABASE_EXPORT/SCHEMA/PUBLIC_SYNONYM/SYNONYM
     Completed 1 SYNONYM objects in 5 seconds
Processing object type DATABASE_EXPORT/SCHEMA/SYNONYM
     Completed 2 SYNONYM objects in 0 seconds
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
     Completed 3 PROCACT_SYSTEM objects in 1 seconds
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
     Completed 23 PROCOBJ objects in 5 seconds
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
     Completed 3 PROCACT_SYSTEM objects in 1 seconds
Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
     Completed 11 PROCACT_SCHEMA objects in 0 seconds
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE
     Completed 1 TABLE objects in 5 seconds
Processing object type DATABASE_EXPORT/EARLY_POST_INSTANCE_IMPCALLOUT/MARKER
     Completed 1 MARKER objects in 2 seconds
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE
     Completed 16 TABLE objects in 16 seconds
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE
     Completed 9 TABLE objects in 26 seconds
Processing object type DATABASE_EXPORT/NORMAL_POST_INSTANCE_IMPCALLOU/MARKER
     Completed 1 MARKER objects in 1 seconds
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
     Completed  MARKER objects in  seconds
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/COMMENT
     Completed  MARKER objects in  seconds
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
     Completed  MARKER objects in  seconds
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
     Completed  MARKER objects in  seconds
Processing object type DATABASE_EXPORT/FINAL_POST_INSTANCE_IMPCALLOUT/MARKER
     Completed 1 MARKER objects in 2 seconds
Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
     Completed 3 PROCACT_SCHEMA objects in 2 seconds
Processing object type DATABASE_EXPORT/AUDIT_UNIFIED/AUDIT_POLICY_ENABLE
     Completed 2 AUDIT_POLICY_ENABLE objects in 1 seconds
Processing object type DATABASE_EXPORT/POST_SYSTEM_IMPCALLOUT/MARKER
     Completed 1 MARKER objects in 0 seconds
ORA-31693: Table data object "SYS"."KU$_USER_MAPPING_VIEW" failed to load/unload and is being skipped due to error:
ORA-01031: insufficient privileges
. . exported "SYS"."AUD$"                                22.57 KB       0 rows in 0 seconds
. . exported "SYS"."DAM_CLEANUP_EVENTS$"                 7.187 KB       0 rows in 0 seconds
. . exported "SYS"."DAM_CLEANUP_JOBS$"                   7.171 KB       0 rows in 0 seconds
. . exported "SYS"."DAM_CONFIG_PARAM$"                   6.531 KB      14 rows in 0 seconds
. . exported "SYS"."TSDP_ASSOCIATION$"                   5.898 KB       0 rows in 0 seconds
. . exported "SYS"."TSDP_CONDITION$"                     5.890 KB       0 rows in 0 seconds
. . exported "SYS"."TSDP_FEATURE_POLICY$"                5.906 KB       0 rows in 0 seconds
. . exported "SYS"."TSDP_PARAMETER$"                     5.953 KB       1 rows in 0 seconds
. . exported "SYS"."TSDP_POLICY$"                        5.921 KB       1 rows in 0 seconds
. . exported "SYS"."TSDP_PROTECTION$"                    6.320 KB       0 rows in 0 seconds
. . exported "SYS"."TSDP_SENSITIVE_DATA$"                8.437 KB       0 rows in 0 seconds
. . exported "SYS"."TSDP_SENSITIVE_TYPE$"                6.320 KB       0 rows in 0 seconds
. . exported "SYS"."TSDP_SOURCE$"                        6.312 KB       0 rows in 0 seconds
. . exported "SYS"."TSDP_SUBPOL$"                        6.328 KB       1 rows in 0 seconds
. . exported "SYSTEM"."REDO_DB"                          25.59 KB       1 rows in 0 seconds
. . exported "SYSTEM"."REDO_LOG"                         26.34 KB       0 rows in 0 seconds
ORA-31693: Table data object "SYS"."FGA_LOG$FOR_EXPORT" failed to load/unload and is being skipped due to error:
ORA-01031: insufficient privileges
. . exported "SYSTEM"."SCHEDULER_JOB_ARGS"               8.390 KB       0 rows in 1 seconds
. . exported "SYSTEM"."SCHEDULER_PROGRAM_ARGS"           9.523 KB      12 rows in 0 seconds
ORA-31693: Table data object "SYS"."AUDTAB$TBS$FOR_EXPORT" failed to load/unload and is being skipped due to error:
ORA-01031: insufficient privileges
ORA-31693: Table data object "SYS"."DBA_SENSITIVE_DATA" failed to load/unload and is being skipped due to error:
ORA-01031: insufficient privileges
ORA-31693: Table data object "SYS"."DBA_TSDP_POLICY_PROTECTION" failed to load/unload and is being skipped due to error:
ORA-01031: insufficient privileges
ORA-31693: Table data object "SYS"."NACL$_ACE_EXP" failed to load/unload and is being skipped due to error:
ORA-01031: insufficient privileges
ORA-31693: Table data object "SYS"."NACL$_HOST_EXP" failed to load/unload and is being skipped due to error:
ORA-01031: insufficient privileges
ORA-31693: Table data object "SYS"."NACL$_WALLET_EXP" failed to load/unload and is being skipped due to error:
ORA-01031: insufficient privileges
     Completed 0 DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 0 seconds
     Completed 16 DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA objects in 0 seconds
     Completed 2 DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 1 seconds
     Completed 0 DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA objects in 0 seconds
Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
  /oradata/dpump/expdp_full.dmp
Job "SYSTEM"."SYS_EXPORT_FULL_01" completed with 8 error(s) at Sat Aug 27 21:39:50 2016 elapsed 0 00:01:34

++++++++++

$ impdp parfile=impdp_full.par

Import: Release 12.1.0.2.0 - Production on Sat Aug 27 21:43:12 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Startup took 0 seconds
Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_FULL_01":  /******** AS SYSDBA parfile=impdp_full.par
Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
     Completed 1 MARKER objects in 0 seconds
Processing object type DATABASE_EXPORT/PRE_INSTANCE_IMPCALLOUT/MARKER
     Completed 1 MARKER objects in 1 seconds
Processing object type DATABASE_EXPORT/TABLESPACE
ORA-31684: Object type TABLESPACE:"UNDOTBS1" already exists
ORA-31684: Object type TABLESPACE:"TEMP" already exists
ORA-31684: Object type TABLESPACE:"USERS" already exists
     Completed 4 TABLESPACE objects in 0 seconds
Processing object type DATABASE_EXPORT/PROFILE
     Completed 1 PROFILE objects in 0 seconds
Processing object type DATABASE_EXPORT/SYS_USER/USER
     Completed 1 USER objects in 0 seconds
Processing object type DATABASE_EXPORT/SCHEMA/USER
     Completed 3 USER objects in 0 seconds
Processing object type DATABASE_EXPORT/RADM_FPTM
     Completed 1 RADM_FPTM objects in 0 seconds
Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
     Completed 6 PROC_SYSTEM_GRANT objects in 2 seconds

Looks like MDINH was not exported.

Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
ORA-39083: Object type SYSTEM_GRANT failed to create with error:
ORA-01917: user or role 'MDINH' does not exist
Failing sql is:
GRANT CREATE SYNONYM TO "MDINH"
     Completed 2 SYSTEM_GRANT objects in 1 seconds
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
ORA-39083: Object type ROLE_GRANT failed to create with error:
ORA-01917: user or role 'MDINH' does not exist
Failing sql is:
 GRANT "RESOURCE" TO "MDINH"
ORA-39083: Object type ROLE_GRANT failed to create with error:
ORA-01917: user or role 'MDINH' does not exist
Failing sql is:
 GRANT "CONNECT" TO "MDINH"
     Completed 4 ROLE_GRANT objects in 0 seconds
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
ORA-39083: Object type DEFAULT_ROLE:"MDINH" failed to create with error:
ORA-01918: user 'MDINH' does not exist
Failing sql is:
 ALTER USER "MDINH" DEFAULT ROLE ALL
     Completed 6 DEFAULT_ROLE objects in 0 seconds
Processing object type DATABASE_EXPORT/SCHEMA/ON_USER_GRANT
ORA-39083: Object type ON_USER_GRANT failed to create with error:
ORA-31625: Schema MDINH is needed to import this object, but is unaccessible
ORA-01435: user does not exist
Failing sql is:
 GRANT INHERIT PRIVILEGES ON USER "MDINH" TO "PUBLIC"
     Completed 7 ON_USER_GRANT objects in 0 seconds
Processing object type DATABASE_EXPORT/RESOURCE_COST
     Completed 1 RESOURCE_COST objects in 0 seconds
Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
     Completed 1 TRUSTED_DB_LINK objects in 0 seconds
Processing object type DATABASE_EXPORT/SCHEMA/PUBLIC_SYNONYM/SYNONYM
ORA-31684: Object type SYNONYM:"PUBLIC"."MORE_RECS_TBL" already exists
     Completed 1 SYNONYM objects in 0 seconds
Processing object type DATABASE_EXPORT/SCHEMA/SYNONYM
ORA-39083: Object type SYNONYM:"MDINH"."TCUSTMER" failed to create with error:
ORA-01917: user or role '' does not exist
Failing sql is:
CREATE EDITIONABLE SYNONYM "MDINH"."TCUSTMER" FOR "DEMO"."TCUSTMER"
ORA-39083: Object type SYNONYM:"MDINH"."TCUSTORD" failed to create with error:
ORA-01917: user or role '' does not exist
Failing sql is:
CREATE EDITIONABLE SYNONYM "MDINH"."TCUSTORD" FOR "DEMO"."TCUSTMER"
     Completed 2 SYNONYM objects in 0 seconds
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
>>> Cannot set an SCN larger than the current SCN. If a Streams Capture configuration was imported then the Apply that processes the captured messages needs to be dropped and recreated. See My Oracle Support article number 1380295.1.
     Completed 3 PROCACT_SYSTEM objects in 1 seconds
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
     Completed 23 PROCOBJ objects in 1 seconds
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
     Completed 3 PROCACT_SYSTEM objects in 25 seconds
Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
ORA-39083: Object type PROCACT_SCHEMA failed to create with error:
ORA-31625: Schema MDINH is needed to import this object, but is unaccessible
ORA-01435: user does not exist
Failing sql is:
BEGIN
sys.dbms_logrep_imp.instantiate_schema(schema_name=>SYS_CONTEXT('USERENV','CURRENT_SCHEMA'), export_db_name=>'CI2', inst_scn=>'806486');COMMIT; END;
     Completed 11 PROCACT_SCHEMA objects in 1 seconds
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE
     Completed 1 TABLE objects in 0 seconds
Processing object type DATABASE_EXPORT/EARLY_POST_INSTANCE_IMPCALLOUT/MARKER
     Completed 1 MARKER objects in 0 seconds
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE
     Completed 16 TABLE objects in 1 seconds
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
. . imported "SYS"."AMGT$DP$AUD$"                        22.57 KB       0 rows in 0 seconds
. . imported "SYS"."AMGT$DP$DAM_CLEANUP_EVENTS$"         7.187 KB       0 rows in 0 seconds
. . imported "SYS"."AMGT$DP$DAM_CLEANUP_JOBS$"           7.171 KB       0 rows in 0 seconds
. . imported "SYS"."AMGT$DP$DAM_CONFIG_PARAM$"           6.531 KB      14 rows in 0 seconds
. . imported "SYS"."DP$TSDP_ASSOCIATION$"                5.898 KB       0 rows in 0 seconds
. . imported "SYS"."DP$TSDP_CONDITION$"                  5.890 KB       0 rows in 0 seconds
. . imported "SYS"."DP$TSDP_FEATURE_POLICY$"             5.906 KB       0 rows in 0 seconds
. . imported "SYS"."DP$TSDP_PARAMETER$"                  5.953 KB       1 rows in 0 seconds
. . imported "SYS"."DP$TSDP_POLICY$"                     5.921 KB       1 rows in 0 seconds
. . imported "SYS"."DP$TSDP_PROTECTION$"                 6.320 KB       0 rows in 0 seconds
. . imported "SYS"."DP$TSDP_SENSITIVE_DATA$"             8.437 KB       0 rows in 0 seconds
. . imported "SYS"."DP$TSDP_SENSITIVE_TYPE$"             6.320 KB       0 rows in 0 seconds
. . imported "SYS"."DP$TSDP_SOURCE$"                     6.312 KB       0 rows in 0 seconds
. . imported "SYS"."DP$TSDP_SUBPOL$"                     6.328 KB       1 rows in 0 seconds
. . imported "SYSTEM"."REDO_DB_TMP"                      25.59 KB       1 rows in 0 seconds
. . imported "SYSTEM"."REDO_LOG_TMP"                     26.34 KB       0 rows in 0 seconds
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE
ORA-39342: Internal error - failed to import internal objects tagged with SCHEDULER due to ORA-00910: specified length too long for its datatype.
ORA-39083: Object type TABLE:"SYSTEM"."SCHEDULER_PROGRAM_ARGS_TMP" failed to create with error:
ORA-00910: specified length too long for its datatype
Failing sql is:
CREATE TABLE "SYSTEM"."SCHEDULER_PROGRAM_ARGS_TMP" ("OWNER" VARCHAR2(128 BYTE) NOT NULL ENABLE, "PROGRAM_NAME" VARCHAR2(128 BYTE) NOT NULL ENABLE, "ARGUMENT_NAME" VARCHAR2(128 BYTE), "ARGUMENT_POSITION" NUMBER NOT NULL ENABLE, "ARGUMENT_TYPE" VARCHAR2(257 BYTE), "METADATA_ATTRIBUTE" VARCHAR2(19 BYTE), "DEFAULT_VALUE" VARCHAR2(32767 BYTE), "DEFA
ORA-39083: Object type TABLE:"SYSTEM"."SCHEDULER_JOB_ARGS_TMP" failed to create with error:
ORA-00910: specified length too long for its datatype
Failing sql is:
CREATE TABLE "SYSTEM"."SCHEDULER_JOB_ARGS_TMP" ("OWNER" VARCHAR2(128 BYTE), "JOB_NAME" VARCHAR2(128 BYTE), "ARGUMENT_NAME" VARCHAR2(128 BYTE), "ARGUMENT_POSITION" NUMBER, "ARGUMENT_TYPE" VARCHAR2(257 BYTE), "VALUE" VARCHAR2(32767 BYTE), "ANYDATA_VALUE" "SYS"."ANYDATA" , "OUT_ARGUMENT" VARCHAR2(5 BYTE)) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
     Completed 9 TABLE objects in 0 seconds
Processing object type DATABASE_EXPORT/NORMAL_POST_INSTANCE_IMPCALLOU/MARKER
     Completed 1 MARKER objects in 1 seconds
Processing object type DATABASE_EXPORT/FINAL_POST_INSTANCE_IMPCALLOUT/MARKER
ORA-39342: Internal error - failed to import internal objects tagged with AUDIT_TRAILS due to ORA-01403: no data found
ORA-01403: no data found.
     Completed 1 MARKER objects in 1 seconds
Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
     Completed 3 PROCACT_SCHEMA objects in 0 seconds
Processing object type DATABASE_EXPORT/AUDIT_UNIFIED/AUDIT_POLICY_ENABLE
     Completed 2 AUDIT_POLICY_ENABLE objects in 1 seconds
Processing object type DATABASE_EXPORT/POST_SYSTEM_IMPCALLOUT/MARKER
     Completed 1 MARKER objects in 0 seconds
     Completed 16 DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA objects in 1 seconds
Job "SYS"."SYS_IMPORT_FULL_01" completed with 16 error(s) at Sat Aug 27 21:43:55 2016 elapsed 0 00:00:41

ODA X6 command line and Web Console

Yann Neuhaus - Sat, 2016-08-27 14:33

The ODA X6 comes with a new command line (odacli) which replaces oakcli, and with a small web console which can display information about the appliance, the databases and the provisioning jobs. It can also create a database, but that is for the next blog post. In this post I'll show what information is displayed once the ODA is installed.

The examples here come from ODA X6 version: 12.1.2.7.0

Appliance

The first screen shows the appliance information, i.e. the values that you defined when installing the ODA:

The same information can be displayed from command line with odacli describe-appliance:

[root@odax6m ~]# odacli describe-appliance
 
Appliance Information
----------------------------------------------------------------
ID: bb8f0eec-0f5c-4319-bade-75770848b923
Platform: OdaliteM
Data Disk Count: 2
CPU Core Count: 20
Created: Aug 26, 2016 2:51:26 PM
 
System Information
----------------------------------------------------------------
Name: odax6m
Domain Name: oracle.democenter.arrowcs.ch
Time Zone: Europe/Zurich
DB Edition: EE
DNS Servers: 172.22.1.9
NTP Servers: 172.22.1.9
 
Disk Group Information
----------------------------------------------------------------
DG Name Redundancy Percentage
------------------------- ------------------------- ------------
Data Normal 75
Reco Normal 25

An important thing to note here is that the choice between Standard Edition and Enterprise Edition is at appliance level: you cannot mix.
There’s also no mention of virtualization because ODA X6 2S and 2M are only bare-metal.

odacli list-networks

[root@odax6m ~]# odacli list-networks
 
ID Name NIC IP Address Subnet Mask Gateway
---------------------------------------- -------------------- ---------- ------------------ ------------------ ------------------
ffcf7d89-8074-4342-9f19-5e72ed695ce7 Private-network priv0 192.168.16.24 255.255.255.240
71a422bc-39d3-483c-b79b-ffe25129dfd2 Public-network btbond1 172.22.1.23 255.255.255.224 172.22.1.2

I’ve no Auto Service Request configured here:
[root@odax6m ~]# odacli describe-asr
Aug 27, 2016 8:56:33 PM com.oracle.oda.dcscli.commands.AsrCommand$getAsr run
SEVERE: No asr found

Databases

The second screen is about the databases:

From command line you have information about the ORACLE_HOMEs and databases.

[root@odax6m ~]# odacli list-dbhomes
 
ID Name DB Version Home Location
---------------------------------------- -------------------- ---------- ---------------------------------------------
67419075-d1f9-4c2e-85b1-c74430e35120 OraDB12102_home1 12.1.0.2 /u01/app/oracle/product/12.1.0.2/dbhome_1
cf76a90b-f9e3-44b2-9b43-56111c1785e4 OraDB12102_home2 12.1.0.2 /u01/app/oracle/product/12.1.0.2/dbhome_2
adcbe8bf-f26f-4ab7-98a1-0abcd4412305 OraDB11204_home1 11.2.0.4 /u01/app/oracle/product/11.2.0.4/dbhome_1

[root@odax6m ~]# odacli list-databases
 
ID DB Name DB Version CDB Class Shape Storage Status
---------------------------------------- ---------- ---------- ---------- -------- -------- ---------- ----------
4c182ffb-3e4a-45c0-a6c6-15d5e9b7b2b9 dbee1 12.1.0.2 false OLTP odb4 ACFS Configured
5564ea51-fc93-46f2-9188-c13c23caba94 odb1s 12.1.0.2 true OLTP odb1s ACFS Configured
26c2213d-5992-4b2b-94b0-2d0f4d0f9c2d dbee11g1 11.2.0.4 false OLTP odb2 ACFS Configured

You can get more detail about one database:

[root@odax6m ~]# odacli describe-dbhome -i 67419075-d1f9-4c2e-85b1-c74430e35120
 
DB Home details
----------------------------------------------------------------
ID: 67419075-d1f9-4c2e-85b1-c74430e35120
Name: OraDB12102_home1
Version: 12.1.0.2
Home Location: /u01/app/oracle/product/12.1.0.2/dbhome_1
Created: Aug 26, 2016 2:51:26 PM

[root@odax6m ~]# odacli describe-database -i 4c182ffb-3e4a-45c0-a6c6-15d5e9b7b2b9
 
Database details
----------------------------------------------------------------
ID: 4c182ffb-3e4a-45c0-a6c6-15d5e9b7b2b9
Description: dbee1
DB Name: dbee1
DB Version: 12.1.0.2
DBID: 2933563624
CDB: false
PDB Name:
PDB Admin User Name:
Class: OLTP
Shape: odb4
Storage: ACFS
CharacterSet: DbCharacterSet(characterSet=AL32UTF8, nlsCharacterset=AL16UTF16, dbTerritory=AMERICA, dbLanguage=AMERICAN)
Home ID: 67419075-d1f9-4c2e-85b1-c74430e35120
Console Enabled: false
Created: Aug 26, 2016 2:51:26 PM

Activity

Here is the log of what has been done on the ODA:

[root@odax6m ~]# odacli list-jobs
 
ID Description Created Status
---------------------------------------- ------------------------------ ------------------------- ----------
1b99d278-6ab4-4ead-a5f8-f112c74a8f97 Provisioning service creation Aug 26, 2016 2:51:26 PM Success
f0ac9a2c-ba37-412c-8a81-9cc7cb301417 Database service creation with db name: odb1s Aug 26, 2016 4:03:39 PM Success
dec37817-feb7-46e5-b991-b23362268cb1 Database service creation with db name: dbee11g1 Aug 26, 2016 5:09:33 PM Success

And we have more info about the steps executed for one job:

Same in command line:

[root@odax6m ~]# odacli describe-job -i 1b99d278-6ab4-4ead-a5f8-f112c74a8f97
 
Job details
----------------------------------------------------------------
ID: 1b99d278-6ab4-4ead-a5f8-f112c74a8f97
Description: Provisioning service creation
Status: Success
Created: 26.08.2016 14:51:26
Message:
 
Task Name Start Time End Time Status
---------------------------------------- ------------------------- ------------------------- ----------
Setting up Network Aug 26, 2016 2:51:27 PM Aug 26, 2016 2:51:27 PM Success
Creating group :asmdba Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating group :asmoper Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating group :asmadmin Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating group :dba Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating group :dbaoper Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating group :oinstall Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating user :grid Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Creating user :oracle Aug 26, 2016 2:51:38 PM Aug 26, 2016 2:51:38 PM Success
Setting up ssh equivalance Aug 26, 2016 2:51:39 PM Aug 26, 2016 2:51:39 PM Success
Gi Home creation Aug 26, 2016 2:54:49 PM Aug 26, 2016 2:57:54 PM Success
Creating GI home directories Aug 26, 2016 2:54:49 PM Aug 26, 2016 2:54:49 PM Success
Cloning Gi home Aug 26, 2016 2:54:49 PM Aug 26, 2016 2:57:54 PM Success
GI stack creation Aug 26, 2016 2:57:54 PM Aug 26, 2016 3:08:44 PM Success
Configuring GI Aug 26, 2016 2:57:54 PM Aug 26, 2016 2:58:21 PM Success
Running GI root scripts Aug 26, 2016 2:58:21 PM Aug 26, 2016 3:05:03 PM Success
Running GI config assistants Aug 26, 2016 3:05:03 PM Aug 26, 2016 3:05:34 PM Success
Creating RECO disk group Aug 26, 2016 3:10:02 PM Aug 26, 2016 3:10:16 PM Success
Creating volume reco Aug 26, 2016 3:10:16 PM Aug 26, 2016 3:10:26 PM Success
Creating volume datdbee1 Aug 26, 2016 3:10:26 PM Aug 26, 2016 3:10:35 PM Success
Creating ACFS filesystem for RECO Aug 26, 2016 3:10:35 PM Aug 26, 2016 3:10:42 PM Success
Creating ACFS filesystem for DATA Aug 26, 2016 3:10:42 PM Aug 26, 2016 3:10:49 PM Success
Db Home creation Aug 26, 2016 3:10:49 PM Aug 26, 2016 3:13:40 PM Success
Creating DbHome Directory Aug 26, 2016 3:10:49 PM Aug 26, 2016 3:10:49 PM Success
Extract DB clones Aug 26, 2016 3:10:49 PM Aug 26, 2016 3:12:29 PM Success
Enable DB options Aug 26, 2016 3:12:29 PM Aug 26, 2016 3:12:38 PM Success
Clone Db home Aug 26, 2016 3:12:38 PM Aug 26, 2016 3:13:37 PM Success
Run Root DB scripts Aug 26, 2016 3:13:37 PM Aug 26, 2016 3:13:37 PM Success
Database Service creation Aug 26, 2016 3:13:40 PM Aug 26, 2016 3:19:43 PM Success
Database Creation Aug 26, 2016 3:13:40 PM Aug 26, 2016 3:17:58 PM Success
Running DataPatch Aug 26, 2016 3:18:33 PM Aug 26, 2016 3:19:43 PM Success
create Users tablespace Aug 26, 2016 3:19:43 PM Aug 26, 2016 3:19:46 PM Success

Yes, this is the ODA installation. Half an hour to set up the OS, install Grid Infrastructure, set up the storage and create a first database.

Refresh

Note that the screens are not refreshed automatically, even when you navigate through them. Don’t forget the ‘Refresh’ button (circular arrow) at the top right.

JSON

You can also build your own interface from the JSON format:

[root@odax6m ~]# odacli list-jobs -j
[ {
"jobId" : "1b99d278-6ab4-4ead-a5f8-f112c74a8f97",
"status" : "Success",
"message" : null,
"createTimestamp" : 1472215886601,
"description" : "Provisioning service creation"
}, {
"jobId" : "f0ac9a2c-ba37-412c-8a81-9cc7cb301417",
"status" : "Success",
"message" : null,
"createTimestamp" : 1472220219016,
"description" : "Database service creation with db name: odb1s"
}, {
"jobId" : "dec37817-feb7-46e5-b991-b23362268cb1",
"status" : "Success",
"message" : null,
"createTimestamp" : 1472224173747,
"description" : "Database service creation with db name: dbee11g1"
} ]
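
Since every listing command accepts -j, scripting against the appliance is straightforward; for example, with jq installed (a sketch, not from the original post):

[root@odax6m ~]# odacli list-jobs -j | jq -r '.[] | [.jobId, .status, .description] | @tsv'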

So what?

ODA is for easy and fast provisioning, and the GUI that was missing is finally there. Of course, it looks very simple, but that's the goal of the appliance: quickly set up a standardized environment. The ODA X6-2S is cheap and has good performance for small databases. You may find equivalent hardware, but can you build and install stable hardware, an OS and a database in 30 minutes?

This article ODA X6 command line and Web Console appeared first on Blog dbi services.

To delete specific line from file using oracle

Tom Kyte - Sat, 2016-08-27 11:46
Hi Tom, is there any way to delete a specific line from a file using Oracle? Actually I have around 500 files, and I need to perform the same action on all the files. Is there any process?
Categories: DBA Blogs

Table join vs Rank()Over()

Tom Kyte - Sat, 2016-08-27 11:46
Hi! Recently I came across the task of getting the newest address for customers from an ORACLE JD Edwards database. In JDE, table F0101 is the Address Book Master and table F0116 is Address by Date. I got two solutions: one using joins: SELE...
Categories: DBA Blogs

Index rebuild

Tom Kyte - Sat, 2016-08-27 11:46
Hi team, I want to know the view in Oracle with the help of which we can check when an index needs to be rebuilt, like leaf nodes, deleted nodes, etc. So, could you please let me know the exact view or query with the help of which we can...
Categories: DBA Blogs

storing and indexing prefixed values

Tom Kyte - Sat, 2016-08-27 11:46
Hi Tom, We've got a table that'll hold about 1M rows over the course of its lifetime. We need to add a column that the business want to search for, so indexing it seems like a good idea. Column is - a 10-character string, - mostly unique (...
Categories: DBA Blogs

Difference between DBMS and UTL packages

Tom Kyte - Sat, 2016-08-27 11:46
Hello, There are around 13 UTL packages for accomplishing many purposes in Oracle database, and their package name starts as UTL_(some text). For eg., UTL_SMTP, UTL_TCP, etc. I was just wondering why we have the name of these packages as UTL_(s...
Categories: DBA Blogs

To CDB or NOT To CDB (ORACLE_MAINTAINED Column)

Michael Dinh - Sat, 2016-08-27 09:26

I recently discovered the ORACLE_MAINTAINED column added to the DBA views and got excited over nothing.

ORACLE_MAINTAINED from DBA_ROLES
Denotes whether the role was created, and is maintained, by Oracle-supplied scripts (such as catalog.sql or catproc.sql).
A role for which this column has the value Y must not be changed in any way except by running an Oracle-supplied script.

It looks like ORACLE_MAINTAINED is only valid for CDBs, and it seems as if Oracle really wants all databases to go to CDB.

For a non-CDB, the ORACLE_MAINTAINED value is ‘Y’ even when the user or role is not Oracle-maintained.

test:(MDINH@test):PRIMARY> select name,cdb from v$database;

NAME                           CDB
------------------------------ ---
test                            NO

1 row selected.

test:(MDINH@test):PRIMARY> select username,oracle_maintained from dba_users order by 1;

USERNAME             O
-------------------- -
ANONYMOUS            Y
APPQOSSYS            Y
AUDSYS               Y
DBSNMP               Y
DEMO                 Y
DIP                  Y
GSMADMIN_INTERNAL    Y
GSMCATUSER           Y
GSMUSER              Y
MDINH                Y
OJVMSYS              Y
ORACLE_OCM           Y
OUTLN                Y
SYS                  Y
SYSBACKUP            Y
SYSDG                Y
SYSKM                Y
SYSTEM               Y
XDB                  Y
XS$NULL              Y

20 rows selected.

test:(MDINH@test):PRIMARY> select role,oracle_maintained from dba_roles order by 1;

ROLE                           O
------------------------------ -
ADM_PARALLEL_EXECUTE_TASK      Y
AQ_ADMINISTRATOR_ROLE          Y
AQ_USER_ROLE                   Y
AUDIT_ADMIN                    Y
AUDIT_VIEWER                   Y
AUTHENTICATEDUSER              Y
CAPTURE_ADMIN                  Y
CDB_DBA                        Y
CONNECT                        Y
DATAPUMP_EXP_FULL_DATABASE     Y
DATAPUMP_IMP_FULL_DATABASE     Y
DBA                            Y
DBFS_ROLE                      Y
DELETE_CATALOG_ROLE            Y
EJBCLIENT                      Y
EM_EXPRESS_ALL                 Y
EM_EXPRESS_BASIC               Y
EXECUTE_CATALOG_ROLE           Y
EXP_FULL_DATABASE              Y
GATHER_SYSTEM_STATISTICS       Y
GDS_CATALOG_SELECT             Y
GLOBAL_AQ_USER_ROLE            Y
GSMADMIN_ROLE                  Y
GSMUSER_ROLE                   Y
GSM_POOLADMIN_ROLE             Y
HS_ADMIN_EXECUTE_ROLE          Y
HS_ADMIN_ROLE                  Y
HS_ADMIN_SELECT_ROLE           Y
IMP_FULL_DATABASE              Y
JAVADEBUGPRIV                  Y
JAVAIDPRIV                     Y
JAVASYSPRIV                    Y
JAVAUSERPRIV                   Y
JAVA_ADMIN                     Y
JAVA_DEPLOY                    Y
JMXSERVER                      Y
LOGSTDBY_ADMINISTRATOR         Y
OEM_ADVISOR                    Y
OEM_MONITOR                    Y
OPTIMIZER_PROCESSING_RATE      Y
PDB_DBA                        Y
PROVISIONER                    Y
RECOVERY_CATALOG_OWNER         Y
RECOVERY_CATALOG_USER          Y
RESOURCE                       Y
SCHEDULER_ADMIN                Y
SELECT_CATALOG_ROLE            Y
TEST                           Y
XDBADMIN                       Y
XDB_SET_INVOKER                Y
XDB_WEBSERVICES                Y
XDB_WEBSERVICES_OVER_HTTP      Y
XDB_WEBSERVICES_WITH_PUBLIC    Y
XS_CACHE_ADMIN                 Y
XS_NAMESPACE_ADMIN             Y
XS_RESOURCE                    Y
XS_SESSION_ADMIN               Y

57 rows selected.

test:(MDINH@test):PRIMARY>

++++++++++

SYS@tmnt> select username,oracle_maintained from dba_users order by 1;

USERNAME             O
-------------------- -
ANONYMOUS            Y
APPQOSSYS            Y
AUDSYS               Y
C##GGS_ADMIN         N
C##TESTING           N
DBSNMP               Y
DIP                  Y
GSMADMIN_INTERNAL    Y
GSMCATUSER           Y
GSMUSER              Y
ORACLE_OCM           Y
OUTLN                Y
SYS                  Y
SYSBACKUP            Y
SYSDG                Y
SYSKM                Y
SYSTEM               Y
XDB                  Y
XS$NULL              Y

19 rows selected.

SYS@tmnt> create role test;
create role test
            *
ERROR at line 1:
ORA-65096: invalid common user or role name

SYS@tmnt> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 APRIL                          READ WRITE NO
         4 DONNIE                         MOUNTED
SYS@tmnt> alter session set container=april;

Session altered.

SYS@tmnt> create role test;

Role created.

SYS@tmnt> select role,oracle_maintained from dba_roles order by 1;

ROLE                           O
------------------------------ -
ADM_PARALLEL_EXECUTE_TASK      Y
AQ_ADMINISTRATOR_ROLE          Y
AQ_USER_ROLE                   Y
AUDIT_ADMIN                    Y
AUDIT_VIEWER                   Y
AUTHENTICATEDUSER              Y
CAPTURE_ADMIN                  Y
CDB_DBA                        Y
CONNECT                        Y
DATAPUMP_EXP_FULL_DATABASE     Y
DATAPUMP_IMP_FULL_DATABASE     Y
DBA                            Y
DBFS_ROLE                      Y
DELETE_CATALOG_ROLE            Y
EM_EXPRESS_ALL                 Y
EM_EXPRESS_BASIC               Y
EXECUTE_CATALOG_ROLE           Y
EXP_FULL_DATABASE              Y
GATHER_SYSTEM_STATISTICS       Y
GDS_CATALOG_SELECT             Y
GLOBAL_AQ_USER_ROLE            Y
GSMADMIN_ROLE                  Y
GSMUSER_ROLE                   Y
GSM_POOLADMIN_ROLE             Y
HS_ADMIN_EXECUTE_ROLE          Y
HS_ADMIN_ROLE                  Y
HS_ADMIN_SELECT_ROLE           Y
IMP_FULL_DATABASE              Y
LOGSTDBY_ADMINISTRATOR         Y
OEM_ADVISOR                    Y
OEM_MONITOR                    Y
OPTIMIZER_PROCESSING_RATE      Y
PDB_DBA                        Y
PROVISIONER                    Y
RECOVERY_CATALOG_OWNER         Y
RECOVERY_CATALOG_USER          Y
RESOURCE                       Y
SCHEDULER_ADMIN                Y
SELECT_CATALOG_ROLE            Y
TEST                           N
XDBADMIN                       Y
XDB_SET_INVOKER                Y
XDB_WEBSERVICES                Y
XDB_WEBSERVICES_OVER_HTTP      Y
XDB_WEBSERVICES_WITH_PUBLIC    Y
XS_CACHE_ADMIN                 Y
XS_NAMESPACE_ADMIN             Y
XS_RESOURCE                    Y
XS_SESSION_ADMIN               Y

49 rows selected.

SYS@tmnt>

Isolation Level Serialization

Tom Kyte - Fri, 2016-08-26 17:26
Hi, We have two sessions: Setup Data: insert into table1 values(table1_seq.nextval, 'A', 'AA'); insert into table1 values(table1_seq.nextval, 'B', 'BB'); commit; Session 1: SET TRANSACTION ISOLATION LEVEL SERIALIZABLE; select * from ta...
Categories: DBA Blogs

Conditionally join (left or inner) two table

Tom Kyte - Fri, 2016-08-26 17:26
Hi! First of all thanks for taking time to read and answer my question. So, I have two sets of rows which need to be joined. In some cases it should be a left outer join, but in the rest it should be inner. Is it possible to make such a query? Small ...
Categories: DBA Blogs
