Feed aggregator

Index Sanity

Jonathan Lewis - Tue, 2016-06-28 02:43

By popular demand (well, one person emailed me to ask for it) I’m going to publish the source code for a little demo I’ve been giving since the beginning of the millennium – it concerns indexes and the potential side effects that you can get when you drop an index that you’re not using. I think I’ve mentioned the effect several times in the history of this blog, but I can’t find an explicit piece of demo code, so here it is – starting at the conclusion – as a cut and paste from an SQL*Plus session running against an 11g instance:


SQL> set autotrace traceonly explain
select
        t1.small_vc, t2.small_vc, t3.small_vc
from
        t1, t2, t3
where
        t1.n1 between 40 and 50
and     t2.id1 = t1.id1
and     t2.ind_pad = t1.ind_pad
and     t2.id2 = t1.id2
and     t3.id = t1.id1
 11  ;

Execution Plan
----------------------------------------------------------
Plan hash value: 1184213596

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |   484 | 64856 |   227   (2)| 00:00:02 |
|*  1 |  HASH JOIN          |      |   484 | 64856 |   227   (2)| 00:00:02 |
|*  2 |   HASH JOIN         |      |   484 | 57596 |    14   (0)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| T2   |    20 |  1160 |     4   (0)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| T1   |   484 | 29524 |    10   (0)| 00:00:01 |
|   5 |   TABLE ACCESS FULL | T3   |  5000 | 75000 |   213   (2)| 00:00:02 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T3"."ID"="T1"."ID1")
   2 - access("T2"."ID1"="T1"."ID1" AND "T2"."IND_PAD"="T1"."IND_PAD"
              AND "T2"."ID2"="T1"."ID2")
   4 - filter("T1"."N1"<=50 AND "T1"."N1">=40)

SQL> drop index t2_i1;

Index dropped.

select
        t1.small_vc, t2.small_vc, t3.small_vc
from
        t1, t2, t3
where
        t1.n1 between 40 and 50
and     t2.id1 = t1.id1
and     t2.ind_pad = t1.ind_pad
and     t2.id2 = t1.id2
and     t3.id = t1.id1
 11  ;

Execution Plan
----------------------------------------------------------
Plan hash value: 2290830436

--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |    52 |  6968 |    67   (2)| 00:00:01 |
|   1 |  NESTED LOOPS                |       |    52 |  6968 |    67   (2)| 00:00:01 |
|   2 |   NESTED LOOPS               |       |    52 |  6968 |    67   (2)| 00:00:01 |
|*  3 |    HASH JOIN                 |       |    52 |  6188 |    14   (0)| 00:00:01 |
|   4 |     TABLE ACCESS FULL        | T2    |    20 |  1160 |     4   (0)| 00:00:01 |
|*  5 |     TABLE ACCESS FULL        | T1    |   484 | 29524 |    10   (0)| 00:00:01 |
|*  6 |    INDEX UNIQUE SCAN         | T3_PK |     1 |       |     0   (0)| 00:00:01 |
|   7 |   TABLE ACCESS BY INDEX ROWID| T3    |     1 |    15 |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("T2"."ID1"="T1"."ID1" AND "T2"."IND_PAD"="T1"."IND_PAD" AND
              "T2"."ID2"="T1"."ID2")
   5 - filter("T1"."N1"<=50 AND "T1"."N1">=40)
   6 - access("T3"."ID"="T1"."ID1")

Starting from the top – I’ve enabled autotrace which, technically, could mean that the plans are not the ones I’d see at run-time, but you can take my word for it that in 11g they are the run-time plans; then I’ve supplied a query that produces a plan with 3 full tablescans, two hash joins, and no index usage at all.

You’ll notice at operation 3 of the plan that table t2 is very small – only 20 rows selected, with no predicates that could have filtered that result down from a large table (take my word for it that the stats have just been collected), so, as the ancient mythology would have it, we don’t really need an index on that table (a quick check tells me that the index wasn’t there to enforce uniqueness). Immediately after the first execution plan you can see that I’ve dropped an index called t2_i1 – trust me, that IS the index on table t2.

We “run” the original query again; it gets re-optimised (and there’s no question of cardinality feedback or any other feature coming into play) and we get a different plan.

Dropping, or adding, a multi-column index on a table can change an execution plan – even if the index is not present in the plan.

The reason for this is the “index sanity check”. When the optimizer is doing its cardinality estimates, if it sees equality conditions on the set of columns that make up an index it can use the distinct_keys statistic from the index in the calculation rather than using the standard calculation of multiplying together the num_distinct of the separate columns. In earlier versions of Oracle there were some restrictions relating to uniqueness, but the limitations were removed in 11.1.0.7.

In my case there were 10 distinct values for id1, just one value for ind_pad, and 20 distinct values for id2 – but a total of only 20 distinct values for the combination. With an index in place on the combination the optimizer used the value 20 in its calculation, in the absence of the index it used the value 200 – that factor of 10 led to a drop in the join cardinality estimate from 484 rows to 52 rows – at which point the optimizer calculations made the next step in the plan change from a hash join to a nested loop join.
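If you want to see where that factor of 10 comes from in your own copy of the demo, a quick check (a minimal sketch, assuming you’ve built the schema with the script below and gathered stats) is to compare the index’s distinct_keys with the num_distinct of its individual columns:


select  index_name, distinct_keys
from    user_indexes
where   table_name = 'T2'
;

select  column_name, num_distinct
from    user_tab_col_statistics
where   table_name = 'T2'
and     column_name in ('ID1','IND_PAD','ID2')
;

With the index in place the optimizer can use the distinct_keys figure (20); without it the best it can do is multiply the three num_distinct figures together (10 * 1 * 20 = 200).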

If you want to reproduce the demo, here’s the full script – the data isn’t a realistic data set, and I’ve had to use various non-standard settings to make the script as repeatable as possible – I’ve built the data set in a tablespace using an 8KB block size, 1MB uniform extents and manual (freelist) segment space management.


rem
rem     Script:         index_sanity.sql
rem     Author:         Jonathan Lewis
rem

drop table t3;
drop table t2;
drop table t1;

execute dbms_random.seed(0);

begin   
        begin           execute immediate 'purge recyclebin';
        exception       when others then null;
        end; 

        begin
                dbms_stats.set_system_stats('MBRC',16);
                dbms_stats.set_system_stats('MREADTIM',10);
                dbms_stats.set_system_stats('SREADTIM',5);
                dbms_stats.set_system_stats('CPUSPEED',1000);
        exception
                when others then null;
        end;

end;
/

create table t1
as
select
        mod(rownum,10)          id1,
        mod(rownum,20)          id2,
        rpad('x',40,'x')        ind_pad,
        mod(rownum,100)         n1,
        lpad(rownum,10,'0')     small_vc,
        rpad('x',50)            padding
from
        all_objects
where
        rownum  <= 4000
;

create table t2 
pctfree 99
pctused 1
as
select
        mod(rownum,10)          id1,
        mod(rownum,20)          id2,
        rpad('x',40,'x')        ind_pad,
        mod(rownum,100)         n1, 
        lpad(rownum,10,'0')     small_vc,
        rpad('x',200)           padding
from
        all_objects
where
        rownum <= 20
;

create table t3
pctfree 95
pctused 1
as
select
        rownum          id,
        rpad(rownum,10) small_vc,
        rpad('x',100)   padding
from
        all_objects
where
        rownum <= 5000
;
begin
        dbms_stats.gather_table_stats(
                ownname => user,
                tabname => 'T1',
                method_opt => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname => user,
                tabname => 'T2',
                method_opt => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname => user,
                tabname => 'T3',
                method_opt => 'for all columns size 1'
        );

end;
/

create        index t1_i1 on t1(id1, ind_pad, id2) pctfree 91;
create        index t2_i1 on t2(id1, ind_pad, id2) pctfree 91;
alter table t3 add constraint t3_pk primary key (id);

set autotrace traceonly explain

select
        t1.small_vc, t2.small_vc, t3.small_vc
from
        t1, t2, t3
where
        t1.n1 between 40 and 50
and     t2.id1 = t1.id1
and     t2.ind_pad = t1.ind_pad
and     t2.id2 = t1.id2
and     t3.id = t1.id1
;

-- alter index t1_i1 invisible;
-- alter index t2_i1 invisible;

drop index t1_i1;
-- drop index t2_i1;

accept X prompt "Press return to continue"

select
        t1.small_vc, t2.small_vc, t3.small_vc
from
        t1, t2, t3
where
        t1.n1 between 40 and 50
and     t2.id1 = t1.id1
and     t2.ind_pad = t1.ind_pad
and     t2.id2 = t1.id2
and     t3.id = t1.id1
;

set autotrace off

You’ll notice from the commented lines in the above that the effect appears whether you drop the index or make it invisible, and also that there’s a similar index on the t1 table that matches the index on the t2 table – I could get the effect from dropping, or making invisible, either index.

There is a saving grace in 11g – if I do drop, or make invisible, one of these indexes I can protect myself against the statistical effect by creating a column group on the same set of columns, and the num_distinct from the column group will serve the same purpose as the distinct_keys from the index.
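For completeness, here is a minimal sketch of that fix (assuming you have dropped, or hidden, t2_i1 in the demo schema above): create the extended statistics for the column group and re-gather stats on the table.


declare
        l_extension     varchar2(128);
begin
        l_extension := dbms_stats.create_extended_stats(user, 'T2', '(ID1,IND_PAD,ID2)');

        dbms_stats.gather_table_stats(
                ownname         => user,
                tabname         => 'T2',
                method_opt      => 'for all columns size 1'
        );
end;
/

The extension appears as a hidden virtual column on t2, and its num_distinct (20 in this data set) gives the optimizer the same combined selectivity that the index’s distinct_keys used to supply.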


database option - Spatial and Graphics

Pat Shuff - Tue, 2016-06-28 02:07
Today we are going to focus on the Spatial and Graphics option of the Oracle Database. Most business information has a location component, such as customer addresses, sales territories and physical assets. Businesses can take advantage of their geographic information by incorporating location analysis and intelligence into their information systems. The geospatial data features of Oracle Spatial and Graph option support complex geographic information systems (GIS) applications, enterprise applications and location services applications. Oracle Spatial and Graph option extends the spatial query and analysis features included in every edition of Oracle Database with the Oracle Locator feature, and provides a robust foundation for applications that require advanced spatial analysis and processing in the Oracle Database. It supports all major spatial data types and models, addressing challenging business-critical requirements from various industries, including transportation, utilities, energy, public sector, defense and commercial location intelligence.

The Spatial home page is a good starting point to learn more about the technology. Books that cover this topic are

Note that most of these books are three years old or older. Spatial has not changed much between 11g and 12c, so the older books are still relevant. The key to the Spatial component is being able to define objects using geospatial tags. To achieve this, Oracle extended the database with the SDO_GEOMETRY data type. This is used just like an INTEGER or CHAR declaration for a column, but it contains coordinate elements (longitude and latitude, for example) that define where something is located. Some sample code that we can lift from the Pro Oracle Spatial book looks like
SQL> CREATE TABLE  us_restaurants_new
(
  id                    NUMBER,
  poi_name       VARCHAR2(32),
  location         SDO_GEOMETRY    -- New column to store locations
);
This creates a table in which each row records where a restaurant is located. We can populate an entry with
SQL> INSERT INTO  us_restaurants_new  VALUES
(
  1,
  'PIZZA HUT',
  SDO_GEOMETRY
  (
    2001,  -- SDO_GTYPE attribute: "2" in 2001 specifies dimensionality is 2.
    NULL,    -- other fields are set to NULL.
    SDO_POINT_TYPE  -- Specifies the coordinates of the point
    (
      -87,  -- first ordinate, i.e., value in longitude dimension
      38,  -- second ordinate, i.e., value in latitude dimension
      NULL  -- third ordinate, if any
    ),
    NULL,
    NULL
  )
);
This inserts an entry for restaurant number 1, labeled 'PIZZA HUT', with a location defined by a point at -87, 38 (longitude, latitude). Note that these are locations defined in relation to a map's coordinate system. We use the SDO_GTYPE to define what type of geometry we are using and how we are describing the location of this store.

The key benefit to this is that we can define restaurants and things like interstates in the same way. We can then query the database for, say, the restaurants closest to an interstate. This is done with the following query

SQL> SELECT poi_name
FROM
  (
    SELECT poi_name,
      SDO_GEOM.SDO_DISTANCE(P.location, I.geom, 0.5) distance
    FROM us_interstates  I, us_restaurants  P
    WHERE I.interstate = 'I795'
      ORDER BY distance
  )
WHERE ROWNUM <= 5;

POI_NAME
-----------------------------------
PIZZA BOLI'S
BLAIR MANSION INN DINNER THEATER
KFC
CHINA HUT
PIZZA HUT
The select statement calculates the distance between the interstate labeled I795 and every restaurant in the database, then returns the five closest. Note that we could instead have picked an address on the interstate and looked for anything within a specified distance. This is typically how something like Google Maps works: it reads your current location from your phone as a latitude, longitude, and elevation and shows you matches for your search term that are close to you. This allows you to easily find banks, places to eat, places to get gas, or an address that you are trying to reach.
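As a hedged sketch of that "within a specified distance" variation (the table and column names come from the example above, and it assumes the location columns have spatial indexes), the SDO_WITHIN_DISTANCE operator expresses the idea directly:

SQL> SELECT P.poi_name
FROM us_interstates I, us_restaurants P
WHERE I.interstate = 'I795'
  AND SDO_WITHIN_DISTANCE
      (
        P.location,                -- indexed geometry column to search
        I.geom,                    -- reference geometry (the interstate)
        'distance=0.5 unit=MILE'   -- search radius
      ) = 'TRUE';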

We can not only look for distances relative to a point or a line (as we did with the interstate) but also draw shapes around an object and look for things that fall inside or outside the shape. For example, if we get the GPS points for a park, we can draw a shape that defines the park using latitude and longitude points. We can then look for related objects inside the park, outside the park, or within a few feet of the park. This helps police look for crimes that happen around a park and react appropriately. In the database we define an object with spatial data and draw a shape around the object; a simple way of doing this is with a bounding box. Before storing such geometries we register the spatial column, its dimensions, their bounds, and a tolerance in the spatial metadata view. The code to do this would look like

SQL> INSERT INTO  USER_SDO_GEOM_METADATA  VALUES
(
  'CUSTOMERS',      -- TABLE_NAME
  'LOCATION',       -- COLUMN_NAME
  SDO_DIM_ARRAY     -- DIMINFO attribute for storing dimension bounds, tolerance
  (
    SDO_DIM_ELEMENT
    (
      'LONGITUDE',  -- DIMENSION NAME for first dimension
      -180,         -- SDO_LB for the dimension
      180,          -- SDO_UB for the dimension
      0.5           -- Tolerance of 0.5 meters
    ),
    SDO_DIM_ELEMENT
    (
      'LATITUDE',   -- DIMENSION NAME for second dimension
      -90,          -- SDO_LB for the dimension
      90,           -- SDO_UB for the dimension
      0.5           -- Tolerance of 0.5 meters
    )
  ),
  8307              -- SRID value for specifying a geodetic coordinate system
);
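Once the metadata row is in place, the usual next step (a sketch assuming the CUSTOMERS table and LOCATION column registered above) is to create a spatial index on the column so that operators such as SDO_WITHIN_DISTANCE can use it:

SQL> CREATE INDEX customers_location_idx
  ON customers (location)
  INDEXTYPE IS MDSYS.SPATIAL_INDEX;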
You can define a data type as
  • Point
  • Line string
  • Polygon
  • Polygon with a hole
  • Collection (a combination of all of the above)
  • Compound line string
  • Compound polygon
  • 3d Composite surface
  • 3d Simple solid
  • 3d Compound solid
  • 3d Collection (a combination of all 3d objects)
When you define a spatial object it uses the SDO_GEOMETRY structure. This structure contains an SDO_GTYPE that defines if the object is 2d or 3d as well as the data type (0 = Uninterpreted type, 1 = Point, 5 = Multipoint, 2 = Line, 6 = Multiline, 3 = Polygon/surface, 7 = Multipolygon/multisurface, 4 = Collection, 8 = Solid, 9 = Multisolid). An entry of 2001 would be a 2d object, designated by the 2, that is a single point, designated by the 1. If this entry were 2002 it would be a 2d object that is a series of points creating a line, designated by the second 2. The SDO_SRID attribute specifies the spatial reference system, or coordinate system, for the geometry. We can have a relative coordinate system or use latitude and longitude for coordinates. The SDO_POINT attribute specifies the location of a point geometry, such as the location of a customer. This gives us a reference point to work from, and the rest of the data is the relative information based on the SDO_SRID. For example, we can draw a polygon defining a park starting at the northwest corner of the park. The SDO_POINT will provide the northwest corner of the park. The SDO_ELEM_INFO and SDO_ORDINATES attributes describe the polygon around the park. For more detailed examples, look at Chapter 3 of Pro Oracle Spatial for Oracle Database 11g.
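To make the park example concrete, here is a hedged sketch of a simple rectangular polygon; the parks table, the SRID of 8307 (longitude/latitude), and the corner coordinates are illustrative assumptions rather than part of the original example:

SQL> INSERT INTO parks VALUES   -- hypothetical parks(id, name, boundary) table
(
  1,
  'SAMPLE PARK',
  SDO_GEOMETRY
  (
    2003,                              -- 2d polygon ("3" = polygon/surface)
    8307,                              -- geodetic longitude/latitude coordinate system
    NULL,                              -- no SDO_POINT for a polygon
    SDO_ELEM_INFO_ARRAY(1, 1003, 3),   -- one exterior ring, stored as a rectangle
    SDO_ORDINATE_ARRAY
    (
      -87.05, 37.95,                   -- lower-left corner (longitude, latitude)
      -86.95, 38.05                    -- upper-right corner (longitude, latitude)
    )
  )
);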

We are not going to go into deep detail on how to program Spatial. It is recommended that you look at

It is important to note that Spatial is an optional package that runs on the Enterprise Edition database. If you are going to run this in the cloud you need to use the High Performance Edition or Extreme Performance Edition. If you are going to run this on IaaS you need to purchase the option on top of your database license, and the processor metrics need to match. For example, if you run on a 2 virtual core system in the cloud, you need a 2 virtual core license for Spatial as well. You cannot run Spatial on Amazon RDS because that feature is disabled there.

In summary, Spatial and Graph is an optional package that helps you run location-based queries against a database. Spatial is not unique to Oracle, but the structures and select statements typically do not port between database types, although products like Golden Gate can replicate the data into other databases' spatial structures. Spatial is a very powerful package that simplifies select statements that would otherwise be very complex: finding the distance between objects, the distance from a line (a highway, for example), or the distance from a polygon (a park, for example). If your application needs Spatial, you need to select the High Performance or Extreme Performance edition.

Links for 2016-06-27 [del.icio.us]

Categories: DBA Blogs

Out for a while

Anthony Shorten - Tue, 2016-06-28 00:31
Due to some medical issues I will not be posting till September this year. Thank you for your patience.

Pella Showcases Success with Oracle Service Cloud at Call Center Week 2016

Linda Fishman Hoyle - Mon, 2016-06-27 15:40

The Contact Center Week Conference and Expo is going on in Las Vegas this week. The Oracle Service Cloud team is demonstrating how its modern service solutions help organizations deliver exceptional customer experiences. They are also hosting a session with Pella Corporation, which will showcase success with Oracle Service Cloud. Pella is a leader in technology and product innovation as a window and door manufacturer.

Pella will share how Oracle Service Cloud has helped its contact center agents and field technicians deliver consistent, high-quality customer experiences across channels, including web, phone, chat and social media, as well as during in-person appointments. According to Rick Hassman, CIO, Pella, “Having these capabilities provides an experience that our competition can’t deliver.”

You can read more about Pella’s story in this Forbes OracleVoice article.

datapump export using DBMS_DATAPUMP

Tom Kyte - Mon, 2016-06-27 11:06
Good afternoon, it is a beginning programmer. I had a problem with DBMS_DATAPUMP.data_filter. Data_filter not work. create PROCEDURE exp_tables_w_qfilter ( p_schema_name IN VARCHAR2, -- 'LTP' p_table_name IN VARCHAR2, -- 'AUDITTABLE_TEST' p_t...
Categories: DBA Blogs

Transpose Rows into Columns

Tom Kyte - Mon, 2016-06-27 11:06
I have a table like: Client Process Date Status A B 21-june Y C D 22-june N A B 22-june N...
Categories: DBA Blogs

2 highest salary department wise ..alternate method

Tom Kyte - Mon, 2016-06-27 11:06
<b>select * from employees e1 where :parameter=(select count(distinct e2.salary) from employees e2 where e1.salary<=e2.salary);</b> --<i>hi can anyone explain me this query .i know this can be solve using rank,dense_rank but confused while ...
Categories: DBA Blogs

Services -- 4 : Using the SERVICE_NAMES parameter (non-RAC, PDB)

Hemant K Chitale - Mon, 2016-06-27 09:55
In my previous blog post, I have demonstrated using DBMS_SERVICE to create and start services.

But there is another way: using the SERVICE_NAMES parameter. Going by most Google references, SERVICE_NAMES seems to be more popular than DBMS_SERVICE.

Can it be used in a Pluggable Database? Let's try.

[oracle@ora12102 Desktop]$ . oraenv
ORACLE_SID = [oracle] ? CDB1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ora12102 Desktop]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 27 22:39:59 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1644167168 bytes
Fixed Size 2925024 bytes
Variable Size 973082144 bytes
Database Buffers 654311424 bytes
Redo Buffers 13848576 bytes
Database mounted.
Database opened.
SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL>

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 27-JUN-2016 22:43:36

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=14129))
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$


SQL> connect system/oracle@PDB1
Connected.
SQL> show con_id

CON_ID
------------------------------
3
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> alter system set service_names='NEW_SVC_1,NEW_SVC_2';
alter system set service_names='NEW_SVC_1,NEW_SVC_2'
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database


SQL> !oerr ora 65040
65040, 00000, "operation not allowed from within a pluggable database"
// *Cause: An operation was attempted that can only be performed in the root
// container.
// *Action: Switch to the root container to perform the operation.
//

SQL>


So, apparently, ORA-65040 means that we can't define SERVICE_NAMES for a PDB.

SQL> connect sys/oracle as sysdba
Connected.
SQL> show con_id

CON_ID
------------------------------
1
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL> alter system set service_names='NEW_SVC_1,NEW_SVC_2';

System altered.

SQL>

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 27-JUN-2016 22:53:56

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=14129))
Service "NEW_SVC_1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "NEW_SVC_2" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$ tnsping NEW_SVC_1

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 27-JUN-2016 22:54:57

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = NEW_SVC_1)))
OK (30 msec)
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@new_svc_1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 27 22:55:04 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied


Enter user-name: system/oracle@new_svc_1
Last Successful login time: Mon Jun 27 2016 22:44:13 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
1
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL>


The new services are actually running in the ROOT, not in the PDB, unlike my earlier blog post where I used the new services to connect to the PDB.

Therefore, for a PDB environment, I'd use DBMS_SERVICE as I've demonstrated here and here.
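For reference, a minimal sketch of the DBMS_SERVICE approach (the service name here is made up, and the block is run while connected to the PDB itself):

connect system/oracle@PDB1

begin
  dbms_service.create_service(
    service_name => 'NEW_PDB_SVC',
    network_name => 'NEW_PDB_SVC');
  dbms_service.start_service('NEW_PDB_SVC');
end;
/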
.
.
.



Categories: DBA Blogs

Digital Engagement: What Every Business Needs to Know

WebCenter Team - Mon, 2016-06-27 08:13

The post below, originally authored by Martin Cookson, Director of Mobility at Oracle EMEA and posted on the Oracle EMEA blog, does a great job at describing the challenges organizations are facing when it comes to digital experience and engagement, and what you can do to ensure your digital engagement strategy is successful. Martin's blog post also sets the stage nicely for topics we'll be discussing in the coming weeks and months around how you can use digital experience solutions in the cloud to modernize employee engagement, customer experience and applications infrastructure. We've got a lot of great use cases around these topics we can't wait to share with you, so please stay tuned!

Author: Martin Cookson, Director of Mobility at Oracle EMEA

Digital is throwing up new challenges and opportunities for businesses. On the one hand, companies face challenges from disruptive ‘digitally native’ competitors rapidly taking significant market share or, worse, redefining the traditional market entirely. Take the now-classic example of Amazon, which revolutionized online retail with 1-Click Ordering and has subsequently launched Amazon Web Services. On the other hand, companies are finding that they can create significant new revenue streams and enter new markets through the use of digital technologies within their traditional businesses. Chemicals company Monsanto, for example, has branched out into the delivery of data science technologies for the agriculture sector.

The digital revolution is also an opportunity for any business to transform its own operations and in doing so, find new markets, develop new business models, and engage with consumers in innovative ways. To do this, however, companies must understand the nature of digital engagement and the possibilities that it affords.

Digital engagement in action

Digital engagement is of course driven by leveraging technology but success is about ensuring positive outcomes. It encompasses creating new revenue streams, seeking out new business opportunities, finding a competitive edge, creating efficiencies, and better serving customers.

One great example of an established business benefiting from digital engagement is Floyd Medical Center in the US, which is using its new web experience management platform to improve the way it communicates. Floyd Medical Center today uses its web presence as an important community resource and marketing tool as it faces up to the challenge of increased competition in the region.

The healthcare system also includes an intranet that provides important information and resources for employees across the organization, as well as for departmental teams. Employees can now securely access the intranet, which is updated with new content daily, either in the office or remotely; while improved search and segmentation capabilities enable employees to find internal content more quickly. The result is a more effective workforce better able to service customers and deliver a competitive edge.

One of Europe’s largest and busiest airports, meanwhile, is using beacon technology to track travellers across its premises. Not only can the airport know exactly where an individual is at any time, but it can use the data it gathers to have new interactions with people at every stage of their journey. This enables it to target people with tailored information, whether it’s marketing messages in duty free, hotel and taxi recommendations at arrivals, or relevant advice at check-in or immigration.

Finally, News Limited in Australia has overhauled its publication systems and processes to remain competitive in the digital age; particularly to provide content designed for mobile devices and to introduce new services, such as paywalls for premium content. Its new digital content management platform allows the company to publish stories in less than 90 seconds, compared to 15 minutes in the past, allowing the company to stay on top of breaking news in real-time. News Limited has also added new features, such as story and gallery wizards and intelligent, internal search and automated editorial-content enhancement functions that have increased the quality of content and enabled greater customer satisfaction and higher subscription numbers.

Intelligent, in-the-moment customer experiences

So what are these businesses doing that sets them apart from their competitors? Or to put this question another way: what can businesses do to ensure their digital engagement strategies are successful?

First, businesses must adopt a mobile first strategy and engineer their services from the outside in (i.e. think with mobile devices and end user needs in mind). Businesses should then look to create value and loyalty while reducing the cost of operations by building intelligent, in-the-moment experiences across different channels simultaneously, in-context and collaboratively. This is all about knowing the user; not just about who they are and what they like etc., but what they are doing now; where they are and who and what they are near.

This is why the above-mentioned airport is enjoying such success from its beacon technology – it enables a service that is all about the customer at a precise moment in time, it meets a need, in context and through the perfect engagement channel. This is what digital engagement is all about – exceptional customer experiences through intuitive services that build customer engagement and loyalty.

Enabling digital engagement

Any business serious about creating disruptive digital engagement needs to adopt a mature digital engagement platform. The agile nature of digital engagement means that any such platform must be based on cloud computing technology. This is because only the cloud can deliver the flexibility, integration capabilities and speed businesses need to innovate and take engagement models to market before their competitors.

The cloud is also where many of the services digital engagement platforms leverage are based – such as Facebook, Google Maps, Apple Notifications and Twitter – and it makes sense to build the platform as close to these services as possible. Finally, the cloud delivers the cost-effective business models and appropriate operations models (for example, DevOps) required to make digital engagement a success.

Today’s cloud for tomorrow’s needs

However, cloud platforms are not all created equal. As businesses look to enable digital engagement, they must select cloud platforms capable of accelerating their digital transformation, integrating core business systems with modern engagement and delivery platforms, and giving them a competitive edge. This requires a holistic digital engagement platform comprising middleware and database software that delivers against the core areas of digital engagement, such as mobile, the Internet of Things, content and collaboration, digital web experience and business process agility. These are all elements that tie directly to building and deploying transformational omni-channel customer experiences.

It is crucial to point out that the right digital engagement platform is not simply an enabling technology - it is a strategic engine of innovation and absolutely essential to creating compelling customer services. The right cloud platform provides application developers within businesses with the ability to test and develop new and innovative customer services in low-cost, intuitive cloud environments, using pre-configured development tools. This approach dramatically reduces the cost and time associated with innovation.

Moreover, a hybrid cloud approach, which brings together both public and private cloud elements in combination with on-premises systems, enables businesses to rapidly scale up applications that have proven successful within the test environment. It is an ideal approach for businesses that, due to regulatory or policy reasons, have not been able to move to the public cloud as fast as they would like. In today’s environment all businesses need to think of themselves as start-ups and look to deploy cloud platforms that help them innovate at speed and low cost – whether that is public, private or hybrid solutions.

No-one has a natural right to the digital future. Businesses will have to fight hard, be brave and experiment. The rewards are there to be won, but those who have no stomach for digital transformation will find themselves in a fight for survival.

PeopleSoft Database Secure Baseline Configuration

PeopleSoft, like other major ERP applications, depends on a database to store its information but arguably does not secure that supporting database. The security of the database is the client’s responsibility.

In order to give a few examples of what we are talking about when we refer to database security, the following are several of the 200+ database security checks that Integrigy performs during our PeopleSoft security configuration assessments - take a look today at your database for a few quick checks:

  • Limit direct database access whenever possible. This is always our number one recommendation – how isolated is your database?
  • Database CPU patching – have you applied the latest database CPU patches?
  • Logging and auditing – do you have auditing enabled? How much? What monitoring tools and processes do you have?
  • Database passwords – especially key accounts such as the Connect Id, Access Id, IB and PS – are they set to weak or default passwords? Are you using profiles?
  • Permissions and authorizations – when was the last time you reviewed them? How many users have SELECT ANY TABLE privileges? (See the query sketches after this list.)
  • Ensure the default tablespace is never ‘SYSTEM’ or PSDEFAULT for named users; these should be reserved for the Oracle RDBMS and the application respectively.
  • Do not use SYSADM for day-to-day support; are you using named accounts instead?
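As a starting point for the permissions and tablespace items above, here are two quick query sketches (run them from a DBA account; expect some Oracle-supplied accounts in the output):

-- Who can read every table in the database?
select grantee
from   dba_sys_privs
where  privilege = 'SELECT ANY TABLE';

-- Which users default to the SYSTEM or PSDEFAULT tablespace?
select username, default_tablespace
from   dba_users
where  default_tablespace in ('SYSTEM', 'PSDEFAULT');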

If you have questions, please contact us at info@integrigy.com

Michael A. Miller, CISSP-ISSMP, CCSP

References

PeopleSoft Database Security

PeopleSoft Security Quick Reference

Oracle Database, Oracle PeopleSoft, Auditor
Categories: APPS Blogs, Security Blogs

Fedora 24 and Oracle 12c

Tim Hall - Mon, 2016-06-27 05:40

Fedora 24 was released a few days ago. As usual I had a play around with it and tried installing Oracle on it.

First come the warnings, because people keep misunderstanding why I do this. Do not install Oracle on Fedora before reading this!

With that out of the way, here’s what I did.

No real drama here. It was pretty much the same as Fedora 23 in that respect.

It’s kind-of hard to get excited about a new version of Fedora since I switched my desktop from Fedora to Mac. One thing that was interesting is the change to the upgrade process. In previous releases I used “fedup” to do it. Now it’s pretty much done using DNF (YUM). If you are interested, you can read about it here.

Cheers

Tim…


Safari Books Diversion

Pat Shuff - Mon, 2016-06-27 02:07
Today I am going to step back and look at something relatively simple but very powerful. One thing that my company provides for employees is a subscription to technology books online. I tend to reference these books in my blog entries mainly because I find them to be good reference material. When I write a blog entry I try to first address the topic from the mindset of a sales rep. I address the simple questions around what the technology is, why it is relevant, and how much it will cost me. If possible the discussion also blends in a compare and contrast with another solution from another vendor. The second post is a technology how-to targeted at the pre-sales engineer or consultant. To dive deeper I typically use books, whitepapers, hands-on tutorials, and demos to pull material from. Safari Books Online is my virtual library. Years ago I would go to Barnes and Noble, Fry's, or Bookstop and purchase a book. Safari pricing starts at $400/year for individuals or teams and is flexible for corporations. If you break this down it means that you need to read about 8-10 books a year to break even. If you read fewer than that, purchase them from Amazon. If you read more than that, or just want to read a chapter or two, subscribe to Safari.

Two of the features that I like about this interface are the search engine and the index engine. The search engine looks inside books, allows you to sort by relevance and date, and lets you search within a search. For example, if I do a search for jumpstart I get 3070 references. If I add solaris to the search I get 101 results. Note on the left there are three books written in 2016 and two books written in 2015. We can narrow our search and look for recent books that talk about the JumpStart technology provided with Solaris. True, this might not be a relevant topic to you, but it is an example of how to find a difficult-to-find topic in your virtual library.

We can add this search index to our favorites by clicking on the Add to Favorites button and selecting a topic list to add to. In this example we add a JumpStart book from 2001 to our Solaris book list.

We can look at more relevant publications and find something related to Solaris 11.2. We see the relevant information in the search index and when we click on the book it takes us to the relevant chapter. Note the highlighted text from our search. I find this is a good way of researching a topic that I need to learn more about or finding tutorials or examples of how to do something.

One of the nice things about search indexes or lists is that you can share a list with other people and look at other people's lists. This is done through your Favorites and Folders, where you can see the topics that interest you along with the books you have saved on each virtual shelf.

You can also look at the shelves of other users. If you click on Shared List and search for your shelf title, you get a list of other users' shelves. In this example we searched for Solaris and got five shelves that other users are maintaining.

We can subscribe to these shelves and add it to our favorites. This is done by clicking on the Following "+" sign. It adds the shelf to your Favorites list on the left. Note that we are following the "Solaris stuff" folder.

We can also add this as an RSS feed to our mail reader and get updates when the shelf is updated. We can then copy the RSS feed link and add it to our news reader or Thunderbird email interface.

If we add this to our Thunderbird reader we get an email interface showing updates and new books added to the shelf. We don't need to go check the list on a regular basis; we can simply look at the newsfeed section of our mail reader.

I hope this simple diversion was a good break from our dive into DBaaS and PaaS. Being able to do more than just a simple Google search is typically required to find examples and tutorials. Books historically have been a good place to find this and having access to not only my virtual bookshelf but other people's bookshelves where they sort and index things is a good thing. The $400 cost might be a bit prohibitive but the freedom is a good thing. Given that my company provides this subscription at no cost to me, I will continue to use this and read technology books on an airplane in offline mode and search as I am creating blog entries.

Complaining about the weather

FeuerThoughts - Sun, 2016-06-26 09:16
Seems like I hear people complaining about the weather a lot.

Too hot, too cold, too wet, too dry....

Seems to me that we should never complain about the weather. I refuse to complain about the weather. Why would I take this position?

So far there is just one planet that we know about in the universe that supports the kind of life we are: organic, carbon-based life.

"Weather" cannot be separated from this planet. So when you complain about the weather, you are complaining about the only place in the universe humans can even possibly, remotely live. 

Seems a bit mean spirited, from that perspective, to whine about rain (which is needed badly for us and trees and lots of other living things to survive).

Beyond all that, humans have spread across the entire planet, even (and often) to places that are hostile to human survival (places, in other words, that we did not evolve to live in).

In order to live in many of these places, we destroy chunks of those places to make them more hospitable, comfortable and convenient for us.

So it seems to me that when someone complains about the weather, we should ask:

Do you live in a location on Planet Earth that does not require the establishment of a "human survival zone"?

Indicators of Human Survival Zones: air conditioning so your brain doesn't fry; heat so that you don't turn into an ice cube; homes that seal you off completely from your surroundings....

If yes, then our response should be:
Wow! Aren't you lucky? You can breathe, you can drink water, you can eat the food, you can enjoy the natural environment with minimal degradation of that environment, and without dying. Why would you ever complain about the weather?
If no, then our response should be:
You have no right to complain. You shouldn't even be here. You can only be here by radically changing (usually by destroying) the world around you. Which, by the way, affects the weather. If you don't like it here, then leave. But don't complain. And if none of that seems to be making a dent, you can always fall back on the Rainbow Argument:
How can you complain about the weather (and by extension a planet) that gives you rainbows?
Categories: Development

Partner Webcast – Oracle Data Visualization Wins against Competition

How do we turn data into insights? Can your business users create their own analyses and present these as dynamic visualizations to their colleagues? With Oracle Data Visualization Desktop you can...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle BI Publisher 12.2.1.1 released !!

Tim Dexter - Sat, 2016-06-25 07:06

Oracle BI Publisher 12.2.1.1.0 has been released this week. The links to download files, documentation and release notes are available from BI Publisher OTN home page. The download is also available from Oracle Software Delivery Cloud.

The new features in this release are primarily driven by the integrated Cloud Applications and Platform Services. Data security, self-service, robustness, easier integration and cloud-based data & delivery have been the main focus areas here. Check the new features guide available on the BI Publisher OTN home page for a quick glance at these new features.

Upgrading to Oracle Business Intelligence from 12.2.1.0 to 12.2.1.1 is an in-place upgrade performed by Upgrade Assistant.

Migration of Oracle Business Intelligence from 11g to 12.2.1.1 is an out-of-place upgrade similar to the 12.2.1.0 release, but now you do not need to separately migrate the BI Publisher configuration as a post-migration step, and you can use the Baseline Validation Tool to verify the upgraded BI Publisher reports.

Stay tuned for more information on the new features, upgrade and migration.

Have a nice day !

Categories: BI & Warehousing

Oracle External Table ODCIEXTTABLEOPEN error while selecting count(*) of the table

Tom Kyte - Sat, 2016-06-25 04:06
Hi I am explaining from beginning, My Server is : UNIX BOX (HPSA) I have created the below folder in this UNIX server and folder permissions are mm13pb:/ $ ls -ltr drwxr-xr-x 7 root root 4096 Dec 15 2013 data mm13p...
Categories: DBA Blogs

Unused Index(s) for a specific Time?

Tom Kyte - Sat, 2016-06-25 04:06
<code>Hi Tom, I want to drop those indexes which are not used in the last 2 months so to reclaim space. For example There are two indexes on scott.emp on the empno and the other is on deptno. While the application is only using empno index but no...
Categories: DBA Blogs
