Feed aggregator

SQL Server 2016 – Query Store: retrieve query that doesn’t appear!

Yann Neuhaus - 13 hours 45 min ago

For our SQL Server 2016 event in September, I am studying the new Query Store functionality.
My colleague David Barbarin wrote about Query Store and how it works a few months ago.

Simple SELECT Query

To begin, I execute a simple SELECT on a table with 3 different methods:

SELECT * FROM QS_test;
exec sp_executesql N'SELECT * FROM QS_test'
EXEC sp_GetQS_test;

“sp_GetQS_test” is a stored procedure containing the SELECT statement.
I created a little query using the Query Store DMVs to analyze the query:

SELECT  qsqt.query_text_id,qsqt.query_sql_text,qsq.query_id,qsq.query_hash,
   qsq.query_parameterization_type_desc,qsq.initial_compile_start_time,
   qsq.last_compile_start_time,qsq.last_execution_time,qsq.avg_compile_duration,
   qsp.query_id,qsp.plan_id,qsrs.execution_type_desc
  FROM sys.query_store_query_text qsqt
  inner join sys.query_store_query qsq on qsq.query_text_id=qsqt.query_text_id
  inner join sys.query_store_plan qsp on qsp.query_id=qsq.query_id 
  inner join sys.query_store_runtime_stats qsrs on qsrs.plan_id=qsp.plan_id   
  WHERE query_sql_text='SELECT * FROM QS_test';

QS_01
As you can see in the result, all 3 queries are present with the same query text (‘SELECT * FROM QS_test’) in the query store.

SELECT Query with a Where clause

I continue my test with a select and a where clause:

  SELECT * FROM QS_test WHERE rid=5

I run my query to find the query in the query store:
QS_02
And, surprise, no query found! The query does not appear in the Query Store…
I rerun my query without the WHERE clause to see if I find something:
QS_03
The result gives me a query written differently:
(@1 tinyint)SELECT * FROM [QS_test] WHERE [rid]=@1

This query went through parameterization, and to retrieve this information we use a new function in SQL Server 2016: fn_stmt_sql_handle_from_sql_stmt

Function sys.fn_stmt_sql_handle_from_sql_stmt

This function gives us the SQL handle for the query:
QS_04
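As a minimal standalone sketch (reusing the parameterized statement from this example; the second argument carries optional parameter-type hints and can be NULL):

```sql
-- Ask Query Store for the SQL handle of the auto-parameterized statement
SELECT statement_sql_handle
FROM sys.fn_stmt_sql_handle_from_sql_stmt('SELECT * FROM QS_test WHERE rid=5', NULL);
```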
Then, I add the function to my query to find it in the Query Store:

SELECT  qsqt.query_text_id,qsqt.query_sql_text,qsq.query_id,qsq.query_hash,
   qsq.query_parameterization_type_desc,qsq.initial_compile_start_time,
   qsq.last_compile_start_time,qsq.last_execution_time,qsq.avg_compile_duration,
   qsp.query_id,qsp.plan_id,qsrs.execution_type_desc
  FROM sys.query_store_query_text qsqt
  inner join sys.query_store_query qsq on qsq.query_text_id=qsqt.query_text_id
  inner join sys.query_store_plan qsp on qsp.query_id=qsq.query_id
  inner join sys.query_store_runtime_stats qsrs on qsrs.plan_id=qsp.plan_id
  CROSS APPLY sys.fn_stmt_sql_handle_from_sql_stmt('SELECT * FROM QS_test WHERE rid=5',NULL) fsshfss
  WHERE qsqt.statement_sql_handle=fsshfss.statement_sql_handle;

QS_05

It’s done: I retrieve the query thanks to this new function.
Notice that I use the statement_sql_handle column, and not the query_sql_text column, in the WHERE clause.

I tested with the query_sql_text column and you can see here the error that I get…
QS_06
The query_sql_text from the function uses the SQL_Latin1_General_CP1_CI_AS collation, while both my database and my instance use French_CI_AS. But this is not hopeless…
If you want to write the query with query_sql_text, you just need to specify the collation with the COLLATE keyword:
QS_07
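As a sketch of that variant (it assumes, per the screenshots above, that the function exposes a query_sql_text column, and that the database collation is French_CI_AS — adjust the collation to your instance):

```sql
-- Force both sides of the comparison into the same collation
-- (the function output is SQL_Latin1_General_CP1_CI_AS; the database here is French_CI_AS)
SELECT qsqt.query_text_id, qsqt.query_sql_text, qsq.query_id
FROM sys.query_store_query_text qsqt
INNER JOIN sys.query_store_query qsq
        ON qsq.query_text_id = qsqt.query_text_id
CROSS APPLY sys.fn_stmt_sql_handle_from_sql_stmt('SELECT * FROM QS_test WHERE rid=5', NULL) fsshfss
WHERE qsqt.query_sql_text = fsshfss.query_sql_text COLLATE French_CI_AS;
```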

 

The article SQL Server 2016 – Query Store: retrieve query that doesn’t appear! appeared first on the dbi services blog.

How are you?

FeuerThoughts - 14 hours 45 min ago
I get this question a lot.

You probably do, too.

Sometimes nothing more is meant by it than "Hello."

Sometimes they really mean it, they really want to know.

Generally, my answer is "Great!"

'Cause no matter what relatively small irritations I have in my life, the bottom line is that my life is quite wonderful.

But I've decided that perhaps I should not simply say "Great!" I should explain why my life is so great.

So here goes:

Great!

I am an organic life form living on the only planet we know of in the entire universe that supports organic life. How wonderful is that?

Plus, I am self-aware, so I know that I am alive and can appreciate rainbows and the sound of wind moving through trees, and so on.* How incredible is that?

And, best of all, I have two smart, beautiful, hilarious, stubborn and funny granddaughters.**

So how am I?

GREAT!

* Note: just in case that sentence sounds as though I am celebrating the uniqueness of human beings, I must clarify that I happen to believe that many, many living creatures from thousands of species are self-aware and appreciate the world around them. Birds, spiders, squirrels, etc.

** Photos are, of course, required to back this up:

Loey Lucille Silva


Juna Josephine Silva



Categories: Development

Difference between FGA and virtual private database?

Tom Kyte - 15 hours 25 min ago
I am very confused by some terms, such as the difference between Fine Grained Auditing and Virtual Private Database?
Categories: DBA Blogs

UTL EMAIL (HTML)

Tom Kyte - 15 hours 25 min ago
Hi Tom, I have an issue with my UTL email: when I am sending the email, the text matter is coming in the same line. Please check the comments in the code for where the issue is: create or replace PROCEDURE ...
Categories: DBA Blogs

Split a int value in multiple rows based on percentage and in multiple of another number

Tom Kyte - 15 hours 25 min ago
Hi Tom, I have a situation where I have to divide an integer into multiple rows depending on the %value given and the rounding specified. The value in the ASSIGNMENT table needs to be divided in multiples of ROUNDING depending on the CONSIGNMENT records. In...
Categories: DBA Blogs

Optimal number of partitions

Tom Kyte - 15 hours 25 min ago
Tom, I have this table with around one million records. The table has the potential to grow to 2 million, 3 million or even more... I am planning on partitioning this table using hash partitioning on the most queried column. A few questions: ...
Categories: DBA Blogs

Adaptive Cursor Sharing

Tom Kyte - 15 hours 25 min ago
Hi Tom, I am aware, you have already clarified about the same subject many times. I have a table, which stores customer transaction information. Data load is happening continuously into the table. Since it is a transaction table, given a day,...
Categories: DBA Blogs

Oracle 10g upgrade

Tom Kyte - 15 hours 25 min ago
<code>Tom You mentioned that the AskTom site was recently upgraded to 10g. I take it therefore that you consider it to be of sufficient quality to run production code, even though the officially released version is only a couple of months old ? ...
Categories: DBA Blogs

Trace Flag 4199 or not in SQL Server 2016?

Yann Neuhaus - 16 hours 37 min ago

Some Trace Flags like T1117 and T1118 are no longer needed, as you can read on David’s blog.
But that’s not all: there is also T4199…

What does Trace Flag T4199 do?

This Trace Flag enables all query optimizer fixes.
A lot of DBAs have enabled this Trace Flag globally when building a new server.
If you want to take advantage of an enhancement or a fix, T4199 becomes a necessity…

But now, with SQL Server 2016, this is an old story!

SQL Server 2016 eliminates the T4199

In SQL Server 2016, you don’t need to enable this Trace Flag any more; forget your old habit!
The optimizer fixes are automatically included when you change the database COMPATIBILITY_LEVEL to 130:

USE [master]
GO
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 130
GO

Advice: if you find an unexpected/poor plan, use the Query Store to analyze and force a plan!
More information in KB974006.
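A minimal sketch of that advice: browse the recorded plans through the Query Store DMVs, then pin the one you trust (the query_id/plan_id values below are hypothetical; look up the real ones first):

```sql
-- Inspect the plans recorded for each query...
SELECT qsq.query_id, qsp.plan_id, qsrs.avg_duration
FROM sys.query_store_query qsq
INNER JOIN sys.query_store_plan qsp ON qsp.query_id = qsq.query_id
INNER JOIN sys.query_store_runtime_stats qsrs ON qsrs.plan_id = qsp.plan_id;

-- ...then force the plan you trust (hypothetical ids)
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;
```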

 

The article Trace Flag 4199 or not in SQL Server 2016? appeared first on the dbi services blog.

Swiss PGDAY 2016, Slides are online

Yann Neuhaus - 17 hours 56 min ago

Last Friday the Swiss PGDAY took place in Rapperswil. All the slides, as well as some pictures, are now available (tab “Programm”):

pgday_logo

 

The article Swiss PGDAY 2016, Slides are online appeared first on the dbi services blog.

Index Sanity

Jonathan Lewis - 18 hours 28 min ago

By popular demand (well, one person emailed me to ask for it) I’m going to publish the source code for a little demo I’ve been giving since the beginning of the millennium – it concerns indexes and the potential side effects that you can get when you drop an index that you’re not using. I think I’ve mentioned the effect several times in the history of this blog, but I can’t find an explicit piece of demo code, so here it is – starting at the conclusion – as a cut and paste from an SQL*Plus session running against an 11g instance:


SQL> set autotrace traceonly explain
select
        t1.small_vc, t2.small_vc, t3.small_vc
from
        t1, t2, t3
where
        t1.n1 between 40 and 50
and     t2.id1 = t1.id1
and     t2.ind_pad = t1.ind_pad
and     t2.id2 = t1.id2
and     t3.id = t1.id1
;

Execution Plan
----------------------------------------------------------
Plan hash value: 1184213596

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |   484 | 64856 |   227   (2)| 00:00:02 |
|*  1 |  HASH JOIN          |      |   484 | 64856 |   227   (2)| 00:00:02 |
|*  2 |   HASH JOIN         |      |   484 | 57596 |    14   (0)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| T2   |    20 |  1160 |     4   (0)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| T1   |   484 | 29524 |    10   (0)| 00:00:01 |
|   5 |   TABLE ACCESS FULL | T3   |  5000 | 75000 |   213   (2)| 00:00:02 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T3"."ID"="T1"."ID1")
   2 - access("T2"."ID1"="T1"."ID1" AND "T2"."IND_PAD"="T1"."IND_PAD"
              AND "T2"."ID2"="T1"."ID2")
   4 - filter("T1"."N1"<=50 AND "T1"."N1">=40)

SQL> drop index t2_i1;

Index dropped.

select
        t1.small_vc, t2.small_vc, t3.small_vc
from
        t1, t2, t3
where
        t1.n1 between 40 and 50
and     t2.id1 = t1.id1
and     t2.ind_pad = t1.ind_pad
and     t2.id2 = t1.id2
and     t3.id = t1.id1
;

Execution Plan
----------------------------------------------------------
Plan hash value: 2290830436

--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |    52 |  6968 |    67   (2)| 00:00:01 |
|   1 |  NESTED LOOPS                |       |    52 |  6968 |    67   (2)| 00:00:01 |
|   2 |   NESTED LOOPS               |       |    52 |  6968 |    67   (2)| 00:00:01 |
|*  3 |    HASH JOIN                 |       |    52 |  6188 |    14   (0)| 00:00:01 |
|   4 |     TABLE ACCESS FULL        | T2    |    20 |  1160 |     4   (0)| 00:00:01 |
|*  5 |     TABLE ACCESS FULL        | T1    |   484 | 29524 |    10   (0)| 00:00:01 |
|*  6 |    INDEX UNIQUE SCAN         | T3_PK |     1 |       |     0   (0)| 00:00:01 |
|   7 |   TABLE ACCESS BY INDEX ROWID| T3    |     1 |    15 |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("T2"."ID1"="T1"."ID1" AND "T2"."IND_PAD"="T1"."IND_PAD" AND
              "T2"."ID2"="T1"."ID2")
   5 - filter("T1"."N1"<=50 AND "T1"."N1">=40)
   6 - access("T3"."ID"="T1"."ID1")

Starting from the top – I’ve enabled autotrace which, technically, could mean that the plans are not the ones I’d see at run-time, but you can take my word for it that in 11g they are the run-time plans; then I’ve supplied a query that produces a plan with 3 full tablescans, two hash joins, and no index usage at all.

You’ll notice at operation 3 of the plan that table t2 is very small – only 20 rows selected, with no predicates that could have filtered that result down from a large table (take my word for it the stats have just been collected) so, as the ancient mythology would have it, we don’t really need an index on that table (a quick check tells me that the index wasn’t there to enforce uniqueness). Immediately after the first execution plan you can see that I’ve dropped an index called t2_i1 – trust me that IS the index on table t2.

We “run” the original query again, it gets re-optimised (and there’s no question of cardinality feedback or any other feature coming into play) and we get a different plan.

Dropping, or adding, a multi-column index to a table could change an execution plan – even if the index is not present in the plan.

The reason for this is the “index sanity check”. When the optimizer is doing its cardinality estimates, if it sees equality conditions on the set of columns that make up an index it can use the distinct_keys statistic from the index in the calculation rather than using the standard calculation of multiplying together the num_distinct of the separate columns. In earlier versions of Oracle there were some restrictions relating to uniqueness, but those limitations were removed in 11.1.0.7.

In my case there were 10 distinct values for id1, just one value for ind_pad, and 20 distinct values for id2 – but a total of only 20 distinct values for the combination. With an index in place on the combination the optimizer used the value 20 in its calculation, in the absence of the index it used the value 200 – that factor of 10 led to a drop in the join cardinality estimate from 484 rows to 52 rows – at which point the optimizer calculations made the next step in the plan change from a hash join to a nested loop join.

If you want to reproduce the demo, here’s the full script – the data isn’t a realistic data set, and I’ve had to use various non-standard settings to make the script as repeatable as possible – I’ve built the data set in a tablespace using an 8KB block size, 1MB uniform extents and manual (freelist) segment space management.


rem
rem     Script:         index_sanity.sql
rem     Author:         Jonathan Lewis
rem

drop table t3;
drop table t2;
drop table t1;

execute dbms_random.seed(0);

begin   
        begin           execute immediate 'purge recyclebin';
        exception       when others then null;
        end; 

        begin
                dbms_stats.set_system_stats('MBRC',16);
                dbms_stats.set_system_stats('MREADTIM',10);
                dbms_stats.set_system_stats('SREADTIM',5);
                dbms_stats.set_system_stats('CPUSPEED',1000);
        exception
                when others then null;
        end;

end;
/

create table t1
as
select
        mod(rownum,10)          id1,
        mod(rownum,20)          id2,
        rpad('x',40,'x')        ind_pad,
        mod(rownum,100)         n1,
        lpad(rownum,10,'0')     small_vc,
        rpad('x',50)            padding
from
        all_objects
where
        rownum  <= 4000
;

create table t2 
pctfree 99
pctused 1
as
select
        mod(rownum,10)          id1,
        mod(rownum,20)          id2,
        rpad('x',40,'x')        ind_pad,
        mod(rownum,100)         n1, 
        lpad(rownum,10,'0')     small_vc,
        rpad('x',200)           padding
from
        all_objects
where
        rownum <= 20
;

create table t3
pctfree 95
pctused 1
as
select
        rownum          id,
        rpad(rownum,10) small_vc,
        rpad('x',100)   padding
from
        all_objects
where
        rownum <= 5000
;
begin
        dbms_stats.gather_table_stats(
                ownname => user,
                tabname => 'T1',
                method_opt => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname => user,
                tabname => 'T2',
                method_opt => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname => user,
                tabname => 'T3',
                method_opt => 'for all columns size 1'
        );

end;
/

create        index t1_i1 on t1(id1, ind_pad, id2) pctfree 91;
create        index t2_i1 on t2(id1, ind_pad, id2) pctfree 91;
alter table t3 add constraint t3_pk primary key (id);

set autotrace traceonly explain

select
        t1.small_vc, t2.small_vc, t3.small_vc
from
        t1, t2, t3
where
        t1.n1 between 40 and 50
and     t2.id1 = t1.id1
and     t2.ind_pad = t1.ind_pad
and     t2.id2 = t1.id2
and     t3.id = t1.id1
;

-- alter index t1_i1 invisible;
-- alter index t2_i1 invisible;

drop index t1_i1;
-- drop index t2_i1;

accept X prompt "Press return to continue"

select
        t1.small_vc, t2.small_vc, t3.small_vc
from
        t1, t2, t3
where
        t1.n1 between 40 and 50
and     t2.id1 = t1.id1
and     t2.ind_pad = t1.ind_pad
and     t2.id2 = t1.id2
and     t3.id = t1.id1
;

set autotrace off

You’ll notice from the commented lines in the above that the effect appears whether you drop the index or make it invisible, and also that there’s a similar index on the t1 table that matches the index on the t2 table – I could get the effect from dropping or making invisible either index.

There is a saving grace in 11g – if I do drop, or make invisible, one of these indexes I can protect myself against the statistical effect by creating a column group on the same set of columns, and the num_distinct from the column group will serve the same purpose as the distinct_keys from the index.
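As a sketch of that protection, using the DBMS_STATS extended-statistics API (table and column names taken from the demo script above; 11g syntax assumed):

```sql
-- Create a column group on t2 matching the dropped index's columns
select dbms_stats.create_extended_stats(
        ownname   => user,
        tabname   => 'T2',
        extension => '(id1,ind_pad,id2)'
)
from dual;

-- Re-gather stats so num_distinct is recorded for the column group
execute dbms_stats.gather_table_stats(user, 'T2', method_opt => 'for all columns size 1')
```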


database option - Spatial and Graphics

Pat Shuff - 19 hours 4 min ago
Today we are going to focus on the Spatial and Graphics option of the Oracle Database. Most business information has a location component, such as customer addresses, sales territories and physical assets. Businesses can take advantage of their geographic information by incorporating location analysis and intelligence into their information systems. The geospatial data features of Oracle Spatial and Graph option support complex geographic information systems (GIS) applications, enterprise applications and location services applications. Oracle Spatial and Graph option extends the spatial query and analysis features included in every edition of Oracle Database with the Oracle Locator feature, and provides a robust foundation for applications that require advanced spatial analysis and processing in the Oracle Database. It supports all major spatial data types and models, addressing challenging business-critical requirements from various industries, including transportation, utilities, energy, public sector, defense and commercial location intelligence.

The Spatial home page is a good starting point to learn more about the technology. Books that cover this topic are

Note that most of these books are three years old or older. Spatial has not changed much between 11g and 12c so the older books are still relevant. The key to the Spatial component is being able to define objects using geospatial tags. To achieve this, Oracle extended the database with the SDO_GEOMETRY data type. This is used just like an INTEGER or CHAR declaration for a variable but it contains a latitude and longitude element to define where something is located. Some sample code that we can lift from the Pro Oracle Spatial book looks like
SQL> CREATE TABLE  us_restaurants_new
(
  id                    NUMBER,
  poi_name       VARCHAR2(32),
  location         SDO_GEOMETRY    -- New column to store locations
);
This creates a table whose location column stores where each restaurant is located. We can populate an entry with
SQL> INSERT INTO  us_restaurants_new  VALUES
(
  1,
  'PIZZA HUT',
  SDO_GEOMETRY
  (
    2001,  -- SDO_GTYPE attribute: "2" in 2001 specifies dimensionality is 2.
    NULL,    -- other fields are set to NULL.
    SDO_POINT_TYPE  -- Specifies the coordinates of the point
    (
      -87,  -- first ordinate, i.e., value in longitude dimension
      38,  -- second ordinate, i.e., value in latitude dimension
      NULL  -- third ordinate, if any
    ),
    NULL,
    NULL
  )
);
This inserts an entry for restaurant number 1, labeled 'PIZZA HUT', whose location is defined by a point at longitude -87 and latitude 38. Note that these are locations defined in relation to a map; we use the SDO_GTYPE to define what type of mapping we are using and how we are describing the location for this store.

The key benefit to this is that we can define restaurants and things like interstates in the same terms. We can then query the database for the restaurants closest to a given interstate, ordered by distance. This is done with the following query:

SQL> SELECT poi_name
FROM
  (
    SELECT poi_name,
      SDO_GEOM.SDO_DISTANCE(P.location, I.geom, 0.5) distance
    FROM us_interstates  I, us_restaurants  P
    WHERE I.interstate = 'I795'
      ORDER BY distance
  )
WHERE ROWNUM <= 5;

POI_NAME
-----------------------------------
PIZZA BOLI'S
BLAIR MANSION INN DINNER THEATER
KFC
CHINA HUT
PIZZA HUT
The select statement computes the distance between the interstate labeled I795 and each restaurant in the database, and returns the five nearest. Note that we could instead have selected an address on the interstate and found everything less than a specified distance away. This is typically how something like Google Maps works: it reads your current location from your phone as a latitude, longitude, and elevation, and shows you matches for the search term close to you. This allows you to easily find banks, places to eat, places to get gas, or an address that you are trying to reach.
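If you literally want “everything within half a mile”, Oracle Spatial also provides the SDO_WITHIN_DISTANCE operator; here is a sketch against the example tables (it assumes a spatial index exists on the location column):

```sql
-- Restaurants within half a mile of interstate I795
SELECT P.poi_name
FROM us_interstates I, us_restaurants P
WHERE I.interstate = 'I795'
  AND SDO_WITHIN_DISTANCE(P.location, I.geom,
        'distance=0.5 unit=mile') = 'TRUE';
```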

We can not only look for distances relative to a point or a line (as we did with the interstate), but we can also draw shapes around an object and look for things that fall inside or outside the shape. For example, if we get the GPS points for a park, we can draw a shape that defines the park using latitude and longitude points. We can then look for related objects inside the park, outside the park, or within a few feet of the park. This helps police look for crimes that happen around a park and react appropriately. In the database we define an object with spatial data and draw a shape around the object. A simple way of doing this is a bounding box. The code to do this would look like

SQL> INSERT INTO  USER_SDO_GEOM_METADATA  VALUES
(
  'CUSTOMERS',      -- TABLE_NAME
  'LOCATION',       -- COLUMN_NAME
  SDO_DIM_ARRAY     -- DIMINFO attribute for storing dimension bounds, tolerance
  (
    SDO_DIM_ELEMENT
    (
      'LONGITUDE',  -- DIMENSION NAME for first dimension
      -180,         -- SDO_LB for the dimension
      180,          -- SDO_UB for the dimension
      0.5           -- Tolerance of 0.5 meters
    ),
    SDO_DIM_ELEMENT
    (
      'LATITUDE',   -- DIMENSION NAME for second dimension
      -90,          -- SDO_LB for the dimension
      90,           -- SDO_UB for the dimension
      0.5           -- Tolerance of 0.5 meters
    )
  ),
  8307              -- SRID value for specifying a geodetic coordinate system
);
You can define a data type as
  • Point
  • Line string
  • Polygon
  • Polygon with a hole
  • Collection (a combination of all of the above)
  • Compound line string
  • Compound polygon
  • 3d Composite surface
  • 3d Simple solid
  • 3d Compound solid
  • 3d Collection (a combination of all 3d objects)
When you define a spatial object it uses the SDO_GEOMETRY structure. This structure contains an SDO_GTYPE that defines whether the object is 2d or 3d as well as the data type (0 = Uninterpreted type, 1 = Point, 5 = Multipoint, 2 = Line, 6 = Multiline, 3 = Polygon/surface, 7 = Multipolygon/multisurface, 4 = Collection, 8 = Solid, 9 = Multisolid). An entry of 2001 would be a 2d object, designated by the 2, that is a single point, designated by the 1. If this entry were 2002 it would be a 2d object that is a series of points forming a line, designated by the second 2. The SDO_SRID attribute specifies the spatial reference system, or coordinate system, for the geometry. We can have a relative coordinate system or use latitude and longitude for coordinates. The SDO_POINT attribute specifies the location of a point geometry, such as the location of a customer. This gives us a reference point to work from, and the rest of the data is relative information based on the SDO_SRID. For example, we can draw a polygon defining a park starting at the northwest corner of the park. The SDO_POINT will provide the northwest corner of the park. The SDO_ELEM_INFO and SDO_ORDINATES attributes describe the polygon around the park. For more detailed examples, look at Chapter 3 of Pro Oracle Spatial for Oracle Database 11g.

We are not going to go into deep detail on how to program Spatial. It is recommended that you look at

It is important to note that Spatial is an optional package that runs on the Enterprise Edition database. If you are going to run this in the cloud you need to use the High Performance Edition or Extreme Performance Edition. If you are going to run this on IaaS you need to purchase the option on top of your database license, and the processor metrics need to match. For example, if you run on a 2 virtual core system in the cloud, you need a 2 virtual core license for Spatial as well. You cannot run Spatial on Amazon RDS because they disable this feature.

In summary, Spatial and Graph is an optional package that helps you do location queries against a database. Spatial is not unique to Oracle, but the structures and select statements typically do not cross database types; it does, however, work with products like Golden Gate to replicate data to other databases' spatial structures. Spatial is a very powerful package that simplifies select statements that would otherwise be very complex: finding the distance between objects, distances from a line (a highway, for example), or distances from a polygon (a park, for example). If your application needs Spatial you need to select the High Performance or Extreme Performance editions.

Links for 2016-06-27 [del.icio.us]

Categories: DBA Blogs

Out for a while

Anthony Shorten - 20 hours 39 min ago
Due to some medical issues I will not be posting till September this year. Thank you for your patience.

Pella Showcases Success with Oracle Service Cloud at Call Center Week 2016

Linda Fishman Hoyle - Mon, 2016-06-27 15:40

The Contact Center Week Conference and Expo is going on in Las Vegas this week. The Oracle Service Cloud team is demonstrating how its modern service solutions help organizations deliver exceptional customer experiences. They are also hosting a session with Pella Corporation, which will showcase success with Oracle Service Cloud. Pella is a leader in technology and product innovation as a window and door manufacturer.

Pella will share how Oracle Service Cloud has helped its contact center agents and field technicians deliver consistent, high-quality customer experiences across channels including web, phone, chat and social media, as well as during in-person appointments. According to Rick Hassman, CIO, Pella, “Having these capabilities provides an experience that our competition can’t deliver.”

You can read more about Pella’s story in this Forbes OracleVoice article.

datapump export using DBMS_DATAPUMP

Tom Kyte - Mon, 2016-06-27 11:06
Good afternoon, I am a beginning programmer. I had a problem with DBMS_DATAPUMP.data_filter: the data_filter does not work. create PROCEDURE exp_tables_w_qfilter ( p_schema_name IN VARCHAR2, -- 'LTP' p_table_name IN VARCHAR2, -- 'AUDITTABLE_TEST' p_t...
Categories: DBA Blogs

Transpose Rows into Columns

Tom Kyte - Mon, 2016-06-27 11:06
I have a table like: Client Process Date Status A B 21-june Y C D 22-june N A B 22-june N...
Categories: DBA Blogs

2 highest salary department wise ..alternate method

Tom Kyte - Mon, 2016-06-27 11:06
<b>select * from employees e1 where :parameter=(select count(distinct e2.salary) from employees e2 where e1.salary<=e2.salary);</b> --<i>hi can anyone explain me this query .i know this can be solve using rank,dense_rank but confused while ...
Categories: DBA Blogs

Services -- 4 : Using the SERVICE_NAMES parameter (non-RAC, PDB)

Hemant K Chitale - Mon, 2016-06-27 09:55
In my previous blog post, I have demonstrated using DBMS_SERVICE to create and start services.

But there is another way: using the SERVICE_NAMES parameter.  Going by most Google references, SERVICE_NAMES seems to be more popular than DBMS_SERVICE.

Can it be used in a Pluggable Database?  Let's try.

[oracle@ora12102 Desktop]$ . oraenv
ORACLE_SID = [oracle] ? CDB1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ora12102 Desktop]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 27 22:39:59 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1644167168 bytes
Fixed Size 2925024 bytes
Variable Size 973082144 bytes
Database Buffers 654311424 bytes
Redo Buffers 13848576 bytes
Database mounted.
Database opened.
SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL>

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 27-JUN-2016 22:43:36

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=14129))
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$


SQL> connect system/oracle@PDB1
Connected.
SQL> show con_id

CON_ID
------------------------------
3
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> alter system set service_names='NEW_SVC_1,NEW_SVC_2';
alter system set service_names='NEW_SVC_1,NEW_SVC_2'
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database


SQL> !oerr ora 65040
65040, 00000, "operation not allowed from within a pluggable database"
// *Cause: An operation was attempted that can only be performed in the root
// container.
// *Action: Switch to the root container to perform the operation.
//

SQL>


So, apparently, ORA-65040 means that we can't define SERVICE_NAMES for a PDB.

SQL> connect sys/oracle as sysdba
Connected.
SQL> show con_id

CON_ID
------------------------------
1
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL> alter system set service_names='NEW_SVC_1,NEW_SVC_2';

System altered.

SQL>

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 27-JUN-2016 22:53:56

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=14129))
Service "NEW_SVC_1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "NEW_SVC_2" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$ tnsping NEW_SVC_1

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 27-JUN-2016 22:54:57

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = NEW_SVC_1)))
OK (30 msec)
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@new_svc_1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 27 22:55:04 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied


Enter user-name: system/oracle@new_svc_1
Last Successful login time: Mon Jun 27 2016 22:44:13 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
1
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL>


The new services are actually running in the ROOT, not in the PDB. This is unlike my earlier blog post, where I used the new services to connect to the PDB.
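You can confirm where a session has landed by querying the session context from within the connection itself. A minimal check, using standard SYS_CONTEXT USERENV attributes:

```sql
-- Confirm the container and service for the current session
SELECT SYS_CONTEXT('USERENV', 'CON_NAME')     AS container,
       SYS_CONTEXT('USERENV', 'SERVICE_NAME') AS service
  FROM dual;
```

When connected via NEW_SVC_1 as shown above, the container reported is CDB$ROOT, confirming that the service belongs to the root and not to PDB1.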

Therefore, for a PDB environment, I'd use DBMS_SERVICE as I've demonstrated here and here.
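For reference, here is a sketch of creating and starting a PDB-local service with DBMS_SERVICE. It must be run while connected to the PDB, and the service and network names below are placeholders, not names from this post:

```sql
-- Run while connected to the PDB, e.g.:  connect system/oracle@PDB1
BEGIN
  -- CREATE_SERVICE registers the service in the PDB's data dictionary
  DBMS_SERVICE.CREATE_SERVICE(
    service_name => 'pdb1_svc',
    network_name => 'pdb1_svc');
  -- START_SERVICE makes the service register with the listener for this PDB
  DBMS_SERVICE.START_SERVICE(service_name => 'pdb1_svc');
END;
/
```

A service created this way is owned by the PDB, so sessions connecting through it land in the PDB rather than in CDB$ROOT.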
.
.
.



Categories: DBA Blogs

Digital Engagement: What Every Business Needs to Know

WebCenter Team - Mon, 2016-06-27 08:13

The post below, originally authored by Martin Cookson, Director of Mobility at Oracle EMEA and posted on the Oracle EMEA blog, does a great job of describing the challenges organizations face when it comes to digital experience and engagement, and what you can do to ensure your digital engagement strategy is successful. Martin's blog post also sets the stage nicely for topics we'll be discussing in the coming weeks and months around how you can use digital experience solutions in the cloud to modernize employee engagement, customer experience and applications infrastructure. We've got a lot of great use cases around these topics we can't wait to share with you, so please stay tuned!

Author: Martin Cookson, Director of Mobility at Oracle EMEA

Digital is throwing up new challenges and opportunities for businesses. On the one hand companies face challenges from disruptive ‘digitally native’ competitors, rapidly taking significant market share, or worse: redefining the traditional market entirely. Take that now classic example of Amazon, which revolutionized online retail with 1-Click Ordering and has subsequently launched Amazon Web Services. On the other hand, companies are finding that they can create significant new revenue streams and enter new markets through the use of digital technologies within their traditional businesses. Chemicals company Monsanto, for example, has branched out into the delivery of data science technologies for the agriculture sector.

The digital revolution is also an opportunity for any business to transform its own operations and in doing so, find new markets, develop new business models, and engage with consumers in innovative ways. To do this, however, companies must understand the nature of digital engagement and the possibilities that it affords.

Digital engagement in action

Digital engagement is of course driven by leveraging technology but success is about ensuring positive outcomes. It encompasses creating new revenue streams, seeking out new business opportunities, finding a competitive edge, creating efficiencies, and better serving customers.

One great example of an established business benefiting from digital engagement is Floyd Medical Center in the US, which is using its new web experience management platform to improve the way it communicates. Floyd Medical Center today uses its web presence as an important community resource and marketing tool as it faces up to the challenge of increased competition in the region.

The healthcare system also includes an intranet that provides important information and resources for employees across the organization, as well as for departmental teams. Employees can now securely access the intranet, which is updated with new content daily, either in the office or remotely; while improved search and segmentation capabilities enable employees to find internal content more quickly. The result is a more effective workforce better able to service customers and deliver a competitive edge.

One of Europe’s largest and busiest airports, meanwhile, is using beacon technology to track travellers across its premises. Not only can the airport know exactly where an individual is at any time, but it can use the data it gathers to have new interactions with people at every stage of their journey. This enables it to target people with tailored information, whether it’s marketing messages in duty free, hotel and taxi recommendations at arrivals, or relevant advice at check-in or immigration.

Finally, News Limited in Australia has overhauled its publication systems and processes to remain competitive in the digital age; particularly to provide content designed for mobile devices and to introduce new services, such as paywalls for premium content. Its new digital content management platform allows the company to publish stories in less than 90 seconds, compared to 15 minutes in the past, allowing the company to stay on top of breaking news in real-time. News Limited has also added new features, such as story and gallery wizards and intelligent, internal search and automated editorial-content enhancement functions that have increased the quality of content and enabled greater customer satisfaction and higher subscription numbers.

Intelligent, in-the-moment customer experiences

So what are these businesses doing that sets them apart from their competitors? Or to put this question another way: what can businesses do to ensure their digital engagement strategies are successful?

First, businesses must adopt a mobile-first strategy and engineer their services from the outside in (i.e. with mobile devices and end-user needs in mind). Businesses should then look to create value and loyalty, while reducing the cost of operations, by building intelligent, in-the-moment experiences across different channels simultaneously, in context and collaboratively. This is all about knowing the user: not just who they are and what they like, but what they are doing now, where they are, and who and what they are near.

This is why the above-mentioned airport is enjoying such success with its beacon technology: it enables a service that is all about the customer at a precise moment in time, and it meets a need, in context and through the ideal engagement channel. This is what digital engagement is all about: exceptional customer experiences through intuitive services that build customer engagement and loyalty.

Enabling digital engagement

Any business serious about creating disruptive digital engagement needs to adopt a mature digital engagement platform. The agile nature of digital engagement means that any such platform must be based on cloud computing technology. This is because only the cloud can deliver the flexibility, integration capabilities and speed businesses need to innovate and take engagement models to market before their competitors.

The cloud is also where many of the services digital engagement platforms leverage are based, such as Facebook, Google Maps, Apple Notifications and Twitter, and it makes sense to build the platform as close to these services as possible. Finally, the cloud delivers the cost-effective business models and appropriate operations models (for example, DevOps) required to make digital engagement a success.

Today’s cloud for tomorrow’s needs

However, cloud platforms are not all created equal. As businesses look to enable digital engagement, they must select cloud platforms capable of accelerating their digital transformation, integrating core business systems with modern engagement and delivery platforms, and giving them a competitive edge. This requires a holistic digital engagement platform comprising middleware and database software that delivers against the core areas of digital engagement, such as mobile, the Internet of Things, content and collaboration, digital web experience and business process agility. These are all elements that tie directly to building and deploying transformational omni-channel customer experiences.

It is crucial to point out that the right digital engagement platform is not simply an enabling technology: it is a strategic engine of innovation and absolutely essential to creating compelling customer services. The right cloud platform provides application developers within businesses with the ability to test and develop new and innovative customer services in low-cost, intuitive cloud environments, using pre-configured development tools. This approach dramatically reduces the cost and time associated with innovation.

Moreover, a hybrid cloud approach, which brings together both public and private cloud elements in combination with on-premises systems, enables businesses to rapidly scale up applications that have proven successful within the test environment. It is an ideal approach for businesses that, for regulatory or policy reasons, have not been able to move to the public cloud as fast as they would like. In today's environment all businesses need to think of themselves as start-ups and look to deploy cloud platforms that help them innovate at speed and low cost, whether through public, private or hybrid solutions.

No one has a natural right to the digital future. Businesses will have to fight hard, be brave and experiment. The rewards are there to be won, but those who have no stomach for digital transformation will find themselves in a fight for survival.
