Feed aggregator

Introduction to Oracle REST Data Services

DBA Scripts and Articles - Tue, 2017-11-07 03:06

Presentation: First, let’s talk about REST APIs. REST APIs are very popular nowadays because they are easily accessible from any client able to make HTTP(S) requests. The beauty of this is that you can access different systems through these REST APIs independently of how the system is built in the backend. For example I can … Continue reading Introduction to Oracle REST Data Services

The post Introduction to Oracle REST Data Services appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Performance comparison with delete and insert into backup table or using Oracle 12c archival feature

Tom Kyte - Tue, 2017-11-07 03:06
Hi, Which of the below-mentioned approaches should perform better? The Update vs Insert/Delete both take almost equal time ... Is this correct or am I doing something wrong? I know about the limitation of Ora archive as well ...
Categories: DBA Blogs

The cost of the query is too high: it is taking around 50 hours to run

Tom Kyte - Tue, 2017-11-07 03:06
When I run the below query, it takes around 50 hours in the dev environment. The same query completes in another environment within a minute. The data is the same in both environments. Any thoughts? delete FROM uet_extract_clinical uec W...
Categories: DBA Blogs

SQL to identify duplicates of data groups.

Tom Kyte - Tue, 2017-11-07 03:06
Hi, I have a peculiar requirement to identify duplicates of data groups. So here is a sample data set: PI BU PR AC ---------- ---------- -------------------- ---------- 1001 100 PR1 ...
Categories: DBA Blogs

UNIQUE LOCAL (Partitioned) Index

Hemant K Chitale - Mon, 2017-11-06 21:44
It is easy to create a default GLOBAL Index that is defined as a Unique Index on a column (or composite of columns) containing unique values.

But what if you have a Partitioned Table and want to create a LOCAL (i.e. equi-partitioned with the table) Index as a Unique Index ?  Are there any constraints ?

Let me demonstrate with a Partitioned Table listing Users by Region Code.

SQL> create table users
2 (region_code varchar2(3),
3 username varchar2(30),
4 account_status varchar2(32),
5 created date,
6 profile varchar2(128))
7 partition by range (region_code)
8 (partition a_m values less than ('N'),
9 partition n_r values less than ('S'),
10 partition s_z values less than (MAXVALUE))
11 /

Table created.

SQL>
SQL> insert into users
2 select substr(username,1,3), username, account_status, created, profile
3 from dba_users
4 /

39 rows created.

SQL>
SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats('','USERS');

PL/SQL procedure successfully completed.

SQL>
SQL> col partition_name format a30
SQL> select partition_name, num_rows
2 from user_tab_partitions
3 where table_name = 'USERS'
4 order by partition_position
5 /

PARTITION_NAME NUM_ROWS
------------------------------ ----------
A_M 18
N_R 9
S_Z 12

SQL>


I now verify that I can create a GLOBAL (non-partitioned) Unique Index on USERNAME.

SQL> create unique index users_username_u1 on users(username) global;

Index created.

SQL> drop index users_username_u1;

Index dropped.

SQL> create unique index users_username_u1 on users(username);

Index created.

SQL> drop index users_username_u1;

Index dropped.

SQL>


I now verify that I can create a Non-Unique LOCAL Index (being equi-partitioned with the table, the index is partitioned by REGION_CODE).  By default, the Index Partition Names inherit from the Table Partition Names.

SQL> create index users_username_l1 on users(username) local;

Index created.

SQL> select partition_name, num_rows
2 from user_ind_partitions
3 where index_name = 'USERS_USERNAME_L1'
4 order by partition_position
5 /

PARTITION_NAME NUM_ROWS
------------------------------ ----------
A_M 18
N_R 9
S_Z 12

SQL>
SQL> drop index users_username_l1;

Index dropped.

SQL>


I've proven (with the GLOBAL Index) that USERNAME is Unique across the whole table.  Can I create a Unique LOCAL Index on this column ?

SQL> create unique index users_username_u_l1 on users(username) local;
create unique index users_username_u_l1 on users(username) local
*
ERROR at line 1:
ORA-14039: partitioning columns must form a subset of key columns of a UNIQUE index


SQL>


Note the Error Message. The Partition Key must be a subset of the Unique Index columns.  Let me try adding the Partition Key.  (In my example table, the Partition Key is a single column -- it could be a composite of multiple columns.  In that case, all the columns of the Partition Key must form a subset of the Unique Index.)

SQL> create unique index users_rc_un_u_l1 on users(region_code, username) local;

Index created.

SQL> select partition_name, num_rows
2 from user_ind_partitions
3 where index_name = 'USERS_RC_UN_U_L1'
4 order by partition_position
5 /

PARTITION_NAME NUM_ROWS
------------------------------ ----------
A_M 18
N_R 9
S_Z 12

SQL> drop index users_rc_un_u_l1;

Index dropped.

SQL> create unique index users_un_rc_u_l1 on users(username, region_code) local;

Index created.

SQL> select partition_name, num_rows
2 from user_ind_partitions
3 where index_name = 'USERS_UN_RC_U_L1'
4 order by partition_position
5 /

PARTITION_NAME NUM_ROWS
------------------------------ ----------
A_M 18
N_R 9
S_Z 12

SQL>

It doesn't matter if I put the Partition Key (REGION_CODE) before or after the target column(s) (USERNAME) when I create the LOCAL Unique Index.  What is necessary is that the Partition Key be a subset of the Unique Index definition.

Categories: DBA Blogs

SQL Profile to fix slow inserts

Bobby Durrett's DBA Blog - Mon, 2017-11-06 14:27

I was on call Sunday and got paged about a job that normally runs for 10 to 15 minutes but had run for 5 hours already. I used the coe_xfr_sql_profile.sql script from SQLT to speed up an insert statement that was part of the job. We restarted the job and it completed in a few minutes.

I have written a number of posts about the use of coe_xfr_sql_profile.sql. Sunday’s issue was most similar to the post on using hints with coe_xfr_sql_profile.sql.

The first thing I did was look at the execution history of the problem insert statement which had sql_id = ‘ba9w9cjy87hd8’. I used my sqlstat.sql script.

SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
ba9w9cjy87hd8      3341942963 05-NOV-17 07.00.03.091 AM                2         1302824.53        1162165      572.8985                  0                  2.271                61.1125             6077761               1021                    8.5
ba9w9cjy87hd8      3341942963 05-NOV-17 10.00.23.889 AM                3          846842.04     844443.333       108.995                  0                      0                      0          4475478.33         196.333333             5.66666667

Even though this job runs every weekend and we keep 6 weeks of history in the AWR, ba9w9cjy87hd8 only showed up yesterday. I checked the SQL and it did not have any constants. It uses bind variables, so the same sql_id should apply every week. My guess is that it normally runs so fast that it did not get recorded in the AWR on previous weeks, but this week it chose a terrible plan. I see this a lot with SQL statements that have bind variables. I think that the bad execution plan gets chosen based on some abnormal bind variable value and then it sticks in memory.
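
For reference, here is a minimal sketch of the kind of AWR history query that sqlstat.sql runs (not the actual script; it assumes the standard DBA_HIST views and a Diagnostics Pack license):

select ss.sql_id, ss.plan_hash_value, sn.end_interval_time,
       ss.executions_delta executions,
       ss.elapsed_time_delta/nullif(ss.executions_delta,0)/1000 avg_elapsed_ms
from dba_hist_sqlstat ss
join dba_hist_snapshot sn
  on sn.snap_id = ss.snap_id
 and sn.dbid = ss.dbid
 and sn.instance_number = ss.instance_number
where ss.sql_id = 'ba9w9cjy87hd8'
order by sn.end_interval_time;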

The other thing I noticed while looking at the session for this job is that it was not using any parallel processes. I looked at the plan and there were no parallel steps. This system uses a lot of parallel query so I suspected that a parallel plan was the normal plan and the bad plan somehow came out with serial execution.

I checked our test databases to see if they had any execution history for this SQL and they did not. So, I was stuck trying to get a better plan and then using a SQL profile to force that plan on the insert statement. I ended up running the explain plan statement without giving the bind variable a value and it came out with a parallel plan. I used the outline hint from that plan to force the insert statement to run the parallel plan against a test database. I ran the insert with a rollback statement so it wouldn’t commit the new rows. Also I picked a bind variable value from the production database’s AWR to run with my test query.
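
Roughly, the pattern looked like this (a sketch with invented table names, not the actual statement):

-- get a plan without peeking at a bind value
explain plan for
  insert /*+ append */ into big_target
  select * from big_source where source_id = :b1;

-- the '+OUTLINE' format dumps the full set of hints that reproduce the plan
select * from table(dbms_xplan.display(format => '+OUTLINE'));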

Next I decided to run my test insert statement against production with the rollback statement. The first time I tried the statement it hung on a TM enqueue. I think this was because the insert was using an append hint and the running job was holding a TM lock. After we killed the production job I could run my test insert with the rollback statement.

I had to get the parallel plan into memory or into the AWR so that coe_xfr_sql_profile.sql would pick it up based on the plan hash value. It felt weird running an insert statement for a production table but I tested it on a test database first and made sure it would be rolled back.

I ran this select after the insert and rollback to get the plan for the insert in production:

select * from table(dbms_xplan.display_cursor(null,null,'ALL'));

Part of the output of the select included the plan hash value of the plan that my test insert used:

Plan hash value: 3809184161

Finally, I forced the good plan, which had hash value 3809184161, onto the sql_id ba9w9cjy87hd8 using coe_xfr_sql_profile.sql. Then the job ran in a few minutes.
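
For reference, coe_xfr_sql_profile.sql is driven by those two values. The invocation looks roughly like this; it generates a second script, named after the sql_id and plan hash value, which you run to actually create the profile:

SQL> @coe_xfr_sql_profile.sql ba9w9cjy87hd8 3809184161
SQL> @coe_xfr_sql_profile_ba9w9cjy87hd8_3809184161.sql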

Bobby

Categories: DBA Blogs

Introduction to Oracle Developer Cloud Service Issue Tracking REST Interfaces

Shay Shmeltzer - Mon, 2017-11-06 13:38

The task tracking system in Oracle Developer Cloud Service (DevCS) helps your team manage your development priorities and process. DevCS offers a simple web interface for working with the system. However, in some cases you might want to build your own interfaces to interact with the issues. For example, you might want to build a system for end-users to report bugs in your app without giving them direct access to the DevCS web interface. In the August '17 update of DevCS we introduced a set of REST services that let you build a custom interface to interact with our issues repository.

The official documentation for the DevCS REST services is here.

I wanted to share some tips to help you get this going in your project. The results are in this short video demo, and the details are below.

Figuring Out The End Points

The documentation gives you the basic end-points you should be calling, but it took me a little bit of time to figure out the full URL to the end point. Turns out the URL is composed in the following way:

https://server/org-id/rest/org-id+project-id/issues/v2/issues

The first parts (server/org-id) are quite easy to get - just copy them from the URL of your project when you look at it in your browser.

The org-id+project-id part is something you can get by looking at the details of your Maven repository URL: what you are looking for is the part just before the /maven/ at the end.

Note that in some projects this will also include a numeric value appended to the project name. Something like developer-oracletemplates_db-oss-devops_20266.

In the video sample below, the resulting URL for the REST service that returns the list of issues currently in the system ended up being:

https://myserver/developer-oracletemplates/rest/developer-oracletemplates_adf1221/issues/v2/issues
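
With that URL in hand, listing the issues is just an authenticated GET, for example with curl:

curl -u shay@oracle.com https://myserver/developer-oracletemplates/rest/developer-oracletemplates_adf1221/issues/v2/issues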

Creating New Issues

One of the useful services is the /issues/v2/issues/create-form service. It returns a JSON file that you can edit to specify information about a new task that you want to create.

Note that the file starts with {"createIssue":{"links":.... Before you use the file to insert a new issue, you'll need to remove the {"createIssue": at the start and the corresponding } at the end of the file. Only then can you use it to submit the POST operation to create an issue.

In the video I used the following command to create the issue in the DevCS:

curl -X POST -u shay@oracle.com https://myserver/developer-oracletemplates/rest/developer-oracletemplates_adf1221/issues/v2/issues/ -d@issue.json -H 'Content-type:application/json'

(the -d allows you to specify the name of the file with the new issue, and the -H specifies the content format).

Now that you have access to the information you can create new systems on top of it using your favorite development tool. At the end of the video you can see a simple issue system I built with Oracle Visual Builder Cloud Service - more on that in a future blog entry.

 

Categories: Development

nVision Performance Tuning: 6. Logging Selector Usage

David Kurtz - Mon, 2017-11-06 13:09
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

Static selectors are tracked by entries in the PSTREESELCTL table.  It is maintained after the tree is extracted to the selector table.  The version number on PSTREESELCTL is compared with the corresponding version number on the PSTREEDEFN to determine whether the extract of the tree to the selector table is still valid, or whether it needs to be reextracted because the tree has been updated.  Selectors that do not have an entry in PSTREESELCTL are therefore dynamic.
Static selectors are left in the PSTREESELECTnn table after the report completes so that they can be reused.  However, many customers periodically create new effective dated versions of many trees, and so static selectors on old effective dated versions of the tree will accumulate as there is nothing to purge them.
Dynamic selectors are extracted every time the report runs.  They should normally be deleted by nVision before the report finishes.  However, if a report crashes, dynamic selectors can be left behind.  That creates a number of problems:
  • The size of the selector tables will tend to increase over time as old static and dynamic selectors are left behind.
  • That in turn affects the statistics on these tables.  The range of values for SELECTOR_NUM will become wider faster than the number of rows in the table grows.  The minimum value will either be the oldest static selector number, or the oldest dynamic selector left after a crash.  The maximum value will be the last selector number inserted when statistics were collected.
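As an aside, a quick way to spot the dynamic selectors left in a selector table is to look for selector numbers with no corresponding PSTREESELCTL entry.  A sketch, using PSTREESELECT10 as an example:

SELECT DISTINCT s.selector_num
FROM   sysadm.pstreeselect10 s
WHERE  NOT EXISTS(
       SELECT 'x'
       FROM   sysadm.pstreeselctl c
       WHERE  c.selector_num = s.selector_num);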
Therefore, it is useful to be able to track creation and deletion of dynamic selectors by the various nVision reports and queries.  I have therefore created a logging table PS_NVS_TREESLCTLOG (see nvision_dynamic_selectors.sql), a PL/SQL package XX_NVISION_SELECTORS and compound DML triggers on every tree selector table.
The column names in the log table have been chosen for compatibility with the PeopleSoft data dictionary, so that a record can be defined in Application Designer.
Column Name       Data Type      Description
----------------  -------------  -----------------------------------------------------------------
SELECTOR_NUM      NUMBER         Unique identifier for tree selector records.
PROCESS_INSTANCE  NUMBER         PeopleSoft process instance number for nVision/Query.
LENGTH            NUMBER         Length of the tree selector.
NUM_ROWS          NUMBER         Number of rows inserted into the tree selector, counted by the
                                 AFTER ROW part of the triggers.
TIMESTAMP         TIMESTAMP      Time when the rows were inserted.
MODULE            VARCHAR2(64)   Module attribute of the session inserting selector rows.
APPINFO_ACTION    VARCHAR2(64)   Action attribute of the session inserting selector rows.
CLIENTINFO        VARCHAR2(64)   CLIENT_INFO attribute of the session inserting selector rows.
                                 This will include: the PeopleSoft Operator ID, the name of the
                                 application server or process scheduler domain, the name of the
                                 machine where the client process is executing, and the name of
                                 the client executable process.
STATUS_FLAG       VARCHAR2(1)    I=Selectors Inserted, S=Static Selectors Inserted,
                                 D=Selectors Deleted, X=Selectors Deleted and Partition Dropped.
TREE_NAME         VARCHAR2(18)   Name of the tree from which the selector was extracted, obtained
                                 by querying the statement from V$SQL.
OWNER_ID          VARCHAR2(8)    Schema under which the nVision report is run.
PARTITION_NAME    VARCHAR2(128)  Name of the partition where the selectors are stored.
JOB_NO            NUMBER         Database job number to collect statistics on the partition.
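
From that column list, a minimal DDL sketch of the logging table would be as follows (the authoritative definition is in nvision_dynamic_selectors.sql; storage and indexing are omitted here):

CREATE TABLE sysadm.ps_nvs_treeslctlog
(selector_num     NUMBER NOT NULL
,process_instance NUMBER
,length           NUMBER
,num_rows         NUMBER
,timestamp        TIMESTAMP
,module           VARCHAR2(64)
,appinfo_action   VARCHAR2(64)
,clientinfo       VARCHAR2(64)
,status_flag      VARCHAR2(1)
,tree_name        VARCHAR2(18)
,owner_id         VARCHAR2(8)
,partition_name   VARCHAR2(128)
,job_no           NUMBER
);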
All of the logic is kept in the PL/SQL package because it is common to the triggers on all the tree selector tables.  Insert triggers track population of selectors and delete triggers track the successful removal of dynamic selectors.  After row triggers track the selector number and count the number of rows inserted.  After statement triggers call the logging procedures.
CREATE OR REPLACE TRIGGER sysadm.pstreeselect10_insert
FOR INSERT ON sysadm.PSTREESELECT10 compound trigger
  l_err_msg VARCHAR2(100 CHAR);

  AFTER EACH ROW IS
  BEGIN
    sysadm.xx_nvision_selectors.rowins(:new.selector_num,:new.range_from_10,:new.range_to_10);
  EXCEPTION WHEN OTHERS THEN NULL;
  END after each row;

  AFTER STATEMENT IS
  BEGIN
    sysadm.xx_nvision_selectors.logins(10,'SYSADM');
  EXCEPTION WHEN OTHERS THEN
    l_err_msg := SUBSTR(SQLERRM,1,100);
    dbms_output.put_line('Error:'||l_err_msg);
  END after statement;
END;
/

CREATE OR REPLACE TRIGGER sysadm.pstreeselect10_delete
FOR DELETE ON sysadm.PSTREESELECT10 compound trigger
  l_err_msg VARCHAR2(100 CHAR);

  AFTER EACH ROW IS
  BEGIN
    sysadm.xx_nvision_selectors.rowdel(:old.selector_num);
  EXCEPTION WHEN OTHERS THEN NULL;
  END after each row;

  AFTER STATEMENT IS
  BEGIN
    sysadm.xx_nvision_selectors.logdel(10);
  EXCEPTION WHEN OTHERS THEN
    l_err_msg := SUBSTR(SQLERRM,1,100);
    dbms_output.put_line('Error:'||l_err_msg);
  END after statement;
END;
/
Once the decision to bear the overhead of triggers on the selector tables is made, there are various pieces of additional information that can be captured and stored in the logging table for very little additional overhead.  It is easy to record the current session attributes such as module, action, and client_info.  The process instance number is captured on startup and can then be read by the psftapi package.  The tree is identified by scanning V$SQL for the SQL that fired the triggers.
It is also possible to maintain statistics on the selector tables.
Purging Selectors

The selector log can be used to drive purging of selectors not cleared by nVisions that failed to complete.  The package includes a purge procedure to clear selectors that are not already marked as deleted, either when the logged process instance is no longer running or, if there is no known process instance, when more than 2 days have passed since the selector was inserted.  If the selector table is also interval partitioned, then the partition will be dropped.
A trigger on PSTREESELCTL (see pstreeselctl.sql) logs static selector maintenance.
The purge process is invoked by a trigger on PSPRCSRQST that fires when an nVision process's status is changed away from processing.  The selector purge process should also be scheduled to run daily.
The selector log itself is not purged as it contains useful information about tree usage.
Recommendations
  • If the tree selector tables are not partitioned, create a histogram on SELECTOR_NUM.  However, up to Oracle 11g this will stop being effective once the number of distinct values exceeds the maximum number of buckets, 254.  Dynamic selectors should be purged before this happens.  From Oracle 12c the maximum number of buckets is 8192, and hybrid histograms can be used.  A sketch of the gather call follows below.
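
For illustration, the gather call could look something like this (PSTREESELECT10 as an example; SIZE 254 is the 11g maximum, use a larger value from 12c):

exec dbms_stats.gather_table_stats(ownname=>'SYSADM', tabname=>'PSTREESELECT10', method_opt=>'FOR COLUMNS SELECTOR_NUM SIZE 254');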

Pass Summit 2017: how to bypass SQL Server security

Yann Neuhaus - Mon, 2017-11-06 09:49

Last Friday at PASS Summit 2017 I saw a very interesting session by Matt Martin about how to bypass, or ensure, SQL Server security.
Matt explained how to bypass SQL Server security with the complicity of your SQL Server DBA.
Msdb is the most powerful database for getting stuff done: mail, jobs… so let’s have a look at how to take power within a SQL Server instance.

Start a job under SQLAgentOperator role

SQLAgentOperator gives you the right to execute any job, even one that runs under sysadmin.
If your SQL Server DBA makes your login a member of this role, and a job runs a Stored Procedure that you are able to modify, you are the master of the box, because you can grant your login sysadmin.

Start a job from MSDB with a parameter

The goal here is to ask your DBA to create a Stored Procedure that takes a job name as a parameter, in order to simplify the process because you need to run several different jobs.

A procedure like this:

USE msdb
GO
CREATE PROC dbo.sp_run_job
  @name varchar(500)
  WITH EXECUTE AS OWNER
AS
  exec sp_start_job @job_name = @name

With this kind of Stored Procedure you can do whatever you want on the server.

Linked Server Elevated privilege

If you have a linked server that logs into a foreign server with a sysadmin account, ask your DBA to enable RPC Out for the linked server, for a good reason like decreasing the load on both servers.
If your DBA does it, you can execute this kind of script:

EXEC('Alter server role sysadmin add member [ad\sts]') at linked_Server1

Nice, isn’t it?

I need a sandbox database that I’m DBO on

The goal for this one is to convince your DBA to create a sandbox database for you on a production server, for a good reason like working with production data (a good reason?)…
Once that is done, ask him to schedule a Stored Procedure you wrote on a daily basis.
If he does, just alter your Stored Procedure with the following code:

ALTER SERVER ROLE SYSADMIN ADD MEMBER [ad\sts]

Another way to become quickly sysadmin!

Xp_cmdshell to take over the box

Xp_cmdshell runs under the SQL Server service account, which is often a local administrator of the box.
Ask your DBA to grant you permission on xp_cmdshell, arguing that you need to delete some files after importing them on the server.
If the account that runs SQL Server is a local administrator, you can add your account as an administrator of the server and do whatever you want…

EXEC xp_cmdshell 'net localgroup administrators ad\sts /add'

Oops…

How to close Loopholes

Never enable a business user for the SQLAgentOperator role.
Never use SA or another sysadmin account for a linked server; set it to a reader login with minimum permissions.
Never schedule a job that calls a Stored Procedure a business user has ALTER access to.
Never allow a parameterized job call in MSDB; always hardcode the job name to prevent modification of the job name.

Just give the necessary permissions!
Create a server trigger which sends an email to the DBA when somebody becomes sysadmin on the instance. That way the DBA can look at the new account immediately and see whether somebody is trying to squeeze through the system.
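
A minimal sketch of such a trigger, assuming Database Mail is already configured (the profile name and recipient are placeholders). Note that it fires on any server role membership change, which is arguably even more useful:

CREATE TRIGGER trg_alert_server_role_change
ON ALL SERVER
FOR ADD_SERVER_ROLE_MEMBER
AS
BEGIN
    -- capture the statement that changed the role membership
    DECLARE @evt XML = EVENTDATA();
    DECLARE @cmd NVARCHAR(MAX) =
        @evt.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)');

    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA',                  -- placeholder mail profile
        @recipients   = 'dba-team@yourco.com',  -- placeholder mailbox
        @subject      = 'Server role membership changed',
        @body         = @cmd;
END;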

 

The post Pass Summit 2017: how to bypass SQL Server security appeared first on Blog dbi services.

Oracle Utilities Defines Cloud Path for Utilities

Oracle Press Releases - Mon, 2017-11-06 09:30
Press Release
Oracle Utilities Defines Cloud Path for Utilities Trusted industry leader showcases cloud expertise, solutions at new Cloud for Utilities Summit this week in Washington, D.C.

Cloud for Utilities Summit, Washington DC—Nov 6, 2017

Today Oracle Utilities announced it is bringing the benefits of cloud to D.C. as part of the inaugural Cloud for Utilities Summit. The conference offers Oracle Utilities, a leading provider of cloud applications to utilities, the opportunity to put its cloud imperative into practice: that vendors and utilities must collaborate to deliver on the promise of the cloud—a new, two-way, customer-centric utility of the future.

With over 100 SaaS clients worldwide, Oracle Utilities delivers one of the largest platforms of Software as a Service (SaaS) applications for utilities and is rapidly expanding its cloud suite, which already includes applications for work and asset management, AMI, smart meter and device management, and analytics. And, with the acquisition of Opower in 2016, Oracle Utilities further expanded its cloud platform with SaaS utility customer engagement and energy efficiency solutions.

While some major technology providers’ cloud applications require buy-in to large, hosted technology or data platforms that are not truly SaaS, Oracle Utilities has taken a different approach—delivering targeted, true cloud solutions that address utilities’ most pressing challenges, while being flexible and modular to support utility evolution and growth.

“We are not going to simply put a cloud stamp on hosted solutions, that’s not what our clients expect from a trusted and committed technology partner. It’s important that our utility clients have access to true SaaS applications that will deliver the greatest benefit, and that they have our support in defining the best cloud strategy that takes advantage of all cloud options available,” said Rodger Smith, senior vice president and general manager, Oracle Utilities.

Oracle Utilities is committed to delivering the cloud benefits its clients need quickly. In the past year alone, three brand new SaaS applications were released to meet clients’ needs: Oracle Utilities Work and Asset Cloud Service, Oracle Utilities Operational Device Cloud Service, and Oracle Utilities Digital Self Service (DSS). DSS was born in the Cloud as the first joint application with Opower and allows utilities to deliver deep digital customer engagement while connecting seamlessly to back office systems.

“This is just the beginning, we’re about to change the SaaS CIS market, too,” Smith added.

Oracle Utilities Customer Cloud Service is set to release in the next 12 months, and it will be the first truly SaaS, market-leading customer information system built for the utilities industry, with a comprehensive meter-to-cash-to-customer solution, standardized implementation and Oracle-delivered solution maintenance. This service is set to tackle some of the biggest challenges for utilities by lowering both risk and costs for implementation projects and technology maintenance.

“We continue to hear from the industry that true SaaS solutions are increasingly attractive,” added Marisa Uchin, senior director, regulatory affairs for Oracle Utilities. “As the utility industry continues to transform into more customer-centric and data-driven, digitized businesses, we are seeing a recognition by both utilities and regulators alike that true SaaS solutions provide the responsiveness, agility and customer-centric benefits needed to master this transformation. We are excited to continue our close work with utility partners and regulators to facilitate cloud adoption in a way that delivers benefits for both utilities and customers.”

Rick Cutter, Managing Director, Cloud for Utilities, said, “We are very happy to have Oracle as a partner in this endeavor. Oracle continues to lead in technology and business models for the utility industry.”

Contact Info
Valerie Beaudett
Oracle
+1 650.400.7833
valerie.beaudett@oracle.com
About Cloud for Utilities

The Cloud for Utilities organization is the preeminent resource for cloud business models focused on the utility industry. The organization focuses on education, career advancement, and organizational success, and supports the advancement of cloud business models in the utility industry. The inaugural Cloud for Utilities Summit will be held Sunday, November 5th through Tuesday, November 7th at the Mayflower Hotel in Washington DC. To learn more about the Cloud for Utilities Summit, visit CloudForUtilities.org.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.  

Talk to a Press Contact

Valerie Beaudett

  • +1 650.400.7833

Need to convert Rows to Columns

Tom Kyte - Mon, 2017-11-06 08:46
How do I convert Rows to Columns? I have audit data as below: create table c_t (t_id varchar2(50), Change_Field varchar2(50), new_value varchar2(50), old_value varchar2(50), a_dt date); insert into c_t values('T1','Status','WIP','Pendin...
Categories: DBA Blogs

ORA-12899: value too large when inserting over DB link with different character sets

Tom Kyte - Mon, 2017-11-06 08:46
Hi Tom, We are using two databases with two different character sets: 11.2.0.1.0 - 64bit = AR8MSWIN1256 and 12.1.0.2.0 - 64bit = AL32UTF8. Trying to insert data from the 11.2 version into the 12.1 through a DBLINK gives 'ORA-12899: ...
Categories: DBA Blogs

ALTER SESSION ENABLE PARALLEL DML

Tom Kyte - Mon, 2017-11-06 08:46
Hi, I came across the below code snippet: DECLARE .. .. BEGIN .. .. EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML'; DELETE /*+ PARALLEL(TABLE_TEST, 4) */ FROM TABLE_TEST WHERE ID IN (SELECT ID FROM TABLE_2); END; Since EXECUTE ...
Categories: DBA Blogs

How to Create Datewise Monthly Sheet

Tom Kyte - Mon, 2017-11-06 08:46
Our environment is Database 10g 10.2.0.1.0, Forms 6i. Operating System: Windows Server 2008 R2. We have tables to record schedules for our distributors, like the following: create table distributor ( dcode int, distributor_name varchar2 (200)...
Categories: DBA Blogs

Is RAC one node option available in Standard Edition 2?

Tom Kyte - Mon, 2017-11-06 08:46
Hi Tom, According to Oracle Database Licensing Information User Manual, https://docs.oracle.com/database/121/DBLIC/editions.htm#DBLIC116, RAC one node option is not available in Standard Edition 2. However you can choose this option in the install...
Categories: DBA Blogs

How to Deserialize an HTTP POST Response

Tom Kyte - Mon, 2017-11-06 08:46
I am getting a response for a request (l_value) as below: <?xml version="1.0" encoding="utf-8"?> <boolean xmlns="http://tempuri.org/">false</boolean> How do I get the boolean value false and assign it to a variable lv_return? httpres...
Categories: DBA Blogs

Warning Event: <hostname.domain name>: <db name> DB - Failed logon attempts

Tom Kyte - Mon, 2017-11-06 08:46
OS_USER  HOST_NAME             TERMINAL  USERID  Login Date           CLIENT_PROGRAM_NAME                       Error message
oracle   hostname.domain name            SYSTEM  11-02-2017 14:54:05  sqlplus@hostname.domain name (TNS V1-V3)  ORA-1017: invalid user...
Categories: DBA Blogs

Oracle memory and processor requirement

Tom Kyte - Mon, 2017-11-06 08:46
Hello Tom, I'm new to databases and just want to understand the memory and processor requirements for a database server. Here, if I say that approx. 500-600 concurrent users are going to connect to a 12c database through an application, and the size of the database is ap...
Categories: DBA Blogs

Synchronize data from Oracle SE to Oracle EE

Tom Kyte - Mon, 2017-11-06 08:46
Dear All, Could anyone tell me whether data from Oracle SE can be synchronized to Oracle EE? Regards
Categories: DBA Blogs
