Feed aggregator

Database keys

Tom Kyte - Fri, 2016-11-11 00:26
Hi Tom, I am an Oracle beginner and I know integrity constraints. So please explain the concepts below: 1. What are candidate keys, super keys, prime attributes, and non-prime attributes? 2. What is the difference between a candidate key and a unique key? 3. How can we use ...
Categories: DBA Blogs

Auditing logons with V$SESSION.AUDSID in AWR

Tom Kyte - Fri, 2016-11-11 00:26
Hi Tom, I have got a request from an audit company to monitor logon/logoff events. Of course, this may be a really silly question, and maybe it really is. The trick is that they also want to track program and module (v$session). This information...
Categories: DBA Blogs

Materialized view

Tom Kyte - Fri, 2016-11-11 00:26
Hi, I am creating a materialized view using COMPLETE REFRESH. How do I see the progress of the operation (percentage complete, etc.)? v$session_longops is not showing it, as it depends on the operation type. Regards
Categories: DBA Blogs

Flashback Data Archive Limitation -- Please clarify

Tom Kyte - Fri, 2016-11-11 00:26
Hi Tom, I have observed a limitation with Flashback Data Archive. Once Flashback Data Archive is disabled for a table, all the information in the history table up to that point is lost. Once Flashback Data Archive is enabled again for the same table, ...
Categories: DBA Blogs

Oracle 12cR2: Pluggable database relocation

Yann Neuhaus - Thu, 2016-11-10 23:19

Here is, in my opinion, the most beautiful feature of the multitenant architecture. You know how I love Transportable Tablespaces. But here:

  • No need to put the source in read/only
  • No need to export/import the metadata logically
  • No need for any option: available even in Standard Edition

Standard Edition

I am in Standard Edition here in both source and target, no option required for this:

SQL*Plus: Release 12.2.0.1.0 Production on Thu Nov 10 13:40:05 2016
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production

Source: PDB1 on CDB1

On server opc1 I have a container database CDB1 with one pluggable database PDB1 where I create a new table:

23:40:20 (opc1)CDB1 SQL>alter session set container=PDB1;
Session altered.
23:40:20 (opc1)CDB1 SQL>create table DEMO as select current_timestamp insert_timestamp,instance_name from v$instance;
Table created.
23:40:21 (opc1)CDB1 SQL>insert into DEMO select current_timestamp,instance_name from v$instance;
1 row created.
23:40:21 (opc1)CDB1 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1

Export encryption key

I’m in Oracle Public Cloud where tablespaces are encrypted. To ship a pluggable database I must export the keys. Here is the query to get them:

23:40:23 (opc1)CDB1 SQL>select key_id from v$encryption_keys where creator_pdbname='PDB1';
 
KEY_ID
------------------------------------------------------------------------------
AWlnBaUXG0/gv4evS9Ywu8EAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

And I can filter with this query to export it:

23:40:23 (opc1)CDB1 SQL>administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1');
administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1')
*
ERROR at line 1:
ORA-28417: password-based keystore is not open

I can’t do that with an auto-login wallet.

23:40:23 (opc1)CDB1 SQL>select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER WALLET_TY
-------- -------------------------------------- ---------
FILE /u01/app/oracle/admin/CDB1/tde_wallet/ AUTOLOGIN

Let’s open the wallet with password:

23:40:23 (opc1)CDB1 SQL>administer key management set keystore close;
keystore altered.
23:40:23 (opc1)CDB1 SQL>administer key management set keystore open identified by "Ach1z0#d";
keystore altered.
23:40:23 (opc1)CDB1 SQL>select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;
 
WRL_TYPE WRL_PARAMETER WALLET_TY
-------- -------------------------------------- ---------
FILE /u01/app/oracle/admin/CDB1/tde_wallet/ PASSWORD

and re-try my export:

23:40:23 (opc1)CDB1 SQL>administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1');
keystore altered.

This file must be copied to the destination server. I did it with scp. You can also use dbms_file_transfer as you will need a database link anyway for the remote clone.
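
For reference, here is a minimal sketch of the dbms_file_transfer alternative (my addition, not part of the original test). It assumes a TMP_DIR directory object pointing to /tmp on both servers plus the CDB1 database link created below, and note that dbms_file_transfer can only copy files whose size is a multiple of 512 bytes:

  create directory TMP_DIR as '/tmp';
  begin
    -- pull the exported keystore from the source over the CDB1 DB link
    dbms_file_transfer.get_file(
      source_directory_object      => 'TMP_DIR',
      source_file_name             => 'cdb2pdb1.p12',
      source_database              => 'CDB1',
      destination_directory_object => 'TMP_DIR',
      destination_file_name        => 'cdb2pdb1.p12');
  end;
  /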

Import encryption key

On the destination server, where I have no user PDB yet in CDB2 (I’m limited to one PDB here without the multitenant option):

23:40:31 (opc2)CDB2 SQL>show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO

I have to import the encryption key:

23:40:31 (opc2)CDB2 SQL>administer key management set keystore open identified by "Ach1z0#d";
keystore altered.
 
23:40:31 (opc2)CDB2 SQL>administer key management import encryption keys with secret "oracle" from '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d";
keystore altered.

I’m now ready to relocate my PDB as I’m sure I’ll be ready to open it.

Database link

The remote clone is done through a DB link. I’ve a TNS entry named CDB1:

23:40:31 (opc2)CDB2 SQL>select dbms_tns.resolve_tnsname('CDB1') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('CDB1')
--------------------------------------------------------------------------------
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=opc1)(PORT=1521))(CONNECT_DAT
A=(SERVER=DEDICATED)(SERVICE_NAME=CDB1.opcoct.oraclecloud.internal)(CID=(PROGRAM
=oracle)(HOST=SE222.compute-opcoct.oraclecloud.internal)(USER=oracle))))
 
23:40:31 (opc2)CDB2 SQL>create database link CDB1 connect to C##DBA identified by oracle using 'CDB1';
Database link created.

DML on source

In order to show that the source doesn’t have to be read-only as in previous releases, I’m running the following inserts every 5 seconds:

23:40:44 (opc1)CDB1 SQL>commit;
Commit complete.
23:40:44 (opc1)CDB1 SQL>insert into DEMO select current_timestamp,instance_name from v$instance;
1 row created.
23:40:44 (opc1)CDB1 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
6 rows selected.

PDB remote clone

Here is the syntax.
I need to provide the master key of the source wallet.
RELOCATE is the new clause: the source PDB is relocated to the destination when the clone is opened.

23:40:48 (opc2)CDB2 SQL>create pluggable database PDB1 from PDB1@CDB1 keystore identified by "Ach1z0#d" relocate;
Pluggable database created.
23:41:08 (opc2)CDB2 SQL>

It took some time to ship the datafiles through the DB link, but the operation is online.
I was still inserting during this time:

23:41:04 (opc1)CDB1 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
10-NOV-16 11.40.49.454661 PM +00:00 CDB1
10-NOV-16 11.40.54.532699 PM +00:00 CDB1
10-NOV-16 11.40.59.614745 PM +00:00 CDB1
10-NOV-16 11.41.04.692784 PM +00:00 CDB1
 
10 rows selected.

Note that you need to be in ARCHIVELOG mode and use LOCAL UNDO to be able to do this, because synchronisation is done by media recovery when the clone is opened.
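
Here is a quick sketch of how to verify both prerequisites (my addition, not from the original demo):

  select log_mode from v$database;                 -- must be ARCHIVELOG
  select property_value from database_properties
   where property_name='LOCAL_UNDO_ENABLED';       -- must be TRUE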

Open the clone

Now, the theory is that when we open the clone, DML is quiesced on source during the recovery of the target and sessions can continue on the target once opened.

23:41:09 (opc2)CDB2 SQL>alter pluggable database PDB1 open;
alter pluggable database PDB1 open
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
23:41:26 (opc2)CDB2 SQL>

Bad luck. Every time I tested this scenario, the first open after the relocate fails in deadlock and the session on the source crashes:

23:41:09 (opc1)CDB1 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
10-NOV-16 11.40.49.454661 PM +00:00 CDB1
10-NOV-16 11.40.54.532699 PM +00:00 CDB1
10-NOV-16 11.40.59.614745 PM +00:00 CDB1
10-NOV-16 11.41.04.692784 PM +00:00 CDB1
10-NOV-16 11.41.09.773300 PM +00:00 CDB1
 
11 rows selected.
 
23:41:14 (opc1)CDB1 SQL> commit;
ERROR:
ORA-03114: not connected to ORACLE

It’s a good occasion to look at the traces.
We can see some messages about the recovery:

*** 2016-11-10T23:41:12.660402+00:00 (PDB1(3))
Media Recovery Log /u03/app/oracle/fast_recovery_area/CDB1/foreign_archivelog/PDB1/2016_11_10/o1_mf_1_24_2025109931_.arc
Log read is SYNCHRONOUS though disk_asynch_io is enabled!

These foreign archived logs are a new type of file that you will see in the FRA in 12.2.

So I lost my session on the source, and now if I try again it works:

23:42:20 (opc2)CDB2 SQL>alter pluggable database PDB1 open;
Pluggable database altered.
23:42:24 (opc2)CDB2 SQL>select * from DEMO;
 
INSERT_TIMESTAMP INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
10-NOV-16 11.40.49.454661 PM +00:00 CDB1
10-NOV-16 11.40.54.532699 PM +00:00 CDB1
10-NOV-16 11.40.59.614745 PM +00:00 CDB1
10-NOV-16 11.41.04.692784 PM +00:00 CDB1
 
10 rows selected.

All the inserts that were committed on the source are there.
Even with this deadlock bug (SR 3-13618219421), it’s the easiest and fastest way to migrate a database, with minimal downtime. Especially in Standard Edition, where transportable tablespace import is not enabled.
Without the deadlock bug, the sessions on the source are supposed to keep running, paused only during the recovery, and then continue on the destination.

Cet article Oracle 12cR2: Pluggable database relocation est apparu en premier sur Blog dbi services.

Oracle CHARTOROWID Function with Examples

Complete IT Professional - Thu, 2016-11-10 12:11
In this article, I’ll explain what the Oracle CHARTOROWID function is and show you an example. Purpose of the Oracle CHARTOROWID Function The purpose of the CHARTOROWID function is to convert a value from several character data types into a ROWID data type. The supported data types that this function can convert are: CHAR […]
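
The feed truncates the example; here is a minimal sketch of typical usage (the table and the ROWID literal are made up for illustration):

  select employee_id, last_name
    from employees
   where rowid = chartorowid('AAAQ+LAAFAAAACHAAA');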
Categories: Development

What a Difference a Year Makes

Linda Fishman Hoyle - Thu, 2016-11-10 11:05

A Guest Post by Oracle's CSO and GVP Jon Chorley, SCM Product Strategy (pictured left)

A lot can happen in a 12-month span. And I’m not talking about the US presidential election, BREXIT, or even the “battle of the long-time losers” which was the 2016 Baseball World Series. I’m talking about Oracle Supply Chain Cloud.

We delivered SCM Cloud as promised

When we delivered Release 11 in early 2016, we crossed a real watershed. I’m sure many of the attendees at OpenWorld 2015 were skeptical about our plan to roll out the most comprehensive and innovative built-for-the-cloud SCM suite.

But we did it. With applications as diverse as Order Management, Product Development, Manufacturing, and Planning and Logistics, our Supply Chain Cloud was a real option for customers who wanted to modernize and digitize aspects of their supply chain. It also was very appealing to those who wanted a comprehensive cloud solution, including supply chain across their entire business.

The result? We now have more than 1,000 customers who have purchased and are implementing our Supply Chain Cloud. What’s more, it’s part of almost every customer’s current IT planning process.

So how did this change the vibe among SCM customers at OpenWorld this year?

Every conversation that we in Development had with customers, partners, and analysts was not an “if” conversation, but a “when” or “how” conversation. The fear, confusion, and uncertainty from 2015 were replaced by a desire to learn, plan, and act in 2016. Even concerns over Apps Unlimited versus cloud had dissipated.

Customers recognized that Oracle had made an enormous investment in building an all-new SCM cloud solution that is ready when they are.

We are solidly in execution mode. We’re building a skilled and experienced ecosystem. We’re producing assets to accelerate and de-risk deployments. And as a result, we have assembled a community of happy and referenceable customers.

Customers got a preview of more innovation to come

Having crossed one watershed, we felt it was important to show that we’re on to the next with a steady stream of innovation. After all, that’s why people move to the cloud—they want an SCM application platform that innovates at the speed of their business.

We were pleased to demonstrate several planned new products targeted for Release 13, as well as aspects of the Adaptive Intelligent Supply Chain and Internet of Things.

I suppose that, having put our reputation on the line at OpenWorld in 2015, we did the same thing this year. But that’s just how we roll.

Why a Complete Cloud Suite Matters to Your Business

Linda Fishman Hoyle - Thu, 2016-11-10 10:37

A Guest Post by Oracle’s Lisa Schwartz, Cloud Business Group (pictured left)

We tell customers all the time that Oracle has the most complete, unified cloud suite in the industry. But do you know why having a complete suite is so important?

The simple answer is because your customers expect your business to act as one business, not separate departments with discrete jobs. Employees and partners expect a unified business too, and in some cases, regulatory environments require it as well.

Customer expectations are very high

In a 2016 Harvard Business Review article entitled “Focus on Keeping Up with Your Customers, Not Your Competitors,” the author says the greatest challenge businesses have today is keeping up with customer expectations, not with competitors. Think about it: we don’t compare the customer service of one company to that of its competitors, but to the best service we’ve received anywhere.

Silos can ruin the customer experience

If you’ve ever had an unsatisfactory customer experience, it may have been due to departmental silos and disconnected business processes. Here are a few examples of how that can play out:

Your Preferences: Let’s say you receive a marketing promotion from your communications provider addressed to MS. Jane Doe, but the monthly bill lists you as MR. Jane Doe. Not a big deal, you say? Studies show that churn rates increase when customers have the impression a company doesn’t know them even after many years of doing business together.

Your Time: If you’ve ever had a family member with a serious illness, you know how frustrating it can be when the insurance company delays reimbursement. It might take you multiple calls to get through to the right person, and you have to repeat the same information every time. During the next benefits open enrollment period, you might decide to look at other health insurance options.

Your Information: Or maybe your credit score goes down because a payment processor acquires your auto loan, but your account information isn’t transferred properly and your payments aren’t processed. Next time you’re in the market for a car, you might need to broaden your search based on financing terms.

Having multiple clouds from different providers is costly, whether in the form of integration work or disgruntled customers. These are just a few examples of why a complete and unified cloud suite matters. I’m sure you can think of many more.

ADF LOV Auto Suggest Functionality

Andrejus Baranovski - Thu, 2016-11-10 10:34
ADF Faces LOV components can be easily enabled with auto suggest functionality. This is a really powerful feature and should save the user time when searching for required list values.

Auto suggest can be configured to be invoked only when a certain number of characters is entered. I have configured the Job ID list to trigger auto suggest when two or more characters are entered; with a single character, no auto suggest is shown.


With two or more characters entered, auto suggest displays filtered LOV values. What is nice about it is that it automatically shows all the attributes included in the LOV list.


In the background, ADF BC executes SQL with a bind variable supplied by auto suggest.


The LOV is defined on the attribute in the standard way, nothing special here.


To enable the auto suggest behaviour, it is enough to drag and drop the ADF af:autoSuggestBehavior tag into the ADF Faces LOV component. Point the suggestItems property to the LOV binding’s suggestItems expression. The number of characters at which the auto suggest search starts is configured through the minChars property.
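
Since the screenshots are not reproduced here, this is a minimal sketch of the resulting page source (attribute names as described above; the JobId binding and component names are assumptions):

  <af:inputListOfValues id="jobIdLov"
                        value="#{bindings.JobId.inputValue}"
                        model="#{bindings.JobId.listOfValuesModel}">
    <!-- trigger suggestions from the LOV binding after 2 characters -->
    <af:autoSuggestBehavior suggestItems="#{bindings.JobId.suggestItems}"
                            minChars="2"/>
  </af:inputListOfValues>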


Download sample application - ADFLovAutoSuggestApp.zip

Employee Enablement with Oracle PaaS for SaaS – Human Resources

WebCenter Team - Thu, 2016-11-10 09:05

Authored by: Mitchell Palski, Fusion Middleware Specialist

The employee experience of your organization is more transparent than ever. If Public Sector agencies are expected to compete with Commercial employers, they have to make strong first impressions on their new-hires. By improving the employee’s initial experience, your Human Resources department can have an immediate influence on employee productivity and retention.

What do employees care about?

I wrote a blog post back in April 2016 called “Recruit Millennials with Innovative Oracle Cloud Projects” highlighting the fact that the younger generations of the American workforce are attracted to innovative projects that give them the chance to get visibility for their achievements. In a 2015 NY Times article, Swarthmore Professor Barry Schwartz writes:

“But most important, we need to emphasize the ways in which an employee’s work makes other people’s lives at least a little bit better (and, of course, to make sure that it actually does make people’s lives a little bit better).”

More so than in commercial industries, the social impact of a Government job has to be evident and visible in the outcomes of everyday responsibilities. The perennial value of our work is that it is supposed to be a dedication to the citizen. The more often your employees are reminded of the imperishable effects they are having on your organization’s beneficiaries, the more motivated they will be to succeed.

There is an important accompaniment to “visibility”, which is of course recognition and reward. Tony Schwartz – Chief Executive of The Energy Project – worked with the Harvard Business Review to conduct a study of more than 20,000 employees across the globe to examine which factors contribute to a retentive workplace (among other things):

“Feeling treated with respect made employees feel 55 percent more engaged and 110 percent more likely to stay at the company. Employees who felt the most recognized and appreciated were 100 percent more likely to stay with their organizations.”

Think about how you manage your employees today – as “Human Capital”. Sounds more like an allocation of your investment portfolio than an actual person, doesn’t it? Well, it goes two ways. You are invested in your employees and they are invested in you. In this article, I hope to expose some technology-driven strategies to strengthen and lengthen these integral connections.

What can we do?

Natively delivered out-of-the-box, Oracle Human Capital Management (HCM) Cloud Service delivers HR functionality such as:
  • Prospective talent management
  • Workforce management
  • Workforce compensation and payroll
  • Time and Labor, absence management
  • Employee benefits and wellness
  • Social media monitoring
  • Real-time and predictive analytics
All of these capabilities are critical to your business, but software has an inherent influence on how employees are viewed. To effectively manage our staff, we use employee ID numbers; we refer to employees as “resources”; we quantify, track, and analyze performance. My intent isn’t to discount the importance of these practices, but rather to supplement them with intangibles such as the following:
  • Formalize innovation project review processes
  • Provide customized training paths
  • Establish leadership development plans
  • Start an Employee of the Month program
  • Encourage social collaboration

Oracle HCM is a wonderful foundation for managing the business of managing your people and has a wealth of data that can be extremely helpful in structuring innovative programs for employees. 

How do we do it?

Oracle’s Platform as a Service (PaaS) is a comprehensive, integrated portfolio of platform services that allow your organization to innovate new services faster and more productively at a lower cost. Using the Oracle PaaS services, technical users can create extensions of Oracle HCM.

HCM1 exposes business objects and processes to other applications through the use of open standards-based technologies, including:
  • Extensible Markup Language (XML)
  • Simple Object Access Protocol (SOAP)
  • Business Process Execution Language (BPEL)
  • Web Services Description Language (WSDL)
  • XML schema definitions (XSD)

Take a look through the documentation and you will see how Oracle’s Fusion Applications are built with the purpose of being extended into your greater Enterprise. Oracle PaaS is making the development of application extensions ever easier, so it’s actually more important to understand the functional components of HCM than to have a highly technical staff. Platform tools allow your organization to focus on the processes, rules, and data model that you’ve constructed in your HCM system, rather than the technical underpinnings that will ultimately be relied on by your new services.

When it’s time to get technical and start building add-ons to your HCM, you can start by referencing the REST API that is available in the Fusion Application Documentation. The documentation even includes sample payloads to help speed up your development. From this point, you have the flexibility to either:
A) Leverage that REST API or
B) Transform that REST API into a SOAP service to be consumed by other applications

In cases like these, you can use the REST API to start developing right away:

  • Use Oracle Java Cloud Service (JCS) to build office-specific “microsites” that help deliver your users’ need-to-know updates and information
  • Connect to Mobile Cloud Service (MCS) to build a mobile app for traveling employees to submit Travel requests and Expense Reports
  • Leverage Process Cloud Service (PCS) for a custom application that handles HR policy management and approvals
  • Oracle Document Cloud Service (ODCS) includes Sites Cloud Service (SCS) for document collaboration workspaces

I don’t have the word count to go into all of the possibilities that Oracle PaaS can afford your organization, but I hope you walk away from this article with a kernel of knowledge: you are not limited to what comes out-of-the-box with any Oracle Fusion Application, including Oracle’s HCM Cloud. Your organization can enable seamless content delivery, social collaboration, and personalized LOB applications, without ever customizing your Oracle SaaS application.

Remember, there is a purpose to all of this. HCM tools are perfect for managing personnel records and resources, but you manage people. Inculcate your people with the virtues of your public service and build an interface that highlights their alignment to those virtues. Oracle Cloud Services will allow you to easily and inexpensively build the interfaces to do exactly that.

1Access Oracle Enterprise Repository for Oracle Fusion Applications to find detailed information about integration assets, such as web services.

Shawnee State University Chooses Complete Oracle Cloud Suite to Improve Recruitment and Operations

Oracle Press Releases - Thu, 2016-11-10 07:00
Press Release
Shawnee State University Chooses Complete Oracle Cloud Suite to Improve Recruitment and Operations First state university in Ohio selects Oracle Cloud solutions to improve retention and streamline business practices

Redwood Shores, Calif.—Nov 10, 2016

Oracle announced today that Shawnee State University is implementing the full Oracle Cloud Suite, including Oracle’s Enterprise Resource Planning (ERP), Enterprise Performance Management (EPM), and Human Capital Management (HCM) Cloud, in addition to Student Cloud CX. The university also anticipates moving to the Student Management Cloud as part of its efforts to prepare students for future success using the latest technology. This adoption follows a recent mandate from the Ohio Governor to lower the cost of education, which prompted the university to search for the best technology to modernize and optimize internal operations.

With 3,800 students and 700 employees, Shawnee State seeks to be a leader in paving the way for universities of all sizes to transition to cloud services. Its IT department was previously too small to handle its constituents’ needs. Through the implementation of Oracle’s Cloud solution, however, the university will be utilizing a single cloud solution that helps manage recruitment processes to seek out best-fit students; this also enables it to focus on serving its constituents in a timelier manner.

With its previous on-premises system, ensuring that recruiters reached the right students, and that those prospects ultimately chose to attend the university, was a big problem that needed to be solved. Shawnee State was seeking a service that focused on the entire student lifecycle, from student success to finances – all while providing a single cloud experience from a global perspective.

“We believe that implementing Oracle’s Cloud solutions will help us meet our priority of being a student-focused university, by leveraging the latest technology all under a single unified cloud experience,” said Elinda Boyles, vice president for finance and administration at Shawnee State University. “We are fortunate that Shawnee State University President, Rick Kurtz, has forward-thinking vision and commitment to the university’s long-term success. This effort will improve operational efficiencies and better align our IT resources,” she said.

Because Oracle seeks to ensure its customers have a seamless transition to the cloud, Shawnee State was able to easily integrate the modern technology with legacy systems. Shawnee State is live with Oracle’s Student Cloud CX and is implementing the remainder of the suite now, with an expected go-live of June 30, 2017.

“Oracle’s commitment to Higher Education and proven expertise in the Cloud provide Shawnee State University with a modern, secure and scalable platform for growth,” said Rondy Ng, Oracle’s Senior Vice President, Applications Development. “Our next-generation cloud platform for Higher Education incorporates embedded analytics, an intuitive user experience, and built-in collaboration functionality to deliver comprehensive support for the business and student lifecycle with a focus on putting the student first.”

Oracle delivers the industry’s broadest suite of enterprise-grade Cloud services, including Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and Data as a Service (DaaS).

Contact Info
Jennifer Yamamoto
Oracle
+1.916.761.9555
jennifer.yamamoto@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

About Shawnee State University

Shawnee State is a student-focused public university offering a highly personalized, affordable, and accessible education dedicated to the exploration of emerging technologies and emerging ideas. Founded in 1986, and located along the Ohio River in historic Portsmouth, Ohio, the university offers a range of signature programs, including Game Design, which has been nationally ranked for six years in a row.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Jennifer Yamamoto

  • +1.916.761.9555

Trigger not Dropped from Recyclebin

Tom Kyte - Thu, 2016-11-10 06:06
Hi, On one of our 11g databases we are not able to purge an object from the recyclebin: sys@DEVDB11:SQL> select count(*) from dba_recyclebin; COUNT(*) -------------- 1320 sys@DEVDB11:SQL> purge dba_recyclebin; DBA Rec...
Categories: DBA Blogs

Using with clause and double function in where clause causes non joined rows to be processed

Tom Kyte - Thu, 2016-11-10 06:06
Hi, I seem to have stumbled upon Oracle behaviour I can not explain, so I've come to the magical place called Ask Tom for guidance. I have created the following example to make it as short as possible while still reproducing my problem: 2 tabl...
Categories: DBA Blogs

Partition an Existing Table

Tom Kyte - Thu, 2016-11-10 06:06
I am trying to partition an existing table (range partition) by a date column (BEGTIME). Currently the table is 7.5 GB in size. Our business moves monthly and we wanted to partition into months. Also, the table holds data pertaining to forecasts up to Dec 21...
Categories: DBA Blogs

Fast Generation of CSV and JSON from Oracle Database

Christopher Jones - Thu, 2016-11-10 01:38

Want a fast way to spool CSV or JSON from Oracle Database? Check out the new SQL*Plus 12.2 SET MARKUP CSV option and the new Oracle Database 12.2 JSON_OBJECT operator.

You can try Oracle Database 12.2 now using Oracle Cloud Database Service.

Fast, Easy CSV with SQL*Plus and Oracle Database

First, let's see CSV ("comma separated values") output in SQL*Plus 12.2. Start with this script, t.sql:

  set feedback off

  select department_id, department_name from departments where department_id < 110;
  exit

Executed traditionally you get formatted output:

  SQL> @t.sql

  DEPARTMENT_ID DEPARTMENT_NAME
  ------------- ------------------------------
	     10 Administration
	     20 Marketing
	     30 Purchasing
	     40 Human Resources
	     50 Shipping
	     60 IT
	     70 Public Relations
	     80 Sales
	     90 Executive
	    100 Finance

Running it with the new CSV mode:

  SQL> set markup csv on
  SQL> @t.sql

  "DEPARTMENT_ID","DEPARTMENT_NAME"
  10,"Administration"
  20,"Marketing"
  30,"Purchasing"
  40,"Human Resources"
  50,"Shipping"
  60,"IT"
  70,"Public Relations"
  80,"Sales"
  90,"Executive"
  100,"Finance"

Simple!

The full CSV syntax is:

  SET MARKUP CSV {ON|OFF} [DELIMI[TER] character] [QUOTE {ON|OFF}]

You can see the delimiter can be changed from a comma, and quoting of fields can be disabled.
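
For example, to produce pipe-delimited output without quotes (a variation on the t.sql run above; output abbreviated):

  SQL> set markup csv on delimiter | quote off
  SQL> @t.sql

  DEPARTMENT_ID|DEPARTMENT_NAME
  10|Administration
  20|Marketing
  ...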

The SET MARKUP option can also be enabled from the command line with the -m option:

  $ sqlplus -s -m 'csv on' cj@pdb1 @t.sql

  "DEPARTMENT_ID","DEPARTMENT_NAME"
  10,"Administration"
  20,"Marketing"
  30,"Purchasing"
  40,"Human Resources"
  50,"Shipping"
  60,"IT"
  70,"Public Relations"
  80,"Sales"
  90,"Executive"
  100,"Finance"

(Pro tip: the -s silent option may hide the password prompt on some platforms, making SQL*Plus appear to hang! Enter the password and all will be well.)

CSV mode bypasses the traditional SQL*Plus pagination routines, making output faster to generate. Using the -m 'csv on' option additionally invokes a faster I/O subsystem, and also automatically sets some SET options to the values of another new option, sqlplus -f. Two of the changes are increasing SET ARRAYSIZE to 100 and SET ROWPREFETCH to 2.

Increasing SET ARRAYSIZE allows larger amounts of data to be returned in each low-level request to the database, thus improving overall efficiency. Setting SET ROWPREFETCH to 2 reduces the database 'round trips' required between SQL*Plus and the database for queries that return only single rows. You should adjust the values of these, and other, settings to suit your data size and performance goals.
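
If you prefer not to use the command-line option, the equivalent settings can be applied explicitly in a script (a sketch; the values shown are just the -f defaults mentioned above, so tune them for your workload):

  set markup csv on
  set arraysize 100
  set rowprefetch 2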

CSV mode can be used when connecting SQL*Plus 12.2 to Oracle Database 12.2 or earlier.

Fast, Easy JSON with SQL*Plus and Oracle Database

The Oracle Database 12.2 JSON_OBJECT function is a great way to convert relational table output into JSON.

Combined with SQL*Plus 12.2's efficient CSV output - and with quoting of columns disabled - you can spool JSON very easily.

Here's a SQL*Plus script t2.sql to return JSON output:

  set heading off
  set feedback off

  select json_object ('deptId' is d.department_id, 'name' is d.department_name) department 
  from departments d 
  where department_id < 110;

  exit

Running it generates the desired JSON output:

  $ sqlplus -s -m 'csv on quote off' cj@pdb1 @t2.sql
  {"deptId":10,"name":"Administration"}
  {"deptId":20,"name":"Marketing"}
  {"deptId":30,"name":"Purchasing"}
  {"deptId":40,"name":"Human Resources"}
  {"deptId":50,"name":"Shipping"}
  {"deptId":60,"name":"IT"}
  {"deptId":70,"name":"Public Relations"}
  {"deptId":80,"name":"Sales"}
  {"deptId":90,"name":"Executive"}
  {"deptId":100,"name":"Finance"}

Summary

SQL*Plus 12.2 and Oracle Database 12.2 allow efficient and easy access to CSV and JSON data. If you don't yet have 12.2, or you want to create web services and do more with the data, check out Oracle ORDS or roll your own solution using the node-oracledb driver for Node.js.

Documentum story – Disable PDF automatic generation but allow jpeg

Yann Neuhaus - Thu, 2016-11-10 00:00

We had a request from a customer who wanted the automatic pdf rendition to be disabled while still allowing the jpeg renditions to be used as thumbnails for D2. The pdf is generated by an external tool, so the automatic rendition wasn’t needed here. Here is what we did:

Login to the ADTS server and edit the following file:
<CTS_HOME>\config\CTSServerService.xml

For the repository named “DOCBASE1”, comment out the QueueProcessorContext tag as follows, and only if it includes the dm_autorender_win31 value:

<!-- Start of comment to be added
<QueueProcessorContext DocbaseName="DOCBASE1">
    <CTSServer AttributeName="queueItemName" AttributeValue="dm_autorender_win31"/>
    ...
</QueueProcessorContext>
       End of comment to be added -->

Then restart the services.

After the restart of the Content Transformation Services, wait for a few minutes and open the CTS log file <CTS_HOME>\logs\CTS_log.txt.

On the CTS log file:
– Ensure that there are no important errors and/or exceptions
– Search for the message “Product ADTS version <CTS_VERSION> is installed on repository: DOCBASE1”

Open the file <CTS_HOME>\logs\Polling_log.txt in order to verify that the changes have been applied successfully.

In this file, find all “DOCBASE1”-related CTSLegacyQueueProcessor log entries by searching for the following text:

DOCBASE1 CTSLegacyQueueProcessor

There should not be any matching text in the log file.

Note that you can use the jpeg renditions in the D2 4.5 Preview widget in order to display a kind of preview directly in the D2 interface.

Cet article Documentum story – Disable PDF automatic generation but allow jpeg est apparu en premier sur Blog dbi services.

Yes, Storage Arrays Can Deduplicate Oracle Database. Here Is Exactly Why It Doesn’t Matter!

Kevin Closson - Wed, 2016-11-09 23:14

I recently had some cycles on a freshly installed Dell EMC XtremIO Storage Array. I took this opportunity to prepare a blog entry about the never-ending topic of whether or not storage arrays are able to reduce physical data capacity through deduplication of blocks in Oracle Database.

Of Course There Is Duplicate Data In Oracle Datafiles

Before I continue, let me say something that may come as a surprise to you. Yes, Oracle Database has duplicate blocks in tablespaces! Yes, modern storage arrays can achieve astonishing data reduction rates through deduplication–even when the only data in the array is Oracle Database (whether ASM or file systems)!

XtremIO computes and displays a global data reduction rate. This makes it a bit more difficult to show the effect of deduplication on Oracle Database, because averages across diverse data make pin-point focus impossible. However, as I was saying, I took some time on a freshly-installed XtremIO array and collected what I hope will be interesting information on the topic of deduplication.

Please take a look at Figure 1. To start the testing I created a 4TB XtremIO volume, attached it as a LUN to a test host and then created an XFS file system on it. Please be aware that the contents of an Oracle datafile are precisely the same whether stored in ASM or in a file system file. After the file system was created I used the SLOB database creation kit (SLOB/misc/create_database_kit) to create a small database with Oracle Database 12c. As Figure 1 shows, the small database consumed 11.83GB of logical space in the 4TB volume. However, since the data enjoyed a slight deduplication ratio of 1.1:1 and a healthy compression ratio of 3.3:1, for a 3.6:1 data reduction ratio, only 3.27GB of physical space was consumed in the array.

[Screenshot: XtremIO GUI Dashboard after the SLOB database was created]

Figure 1

The next step in the testing was to consume the majority of the 4TB file system with a BIGFILE tablespace. Figure 2 shows the DDL I used to create the tablespace.

[Screenshot: shell, DDL creating the 3900G tablespace]

Figure 2

Figure 3 shows the file system file that corresponds to the tablespace created with DDL in Figure 2.

[Screenshot: SQL*Plus BIGFILE tablespace info and host ls of the datafile]

Figure 3

After creating the 3.9TB BIGFILE tablespace I took a screenshot of the XtremIO GUI Dashboard. As Figure 4 shows, there was no deduplication! Instead, the data was compressed 4.0:1, resulting in only 977.66GB physical space being consumed in the array. So why in the world would I blog the opposite of what I said above? Why show the array did not, in fact, deduplicate the 3.9TB datafile? The answer is in the fact that I said there are duplicate data blocks in tablespaces. I didn’t say there are duplicate blocks in the same datafile!

[Screenshot: XtremIO GUI Dashboard after BIGFILE tablespace creation]

Figure 4

To return the array to the state prior to the BIGFILE tablespace creation, I dropped the tablespace (including contents and datafiles thus unlinking the file) and then used the Linux fstrim(8) command to return the space to the array as shown in Figure 5.

[Screenshot: shell, fstrim after dropping the BIGFILE tablespace]

Figure 5

Once the fstrim command completed I took another screenshot of the XtremIO GUI Dashboard as shown in Figure 6. Figure 6 shows that the array space utilization and data reduction had returned to that of what was seen before the BIGFILE tablespace creation.

[Screenshot: XtremIO GUI Dashboard after the BIGFILE drop and fstrim]

Figure 6

OK, Now For The Duplicate Data

The next step in the testing was to fill up the majority of the 4TB file system with SMALLFILE tablespaces. To do so I created 121 tablespaces each consisting of a single SMALLFILE datafile of 32GB. The output shown in Figure 7 is from a data dictionary query to display the size of each of the 121 datafiles and how the sum of these datafiles consumed 3.87TB of the 4TB file system.

[Screenshot: shell, after creating the SMALLFILE tablespaces]

Figure 7

That’s Duplicate Data

Once the file system was filled with SMALLFILE datafiles I took another screenshot of the XtremIO GUI Dashboard. Figure 8 shows that the SMALLFILE datafiles enjoyed a deduplication ratio of 81.8:1 combined with a compression ratio of 3.8:1, resulting in a global data reduction rate of 306.9:1. Because of the significant data reduction rate, only 12.68GB of physical space was consumed in the array in spite of the 3.79TB of logical space (the sum of the SMALLFILE datafiles) being allocated.

[Screenshot: XtremIO GUI Dashboard after SMALLFILE tablespace creation]

Figure 8

So here we have it! I had a database created with Oracle Database 12c that consisted of 121 32GB files for roughly 3.8TB database size yet XtremIO deduplicated the data down by a factor of 82:1!

So arrays can deduplicate Oracle Database contents! Right? Well, yes, but it matters none whatsoever. Allow me to explain.

Oracle datafiles consist of initialized blocks, but vast portions of that initialized content are the same from file to file. This fact can be seen with simple md5sum(1) output. Consider Figure 9, where you can see the output of the md5sum command used to compute Oracle datafile checksums, but only after skipping the first 8,692 blocks (8KB blocks). It’s the first approximately 68MB of each datafile that is unique when a datafile is freshly initialized. Beyond that threshold we can see (Figure 9) that the rest of the file content is identical.

[Screenshot: shell, md5sum checksums of the datafiles]

Figure 9
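
Since the screenshot is not reproduced here, a sketch of the kind of check Figure 9 shows (the datafile paths and names are hypothetical):

$ for f in /u01/oradata/SLOB/IOPS_00*.dbf
> do
>   # skip the first 8,692 8KB blocks, then checksum the remainder
>   dd if=$f bs=8192 skip=8692 2>/dev/null | md5sum
> done

Identical checksums across the files confirm that everything past the first ~68MB of each freshly initialized datafile is duplicate content.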

Thus far this blog post has proven that initialized, but empty, Oracle Database datafiles have duplicate data. As the title of this post says, however, it does not matter.

Introduce Application Data To The Mix

Figure 10 shows the commands I used to populate each of the 121 tablespaces with a single table. The table has the sparse characteristic we are all accustomed to with SLOB. That is, I am only creating a single row in each block. Moreover, I’m populating each of these 121 tables with the same application data! This is precisely why I say deduplication of Oracle Database doesn’t matter because it only holds true until any application data is loaded into the data blocks. Figure 10 shows this set of DDL commands.

[Screenshot: shell, tablespace-loading script]

Figure 10

After populating the blocks in each of the 121 tables (each residing in a dedicated SMALLFILE tablespace) with blocks containing just a single row of application data I took another screenshot of the XtremIO GUI Dashboard. Figure 11 shows how putting any data into the data blocks reverts the deduplication. Why? Well, remember that the block header of every block has the SCN of the last change made to the block. For this reason I can put the same application data in blocks and still have 100% unique blocks–at least at the 8KB level.

Please note that the application table I used to populate the 121 tables does not consume 100% of the data blocks in each of the SMALLFILE tablespaces. There were a few blocks remaining in each tablespace and thus there remained a scant amount of deduplication as seen in Figure 11. Most XtremIO customers see some insignificant deduplication in their Oracle Database environments. Some even see significant deduplication–at least until they insert data into the database.

[Screenshot: XtremIO GUI Dashboard after filling the SMALLFILE tablespaces]

Figure 11

In a follow-up post I’ll say a few words about the deduplication granularity and how it affects the ability to achieve small amounts of deduplication of unused space in initialized data blocks. However, bear in mind that the net result of any deduplication of Oracle Database data files is that the only space that can be deduplicated is space that has never had application data in it. After all, a SQL DELETE command doesn’t remove data–it only marks it as free in the block.

Summary

I don’t think there are that many Oracle shops that have an urgent need for data reduction of space that’s never been used to store application data. I could be wrong. Until I find out either way, I say that yes you can see deduplication of Oracle Database datafiles but it doesn’t matter one bit.

Filed under: All Flash Array, oracle, XtremIO

Query regarding primary key , foreign key relations and inserts

Tom Kyte - Wed, 2016-11-09 11:46
Hi Tom, I have a two tables Table1 (id_pk number, name_fk varchar(10)) Table2 (name_pk varchar(10), id_fk number) id_pk : Primary key of table1 name_fk : foreign key referencing name_pk of Table2 name_pk : Primary key of table2 id_fk : Fo...
Categories: DBA Blogs

Complicated query to analyse utilization period

Tom Kyte - Wed, 2016-11-09 11:46
Dear Gents I have a query which is giving the available quantity in each location on specific dates: Location Tdate RunningQty CA02D003A 31-Jul-16 1152 CA02D003A 30-Sep-16 1092 CA02D003A 11-Oct-16 500 CA02D003A 13-Oct-16 0 CA02D003A 20-Oct...
Categories: DBA Blogs

Update one column in 58 millions records table

Tom Kyte - Wed, 2016-11-09 11:46
Hi Tom, Need suggestions to improve performance. Following are the methods we tried 1. Using <b>merge</b> <code>MERGE INTO /*+ PARALLEL(tbl_temp,8) */ tbl_temp tcm USING (SELECT frn.customer_id, frn.risk FROM temp_new frn ) a ON (a.id = tc...
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator