Feed aggregator

The AppsLab’s Latest Inter-State Adventure: A Site Visit to Micros

Oracle AppsLab - Mon, 2016-07-18 16:17

Probably the best way to get to know your users is to watch them work, in their typical environment. That, and getting to talk to them right after observing them. It’s from that perspective that you can really see what works, what doesn’t, and what people don’t like. And this is exactly what we want to learn about in our quest to improve our users’ experience using Oracle software.

That said, we’ve been eager to get out and do some site visits, particularly for learning more about supply chain management (SCM). For one, SCM is an area most of us on the team haven’t spent too much time working on. But two, at least for me–working mostly in the abstract, or at least the virtual—there’s something fascinating and satisfying about how physical products and materials move throughout the world, starting as one thing and being manufactured or assembled into something else.

We had a contact at Micros, so we started there. Also, they’re an Oracle company, so that made it much easier. You’ve probably encountered Micros products, even if you haven’t noticed them—Micros does point-of-sale (POS) systems for retail and hospitality, meaning lots of restaurants, stadiums, and hotels.

Micros point-of-sale terminals throughout the years. This is in Micros’s corporate office in Columbia, Maryland.

For this particular adventure, we teamed up with the SCM team within OAUX, and went to Hanover, Maryland, where Micros has its warehouse operations, and where all of its orders are put together and shipped out across the world.

We observed and talked to a variety of people there: the pickers, who grab all the pieces for an order; the shippers, who get the orders ready to ship out and load them on the trucks; receiving, who takes in all the new inventory; QA, who have to make sure incoming parts are OK, as well as items that are returned; and cycle counters, who count inventory on a nightly basis. We also spoke to various managers and people involved in the business end of things.

A view inside the Micros warehouse.

In addition to following along and interviewing different employees, the SCM team ran a focus group, and the AppsLab team ran something like a focus group, but which is called a User Journey Map. With this research method, you have users map out their tasks (using sticky notes, a UX researcher’s best friend), while also including associated thoughts and feelings corresponding to each step of each task. We don’t just want to know what users are doing or have to do, but how they feel about it, and the kinds of questions they may have.

In an age where we’re accustomed to pressing a button and having something we want delivered in two days (or less), it’s helpful on a personal level to see how this sort of thing actually happens, and all the people involved in the background. On a professional level, you see how software plays a role in all of it—keeping it all together, but also imposing limits on what can be done and what can be tracked.

This was my first site visit, though I hope there are plenty more in the future. There’s no substitute for this kind of direct observation, where you can also ask questions. You come back tired, but with lots of notes and lots of new insights.

SQL Server AlwaysOn: new service packs and new diagnostic capabilities

Yann Neuhaus - Mon, 2016-07-18 13:24

As you certainly know, SQL Server 2014 SP2 has been released by Microsoft with some interesting improvements that concern the SQL Server AlwaysOn availability groups feature. In fact, all of these improvements are also included in SQL Server 2012 SP3 and SQL Server 2016. Among all the fixes and improvements that concern AlwaysOn, I would like to focus on those described in Microsoft KB3173156 and KB3112363. In this first blog post, I will just cover the improvement to the lease timeout, which is part of the AlwaysOn health model.

Have you already faced a lease timeout issue? If so, you have certainly noticed that it is a good indicator of a system-wide problem, and figuring out the root cause could be a burdensome task because we lacked diagnostic information and had to correlate different performance metrics ourselves. Fortunately, the new service packs provide enhancements in this area.

Let’s take an example with a 100% CPU utilization scenario that makes the primary replica unresponsive and unable to respond to the cluster isAlive() routine. This is typically a situation where we may face a lease timeout issue. After simulating this scenario on my lab environment, here is what I found in the SQL Server error log from my primary replica (I have voluntarily filtered it to include only the sample we want to focus on).

blog 101 - AG 2014 SP2 - lease timeout issue

Firstly, we may see several new messages related to lease timeout issues in the interval 12:39:54 – 12:43:22. For example, the WSFC did not receive a process event signal from SQL Server within the lease timeout period, or the lease between the AG and the WSFC has expired. Diagnostic messages have been enhanced to give us a better understanding of the lease issue. But at this point we only know we are facing a lease timeout; we don’t know the root cause yet. Improvements have also been extended to the cluster log in order to provide more insight into the system behavior at the moment of the lease timeout, as we may see below:

00000644.00000768::2016/07/15-12:40:06.575 ERR   [RCM] rcm::RcmResource::HandleFailure: (TestGrp)

00000644.00000c84::2016/07/15-12:40:06.768 INFO [GEM] Node 2: Sending 1 messages as a batched GEM message

00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] resource TestGrp: failure count: 0, restartAction: 0 persistentState: 1.

00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] numDependents is zero, auto-returning true

00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] Will queue immediate restart (500 milliseconds) of TestGrp after terminate is complete.

00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] Res TestGrp: ProcessingFailure -> WaitingToTerminate( DelayRestartingResource )

00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] TransitionToState(TestGrp) ProcessingFailure–>[WaitingToTerminate to DelayRestartingResource].

00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] Res TestGrp: [WaitingToTerminate to DelayRestartingResource] -> Terminating( DelayRestartingResource )

00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] TransitionToState(TestGrp) [WaitingToTerminate to DelayRestartingResource]–>[Terminating to DelayRestartingResource].

00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] Lease timeout detected, logging perf counter data collected so far

00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] Date/Time, Processor time(%), Available memory(bytes), Avg disk read(secs), Avg disk write(secs)

00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] 7/15/2016 10:39:24.0, 8.866394, 912523264.000000, 0.000450, 0.000904

00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] 7/15/2016 10:39:34.0, 25.287347, 919531520.000000, 0.001000, 0.000594

00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] 7/15/2016 10:39:44.0, 25.360508, 921534464.000000, 0.000000, 0.001408

00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] 7/15/2016 10:39:55.0, 81.225454, 921903104.000000, 0.000513, 0.000640

00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] 7/15/2016 10:40:5.0, 100.000000, 922415104.000000, 0.002800, 0.002619

00000cc0.00001350::2016/07/15-12:40:12.452 INFO [RES] SQL Server Availability Group: [hadrag] Stopping Health Worker Thread


According to the SQL Server error log time range, we may notice similar messages concerning the detection of the lease timeout, with some additional information that comes from the perfmon counters (the concerned lines are underlined in the sample above). If we reformat the concerned portion into the table below, we get a better picture of our issue:

Date/Time    Processor time (%)   Available memory (bytes)   Avg disk read (secs)   Avg disk write (secs)
10:39:24.0   8.866394             912523264                  0.000450               0.000904
10:39:34.0   25.287347            919531520                  0.001000               0.000594
10:39:44.0   25.360508            921534464                  0.000000               0.001408
10:39:55.0   81.225454            921903104                  0.000513               0.000640
10:40:5.0    100.000000           922415104                  0.002800               0.002619


CPU utilization is what we must focus on here. Getting this valuable information directly in the cluster.log when we troubleshoot a lease timeout issue will help us a lot. Just to clarify, this doesn’t mean it was not possible with older versions, but we had to retrieve the data in a more complicated way (by using the AlwaysOn_health extended event session, for example).
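The reformatting above is easy to script. As a small sketch in Python (with two of the sample lines above hard-coded for illustration), the [hadrag] counter samples can be pulled out of a cluster.log into rows:

```python
import re

# Sample cluster.log lines from above (the column-header line is skipped by the regex).
SAMPLE = """\
00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] Date/Time, Processor time(%), Available memory(bytes), Avg disk read(secs), Avg disk write(secs)
00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] 7/15/2016 10:39:24.0, 8.866394, 912523264.000000, 0.000450, 0.000904
00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] 7/15/2016 10:40:5.0, 100.000000, 922415104.000000, 0.002800, 0.002619
"""

# One row per perf-counter sample: (timestamp, cpu %, free bytes, read secs, write secs)
PATTERN = re.compile(
    r"\[hadrag\] (\d+/\d+/\d+ [\d:.]+), ([\d.]+), ([\d.]+), ([\d.]+), ([\d.]+)"
)

def parse_lease_counters(cluster_log: str):
    rows = []
    for line in cluster_log.splitlines():
        m = PATTERN.search(line)
        if m:
            ts, cpu, mem, rd, wr = m.groups()
            rows.append((ts, float(cpu), float(mem), float(rd), float(wr)))
    return rows

rows = parse_lease_counters(SAMPLE)
for ts, cpu, mem, rd, wr in rows:
    print(f"{ts}  cpu={cpu:.2f}%  free_mem={mem:.0f}")
```

Pointing the same function at a full cluster.log makes the CPU ramp-up before the lease timeout obvious at a glance.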

Next, other improvements concern existing extended events like availability_group_lease_expired and hadr_ag_lease_renewal. The next picture points out new available fields like current_time, new_timeout and state:
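To have these events on hand the next time a lease issue strikes, you can also capture them in a dedicated session. A minimal sketch (the session name and target file name are illustrative choices, not from the service pack documentation):

```sql
-- Capture the improved lease events to a file target for later analysis
CREATE EVENT SESSION [LeaseDiag] ON SERVER
ADD EVENT sqlserver.availability_group_lease_expired,
ADD EVENT sqlserver.hadr_ag_lease_renewal
ADD TARGET package0.event_file (SET filename = N'LeaseDiag.xel');
GO

ALTER EVENT SESSION [LeaseDiag] ON SERVER STATE = START;
```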

blog 101 - AG 2014 SP2 - lease time out xe new fields

Let me show their usefulness with another example. This time, I voluntarily hung my sqlserver.exe process related to the primary replica in order to trigger an unresponsive lease scenario. I got interesting outputs from the extended event trace on both sides.

blog 101 - AG 2014 SP2 - lease time out xe test 2

From the former primary, there are no related records during the period of SQL Server process unresponsiveness, but we may see a record at 17:19:11. The lease renewal process fails, and we get a better picture of the problem by looking at the corresponding state (LeaseNotValid) followed by the availability_group_lease_expired event. Note that the current_time (the time at which the lease expired) value is greater than the new_timeout (the timeout time, when availability_group_lease_expired is raised) value here – 3215765 > 3064484 – which confirms that we experienced a timeout issue in this case.

On the new primary, we may notice the start of the lease worker thread, but until the concerned replica stabilizes in the PRIMARY ONLINE state, it voluntarily postpones the lease check process (materialized by the StartedExcessLeaseSleep / ExcessSleepSucceeded state values).

In the next blog I will talk about improvements in the detection of the availability group replication latency.

Stay tuned!



The post SQL Server AlwaysOn: new service packs and new diagnostic capabilities appeared first on Blog dbi services.

What is the instance name?

Laurent Schneider - Mon, 2016-07-18 09:48

If your Oracle SID doesn’t match your instance name in init.ora, this is quite confusing.

Check my previous post, what is sid in oracle

The instance_name column of the view v$instance, as well as the USERENV context, matches the ORACLE_SID of the underlying operating system.

SQL> var ORACLE_SID varchar2(9)
SQL> set autoprint on
SQL> exec dbms_system.get_env('ORACLE_SID',:ORACLE_SID)
PL/SQL procedure successfully completed.
SQL> select sys_context('USERENV','INSTANCE_NAME') from dual;
SQL> select instance_name from v$instance;

This is not the same as the init.ora parameter

SQL> select name, value, description from v$parameter where name='instance_name';

NAME          VALUE     DESCRIPTION
------------- --------- ----------------------------------------
instance_name INS001    instance name supported by the instance

The instance_name doesn’t have to match anything. It’s of relevance if you use ADR. And you probably do. Background dump dest and family are deprecated now. In your ADR docu you’ll read


But this SID is actually your init.ora instance name. And not your ORACLE_SID.

Importing windows scheduled tasks into a Powershell object before 5.0

Matt Penny - Mon, 2016-07-18 07:43

You don’t need this on Powershell 5.0 and upwards because there’s a built-in cmdlet, but for previous versions:

convertfrom-csv $(schtasks /Query /S server1 /TN "run somesstuff" /V /FO CSV)

HostName : server1
TaskName : \run somesstuff
Next Run Time : N/A
Status : Ready
Logon Mode : Interactive only
Last Run Time : 13/07/2016 10:05:43
Last Result : 0
Author : matt
Task To Run : C:\powershell\Modules\somesstuff-PCs\run-somesstuff.bat
Start In : N/A
Comment : Scheduled job which does some stuff
Scheduled Task State :
Idle Time :
Power Management :
Run As User :
Delete Task If Not Rescheduled :
Stop Task If Runs X Hours and X Mins :
Schedule :
Schedule Type :
Start Time :
Start Date :
End Date :
Days :
Months :
Repeat: Every :
Repeat: Until: Time :
Repeat: Until: Duration :
Repeat: Stop If Still Running :

HostName : More detail at http://ourwebsite
TaskName : Enabled
Next Run Time : Disabled
Status : Stop On Battery Mode, No Start On Batteries
Logon Mode : matt
Last Run Time : Enabled
Last Result : 72:00:00
Author : Scheduling data is not available in this format.
Task To Run : One Time Only
Start In : 10:20:21
Comment : 25/05/2016
Scheduled Task State : N/A
Idle Time : N/A
Power Management : N/A
Run As User : Disabled
Delete Task If Not Rescheduled : Disabled
Stop Task If Runs X Hours and X Mins : Disabled
Schedule : Disabled
Schedule Type :
Start Time :
Start Date :
End Date :
Days :
Months :
Repeat: Every :
Repeat: Until: Time :
Repeat: Until: Duration :
Repeat: Stop If Still Running :

This is outputting from schtasks in csv format, then importing that into a PowerShell object.
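For comparison, the same reshape (CSV text in, structured records out) can be sketched outside PowerShell too. A minimal Python version, with a hypothetical, abbreviated sample of the `schtasks /Query /V /FO CSV` output (column names assumed from the listing above):

```python
import csv
import io

# Abbreviated, hypothetical sample of `schtasks /Query /V /FO CSV` output.
sample = '''"HostName","TaskName","Status","Last Result"
"server1","\\run somesstuff","Ready","0"
'''

# csv.DictReader plays the role of ConvertFrom-Csv: header row -> field names.
tasks = list(csv.DictReader(io.StringIO(sample)))
ready = [t["TaskName"] for t in tasks if t["Status"] == "Ready"]
print(ready)
```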

Categories: DBA Blogs

How Can You Create A Column With AUTO_INCREMENT in Oracle SQL?

Complete IT Professional - Mon, 2016-07-18 06:00
If you’ve used MySQL, you’re probably familiar with the AUTO_INCREMENT feature. But it’s not available in Oracle. Read this article to find out how you can auto increment a column in Oracle SQL. What Is Auto Increment? An auto increment column, or an identity column in other databases, is a column that has its value […]
Categories: Development

PeopleSoft Security User Authorization Audits

When performing a PeopleSoft security audit, reviewing the rights and privileges individual users have been granted (authorization) for system and application security is one of the key deliverables. The following are several of the topics that Integrigy investigates during our PeopleSoft security configuration assessments – take a look today at your settings:

Review users with access to

  • PeopleTools
  • The SQR folder
  • Process scheduler
  • Security and other sensitive administration menus
  • Security and other sensitive administration roles
  • Web profiles
  • PeopleSoft Administrator Role
  • Correction mode

To check access to PeopleTools, use the following. If you need assistance with the other topics, let us know –

-- Access to PeopleTools



If you have questions, please contact us at info@integrigy.com.

Michael A. Miller, CISSP-ISSMP, CCSP


PeopleSoft Database Security

PeopleSoft Security Quick Reference

Auditing, Oracle PeopleSoft
Categories: APPS Blogs, Security Blogs

Connecting Oracle Data Visualization Desktop to OBIEE

Rittman Mead Consulting - Mon, 2016-07-18 04:00
Connecting Oracle Data Visualization Desktop to OBIEE

Recently at Rittman Mead we have been asked a lot of questions surrounding Oracle’s new Data Visualization Desktop tool and how it integrates with OBIEE. Rather than referring people to the Oracle docs on DVD, I decided to share with you my experience connecting to an OBIEE 12c instance and take you through some of the things I learned through the process.

In a previous blog, I went through database connections with Data Visualization Desktop and how to create reports using data pulled directly from the database. Connecting DVD to OBIEE is largely the same process, but it allows the user to pull in data at the pre-existing report level. I decided to use our 12c ChitChat demo server as the OBIEE source and created some sample reports in Answers to test out with DVD.

From the DVD Data Sources page, clicking "Create New Data Source" brings up a selection pane with the option to select “From Oracle Applications.”

Connecting Oracle Data Visualization Desktop to OBIEE

Clicking this option brings up a connection screen with options to enter a connection name, URL (the location of the reports you want to pull in as a source), username, and password. This seems like a pretty straightforward process. The Oracle docs on connectivity to OBIEE with DVD say to navigate to the web catalog, select the folder containing the analysis you want to use as a source, and then copy and paste the URL from your browser into the URL connection in DVD. However, using this method will cause the connection to fail.

Connecting Oracle Data Visualization Desktop to OBIEE

To get Data Visualization Desktop to connect properly, you have to use the URL that you would normally use to log into OBIEE analytics with the proper username and password.

Connecting Oracle Data Visualization Desktop to OBIEE

Once connected, the web catalog folders are displayed.

Connecting Oracle Data Visualization Desktop to OBIEE

From here, you can navigate to the analyses you want to use for data sources.

Connecting Oracle Data Visualization Desktop to OBIEE

Selecting the analysis you want to use as your data source is the same process as selecting schemas and tables from a database source. Once the selection is made, a new screen is displayed with all of the tables and columns that were used for the analysis within OBIEE.

Connecting Oracle Data Visualization Desktop to OBIEE

From here you can specify each column as an attribute or measure column and change the aggregation for your measures to something other than what was imported with the analysis.

Clicking "Add to Project" loads all the data into DVD under Data Elements and is displayed on the right hand side just like subject area contents in OBIEE.

Connecting Oracle Data Visualization Desktop to OBIEE

The objective of pulling data in from existing analyses is described by Oracle as revisualization. Keep in mind that Data Visualization Desktop is meant to be a discovery tool and not so much a day-to-day report generator.

The original report was a pivot table with Revenue and Order information for geographical, product and time series dimensions. Let’s say that I just wanted to look at the revenue for all cities located in the Americas by a specific brand for the year 2012.

Dragging in the appropriate columns and adding filters took seconds and the data loaded almost instantaneously. I changed the view to horizontal bar and added a desc sort to Revenue and this was my result:

Connecting Oracle Data Visualization Desktop to OBIEE

Notice how the revenue for San Francisco is much higher than that of any of the other cities. Let’s say I want to get a closer look at all the other cities without seeing the revenue data for San Francisco. I could create a new filter for City and exclude San Francisco from the list, or I could just create a filter range for Revenue. Choosing the latter gave me the option of moving a slider to change my revenue value distribution and showed me the results in real time. Pretty cool, right?

Connecting Oracle Data Visualization Desktop to OBIEE

Connecting Oracle Data Visualization Desktop to OBIEE

Taking one report and loading it in can open up a wide range of data discovery opportunities, but what if there are multiple reports I want to pull data from? You can do this and combine the data together in DVD, as long as the two reports contain columns on which to join them.

Going back to my OBIEE connection, there are two reports I created on the demo server that both contain customer data.

Connecting Oracle Data Visualization Desktop to OBIEE

By pulling in both the Customer Information and Number of Customer Orders Per Year report, Data Visualization Desktop creates two separate data sources which show up under Data Elements.

Connecting Oracle Data Visualization Desktop to OBIEE

Inspecting one of the data sources shows the match between the two is made on both Customer Number and Customer Name columns.

Connecting Oracle Data Visualization Desktop to OBIEE

Note: It is possible to make your own column matches manually using the Add Another Match feature.

By using two data sets from two different reports, you can blend the data together to discover trends, show outliers and view the data together without touching the database or having to create new reports within OBIEE.

Connecting Oracle Data Visualization Desktop to OBIEE

The ability to connect directly to OBIEE with Data Visualization Desktop and pull in data from individual analyses is a very powerful feature that makes DVD that much greater. Combining data from multiple analyses and blending them together internally creates some exciting data discovery possibilities for users with existing OBIEE implementations.

Categories: BI & Warehousing

Document No. Repeat

Tom Kyte - Mon, 2016-07-18 01:26
Categories: DBA Blogs

Hash partitioning and number of partition

Tom Kyte - Mon, 2016-07-18 01:26
Hi Tom, I have a question regarding the choice of the number of partitions when using hash partitioning. I have the following table used to store documents <code>CREATE TABLE "DOCUMENT" ( "DOCUMENT_ID" NUMBER(38,0), "CREATED_BY" VARCHA...
Categories: DBA Blogs

Explanation of Hints

Tom Kyte - Mon, 2016-07-18 01:26
Hi Tom, 1)first up all great thanks to you for spending time for me explain about following hints Parallel append index no_index first_rows and all_rows with an example in which situation we may use what hint in select the data from table? ...
Categories: DBA Blogs

Member procedures & Member function in plsql i.e member sub programs in oracle(plsql)

Tom Kyte - Mon, 2016-07-18 01:26
Hi Tom, 1)why we are using,what is purpose of Member procedures & member functions in plsql subprograms explain an example? 2)Difference between stored subprograms and member subprograms?
Categories: DBA Blogs

How to set all elements of collections

Tom Kyte - Mon, 2016-07-18 01:26
I've three questions 1) Suppose I have a collection of numbers say 300 values & i want to initialise with number 72 what would be the best way to do so ,rather than writing 300 times 2) I've to generate reports in pdf so what would you sugg...
Categories: DBA Blogs

Will open cursor hold up more tablespace when it is not closed?

Tom Kyte - Mon, 2016-07-18 01:26
Hi Tom, I am using oracle 11g and the tablespaces keeps growing, I have recently identified a issue with open cursors which is not closed when the session was closed. Will open cursor eats up all the spaces which leads to consume more temp spa...
Categories: DBA Blogs

ORA-01578: ORACLE data block corrupted (file # , block # )

Tom Kyte - Mon, 2016-07-18 01:26
Hi Tom, I m getting following error in my production data base alert log since last 5 month. Errors in file /u01/app/oracle/diag/rdbms/prod/prod/trace/prod_j000_26775.trc: ORA-01578: ORACLE data block corrupted (file # , block # ) ORA-01578: O...
Categories: DBA Blogs

ORA - 08103:Object No Longer Exists

Tom Kyte - Mon, 2016-07-18 01:26
Hi, I am getting an error Ora-08103: Object No longer exists when select queries are fired against a partitioned table having local Bit map indexes. But when the same process/job (which fires select query) is restarted the issue does not come up a...
Categories: DBA Blogs

Oracle Joins the White House’s Advanced Wireless Research Initiative

Oracle Press Releases - Sun, 2016-07-17 23:40
Press Release
Oracle Joins the White House’s Advanced Wireless Research Initiative Bringing the Power of the Cloud to 5G and Beyond

Redwood Shores, Calif.—Jul 18, 2016

Oracle is proud to provide Oracle Cloud technology and engineering resources to the White House Office of Science and Technology Policy’s Platforms for Advanced Wireless Research (PAWR) program. The program is led by the National Science Foundation, the nonprofit organization US Ignite, and a consortium of industry and academic leaders collaborating to better understand the unique challenges and opportunities created by next-generation platforms for networking.

Oracle Communications will provide core network control, analytics and network orchestration technology to researchers and help them understand the impact of subscriber behavior, enhance orchestration, and bolster security. Oracle’s contributions to this groundbreaking initiative will aid advancements in wireless technology by:

  • Discovering how applied analytics can help minimize negative impacts on orchestration, and improve overall network and service performance;
  • Monitoring and measuring networks in the new environment to provide optimal performance and reliability;
  • Analyzing capacity in a virtual network, making resources available (such as hardware and licenses) when needed;
  • Identifying new formulas and metrics to engineer and secure cloud-based telecom networks;
  • Determining what impact subscriber behaviors and events have on network/service orchestration.

The research and development from Oracle will assist in understanding how to protect against network abuse through legitimate network connections, ensuring even ‘trusted’ networks cannot abuse their access. This will include analyzing the impact from other networks through misconfigurations or malformed packets. Additionally, our contributions will help set up standards, procedures and principles for the telecommunications cloud.

“We see an opportunity to bring the power and flexibility of the cloud to telecommunications,” said Doug Suriano, senior vice president and general manager, Oracle Communications. “As a means to better understand the impact of subscriber behaviors to traffic engineering, how analytics can enhance orchestration at the network and service levels, and how to bolster security of the control plane to protect against malicious behavior.”

Oracle is proud to join as a founding board member of PAWR, an organization steering the research agenda and policy issues for US Ignite, responsible for the design, scope, and research goals for its members.

Contact Info
Katie Barron
+1 202.904.1138
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Katie Barron

  • +1 202.904.1138

Blast from the Past: Gesture-Controlled Robot Arm

Oracle AppsLab - Sun, 2016-07-17 18:31

Hard to believe it’s been nearly three years since we debuted the Leap Motion-controlled robot arm. Since then, it’s been a mainstay demo for us, combining a bit of fun with the still-emergent interaction mechanism, gesture.

Anthony (@anthonyslai) remains the master of the robot arm, and since we lost access to the original video, Noel (@noelportugal) shot a new one in the Gadget Lab at HQ where the robot arm continues to entertain visitors.

Interesting note, Amazon showed a very similar demo when they debuted AWS IoT. We nerds love robots.

We continue to investigate gesture as an interaction; in addition to our work with the Leap Motion as a robot arm controller and as a feature in the Smart Office, we’ve also used the Myo armband to drive Anki race cars, a project Thalmic Labs featured on their developer blog.

Gesture remains a Wild West, with no standards and different implementations, but we think there’s something to it. And we’ll keep investigating and having some fun while we do.

Stay tuned.

Workaround for ADF BC REST Custom Method Configuration

Andrejus Baranovski - Sun, 2016-07-17 13:34
This post is based on JDEV; it seems there is an issue with ADF BC REST custom method definition in this release. I’m glad it is not a runtime issue, but rather incorrect design-time behavior in the JDEV wizard. I will explain how to bypass it when you want to expose a custom REST method in

The sample application (ADFBCRestApp_v8.zip) implements a custom method exposed through ADF BC REST – calculateEmployees. This method is created in the VO implementation class and accepts two parameters – firstName and lastName. The method works correctly; I can execute it through POST by passing a predefined payload with the method name and parameters (read more in the developer guide – 22.12.5 Executing a Custom Action):

Make sure not to forget to specify the Content-Type header, otherwise the POST request to ADF BC REST will fail:
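For reference, a custom-action POST generally takes roughly this shape; the resource path and parameter values below are illustrative assumptions (not copied from the sample app), and the action media type is the one described in the ADF BC REST developer guide:

```
POST /rest/v1/Employees HTTP/1.1
Content-Type: application/vnd.oracle.adf.action+json

{
  "name": "calculateEmployees",
  "parameters": [
    { "firstName": "Steven" },
    { "lastName": "King" }
  ]
}
```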

Let's see the custom method implementation and where the workaround is required. The custom method uses a View Criteria to filter the VO and return the estimated row count. All fine here:

The method should be exposed through the VO client interface:

We should generate the custom method binding registry in the REST resource custom methods section (client interface). In JDEV 12.2.1 this works by clicking the Enable checkbox, but in JDEV the same throws an error (you can't enable the custom method to be called through REST):

Luckily there is a workaround. We can define the method binding manually: go to source mode in the REST resource definition dialog and add a methodAction for the custom method. You can replace the method name, ID, instance name, etc. The REST resource definition looks very similar to the page definition file we use to define bindings available for ADF Faces. The ADF BC REST interface seems to be designed on common principles with ADF bindings, at least from a definition point of view:
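As a sketch of what the manually added entry can look like (the id, data control and instance names here are illustrative assumptions based on standard ADF binding syntax, not copied from the sample app):

```xml
<methodAction id="calculateEmployees" RequiresUpdateModel="true"
              Action="invokeMethod" MethodName="calculateEmployees"
              IsViewObjectMethod="true" DataControl="AppModuleDataControl"
              InstanceName="data.AppModuleDataControl.EmployeesView1">
  <NamedData NDName="firstName" NDType="java.lang.String"/>
  <NamedData NDName="lastName" NDType="java.lang.String"/>
</methodAction>
```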

The Oracle Data Dictionary – Keeping an eye on your application in uncertain times

The Anti-Kyte - Sun, 2016-07-17 13:15

I’ve got to say that it’s no surprise that we’re leaving Europe. It’s just that we expected it to be on penalties, probably to Germany.
Obviously, that “we” in the last gag is England. Wales and Northern Ireland have shown no sense of decorum and continued to antagonise our European Partners by beating them at football.
Currently, the national mood seems to be that of a naughty child who stuck their fingers in the light socket to see what would happen, and were awfully surprised when it did.

In the midst of all this uncertainty, I’ve decided to seek comfort in the reassuringly familiar.
Step forward the Oracle Data Dictionary – Oracle’s implementation of the Database Catalog.

However closely you follow the Thick Database Paradigm, the Data Dictionary will serve as the Swiss Army Knife in your toolkit for ensuring Maintainability.
I’ll start off with a quick (re)introduction of the Data Dictionary and how to search it using the DICTIONARY view.
Then I’ll cover just some of the ways in which the Data Dictionary can help you to get stones out of horses’ hooves… I mean, keep your application healthy.

Right then….

What’s in the Data Dictionary ?

The answer is, essentially, metadata about any objects you have in your database down to and including source code for any stored program units.
Data Dictionary views tend to come in three flavours :

  • USER_ – anything owned by the currently connected user
  • ALL_ – anything in USER_ plus anything the current user has access to
  • DBA_ – anything in the current database
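
To get a feel for the difference between these flavours, you can run the same count at each level. This is just a sketch; the counts will vary by schema, and the DBA_ views are only visible if you have the appropriate privileges (e.g. the DBA role or SELECT ANY DICTIONARY)…

select 'USER' as scope, count(*) as table_count from user_tables
union all
select 'ALL', count(*) from all_tables
union all
select 'DBA', count(*) from dba_tables;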

The Data Dictionary has quite a lot of stuff in it, as you can tell by running this query :

select count(*)
from dictionary

You can sift through this mountain of information by having a look at the comments available in DICTIONARY (DICT to its friends) for each of the views listed.
For example…

select comments
from dict
where table_name = 'USER_TABLES'

Description of the user's own relational tables
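
Incidentally, DICT is also handy when you can only half-remember the name of the view you’re after. For example, to track down the constraint-related views…

select table_name, comments
from dict
where table_name like 'USER%CONS%'
order by table_name;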

You can see a graphical representation of these USER_ views in whatever Oracle IDE you happen to be using.
For example, in SQLDeveloper…


This graphical tree view corresponds roughly to the following Data Dictionary views :

  • USER_TABLES – Description of the user’s own relational tables
  • USER_VIEWS – Description of the user’s own views
  • USER_EDITIONING_VIEWS – Descriptions of the user’s own Editioning Views
  • USER_INDEXES – Description of the user’s own indexes
  • USER_OBJECTS – Objects owned by the user (includes functions, packages, procedures etc.)
  • USER_QUEUES – All queues owned by the user
  • ALL_QUEUE_TABLES – All queue tables accessible to the user
  • USER_TRIGGERS – Triggers having FOLLOWS or PRECEDES ordering owned by the user (includes Cross Edition Triggers)
  • USER_TYPES – Description of the user’s own types
  • USER_MVIEW_LOGS – All materialized view logs owned by the user
  • USER_SEQUENCES – Description of the user’s own SEQUENCEs
  • USER_SYNONYMS – The user’s private synonyms
  • ALL_SYNONYMS – All synonyms for base objects accessible to the user and session (includes PUBLIC synonyms)
  • USER_DB_LINKS – Database links owned by the user
  • ALL_DB_LINKS – Database links accessible to the user
  • ALL_DIRECTORIES – Description of all directories accessible to the user
  • ALL_EDITIONS – Describes all editions in the database
  • USER_XML_SCHEMAS – Description of XML Schemas registered by the user
  • USER_SCHEDULER_JOBS – All scheduler jobs in the database
  • RESOURCE_VIEW – Whilst not part of the DICTIONARY per se, you can see details of XML DB Schema in this view
  • USER_RECYCLEBIN – User view of his recyclebin
  • ALL_USERS – Information about all users of the database

As all of this metadata is available in views, it can be interrogated programmatically via SQL, as we’ll discover shortly. Before that though, let’s introduce…

The Brexit Schema

To add an element of topicality, the following examples will be based on this schema.

The user creation script looks like this :

grant connect, create table, create procedure, create sequence
    to brexit identified by ceul8r;

alter user brexit default tablespace users;
alter user brexit quota unlimited on users;

You’ll probably want to choose your own (weak) pun-based password.

The tables in this schema are (initially at least)…

create table countries(
    iso_code varchar2(3),
    coun_name varchar2(100) not null,
    curr_code varchar2(3) not null,
    is_eu_flag varchar2(1));

create table currencies(
    iso_code varchar2(3) constraint curr_pk primary key,
    curr_name varchar2(100));

For reasons which will become apparent, we’ll also include this procedure, complete with “typo” to ensure it doesn’t compile…

create or replace procedure add_currency(
	i_iso_code currencies.iso_code%type,
	i_curr_name currencies.curr_name%type)
is
begin
	-- Deliberate Mistake...
	brick it for brexit !
	insert into currencies( iso_code, curr_name)
	values( i_iso_code, i_curr_name);
end add_currency;
/

The examples that follow are based on the assumption that you are connected as the BREXIT user.

First up….

Spotting tables with No Primary Keys

Say that we want to establish whether a Primary Key has been defined for each table in the schema.
Specifically, we want to check permanent tables which comprise the core application tables. We’re less interested in checking on Global Temporary Tables or External Tables.
Rather than wading through the relevant DDL scripts, we can get the Data Dictionary to do the work for us :

select table_name
from user_tables
where temporary = 'N' -- exclude GTTs
and table_name not in
(
    -- exclude External Tables ...
    select table_name
    from user_external_tables
)
and table_name not in
(
    -- see if table has a Primary Key
    select table_name
    from user_constraints
    where constraint_type = 'P'
);

TABLE_NAME
------------------------------
COUNTRIES

It looks like someone forgot to add constraints to the countries table. I blame the shock of Brexit. Anyway, we’d better fix that…

alter table countries add constraint
	coun_pk primary key (iso_code);

…and add an RI constraint whilst we’re at it…

alter table countries add constraint
	coun_curr_fk foreign key (curr_code) references currencies( iso_code);

…so that I’ve got some data with which to test…
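
Before moving on, a quick sanity check against USER_CONSTRAINTS should confirm that COUNTRIES now has both constraints…

select constraint_name, constraint_type
from user_constraints
where table_name = 'COUNTRIES'
order by constraint_name;

…which should list COUN_CURR_FK (type R) and COUN_PK (type P).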

Foreign Keys with No Indexes

In OLTP applications especially, it’s often a good idea to index any columns that are subject to a Foreign Key constraint in order to improve performance.
To see if there are any FK columns in our application that may benefit from an index…

with cons_cols as
(
    select cons.table_name,  cons.constraint_name,
        listagg(cols.column_name, ',') within group (order by cols.position) as columns
    from user_cons_columns cols
    inner join user_constraints cons
		on cols.constraint_name = cons.constraint_name
	where cons.constraint_type = 'R'
    group by cons.table_name, cons.constraint_name
),
ind_cols as
(
    select ind.table_name, ind.index_name,
        listagg(ind.column_name, ',') within group( order by ind.column_position) as columns
    from user_ind_columns  ind
    group by ind.table_name, ind.index_name
)
select cons_cols.table_name, cons_cols.constraint_name, cons_cols.columns
from cons_cols
where cons_cols.table_name not in
(
    select ind_cols.table_name
    from ind_cols
    where ind_cols.table_name = cons_cols.table_name
    and ind_cols.columns like cons_cols.columns||'%'
);

Sure enough, when we run this as BREXIT we get…

TABLE_NAME                     CONSTRAINT_NAME      COLUMNS
------------------------------ -------------------- ------------------------------
COUNTRIES                      COUN_CURR_FK         CURR_CODE
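
The fix is straightforward: index the Foreign Key column. The index name below is just an illustration; use whatever naming convention your application follows…

create index coun_curr_fk_idx on countries( curr_code);

Re-running the query above should then return no rows for COUNTRIES.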

Post Deployment Checks

It’s not just the Data Model that you can keep track of.
If you imagine a situation where we’ve just released the BREXIT code to an environment, we’ll want to check that everything has worked as expected. To do this, we may well recompile any PL/SQL objects in the schema to ensure that everything is valid….

exec dbms_utility.compile_schema(user)

…but once we’ve done this we want to make sure. So…

select object_name, object_type
from user_objects
where status = 'INVALID'
union all
select constraint_name, 'CONSTRAINT'
from user_constraints
where status = 'DISABLED';

OBJECT_NAME                    OBJECT_TYPE
------------------------------ -------------------
ADD_CURRENCY                   PROCEDURE

Hmmm, I think we’d better fix that. But how do we find out what the error is without recompiling? Hmmm…

select line, position, text
from user_errors
where name = 'ADD_CURRENCY'
and type = 'PROCEDURE'
order by sequence

LINE POSITION TEXT
---- -------- --------------------------------------------------------------------------------
  10        8 PLS-00103: Encountered the symbol "IT" when expecting one of the following:

                 := . ( @ % ;

Impact Analysis

Inevitably, at some point during the life of your application, you will need to make a change to it. This may well be a change to a table structure, or even to some reference data you previously thought was immutable.
In such circumstances, you really want to get a reasonable idea of what impact the change is going to have in terms of changes to your application code.
For example, if we need to make a change to the CURRENCIES table…

select name, type
from user_dependencies
where referenced_owner = user
and referenced_name = 'CURRENCIES'
and referenced_type = 'TABLE'
union all
select child.table_name, 'TABLE'
from user_constraints child
inner join user_constraints parent
	on child.r_constraint_name = parent.constraint_name
where child.constraint_type = 'R'
and parent.table_name = 'CURRENCIES'

NAME                           TYPE
------------------------------ ------------------
ADD_CURRENCY                   PROCEDURE
COUNTRIES                      TABLE             

Now we know the objects that are potentially affected by this proposed change, we have the scope of our Impact Analysis, at least in terms of objects in the database.
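
One caveat : USER_DEPENDENCIES only records direct dependents. If you want to chase dependencies of dependencies (say, a package that calls ADD_CURRENCY), one approach is a hierarchical query. This is only a sketch; the NOCYCLE keyword guards against any circular dependencies…

select level, name, type
from user_dependencies
start with referenced_name = 'CURRENCIES'
and referenced_type = 'TABLE'
connect by nocycle prior name = referenced_name
and prior type = referenced_type;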


As always, there’s far more to the Data Dictionary than what we’ve covered here.
Steven Feuerstein has written a more PL/SQL focused article on this topic.
That about wraps it up for now, so time for Mexit.

Filed under: Oracle, PL/SQL, SQL Tagged: Data Dictionary, dbms_utility.compile_schema, dict, dictionary, listagg, thick database paradigm, user_constraints, user_cons_columns, USER_DEPENDENCIES, user_errors, user_ind_columns, user_objects, user_tables

