Probably the best way to get to know your users is to watch them work, in their typical environment. That, and getting to talk to them right after observing them. It’s from that perspective that you can really see what works, what doesn’t, and what people don’t like. And this is exactly what we want to learn about in our quest to improve our users’ experience using Oracle software.
So, we’ve been eager to get out and do some site visits, particularly to learn more about supply chain management (SCM). For one, SCM is an area most of us on the team haven’t spent much time working on. But also, at least for me, working mostly in the abstract (or at least the virtual), there’s something fascinating and satisfying about how physical products and materials move throughout the world, starting as one thing and being manufactured or assembled into something else.
We had a contact at Micros, so we started there. Also, they’re an Oracle company, so that made it much easier. You’ve probably encountered Micros products, even if you haven’t noticed them—Micros does point of sales (POS) systems for retail and hospitality, meaning lots of restaurants, stadiums, and hotels.
For this particular adventure, we teamed up with the SCM team within OAUX, and went to Hanover, Maryland, where Micros has its warehouse operations, and where all of its orders are put together and shipped out across the world.
We observed and talked to a variety of people there: the pickers, who grab all the pieces for an order; the shippers, who get the orders ready to ship out and load them on the trucks; receiving, who takes in all the new inventory; QA, who have to make sure incoming parts are OK, as well as items that are returned; and cycle counters, who count inventory on a nightly basis. We also spoke to various managers and people involved in the business end of things.
In addition to following along and interviewing different employees, the SCM team ran a focus group, and the AppsLab team ran something like a focus group, but which is called a User Journey Map. With this research method, you have users map out their tasks (using sticky notes, a UX researcher’s best friend), while also including associated thoughts and feelings corresponding to each step of each task. We don’t just want to know what users are doing or have to do, but how they feel about it, and the kinds of questions they may have.
In an age where we’re accustomed to pressing a button and having something we want delivered in two days (or less), it’s helpful on a personal level to see how this sort of thing actually happens, and all the people involved in the background. On a professional level, you see how software plays a role in all of it—keeping it all together, but also imposing limits on what can be done and what can be tracked.
This was my first site visit, though I hope there are plenty more in the future. There’s no substitute for this kind of direct observation, where you can also ask questions. You come back tired, but with lots of notes, and lots of new insights.
As you may know, Microsoft has released SQL Server 2014 SP2 with some interesting improvements that concern the SQL Server AlwaysOn availability groups feature. These improvements are also included in SQL Server 2012 SP3 and SQL Server 2016. Among all the AlwaysOn fixes and improvements, I would like to focus on those described in Microsoft KB3173156 and KB3112363. In this first blog post, I will cover only the improvement to the lease timeout, which is part of the AlwaysOn health model.
Have you ever faced a lease timeout issue? If so, you have certainly noticed that it is a good indicator of a system-wide problem, and that finding the root cause could be a burdensome task because diagnostic information was missing and we had to correlate different performance metrics ourselves. Fortunately, the new service packs provide enhancements in this area.
Let’s take as an example a 100% CPU utilization scenario that makes the primary replica unresponsive and unable to answer the cluster’s isAlive() routine. This is typically a situation where we may face a lease timeout. After simulating this scenario in my lab environment, here is what I found in the SQL Server error log on my primary replica (I have deliberately filtered it to include only the sample we want to focus on).
Firstly, we may see several new messages related to lease timeout issues in the interval 12:39:54 – 12:43:22. For example: the WSFC did not receive a process event signal from SQL Server within the lease timeout period, or the lease between the AG and the WSFC has expired. Diagnostic messages have been enhanced to give us a better understanding of the lease issue. At this point we know we are facing a lease timeout, but we don’t know the root cause yet. Improvements have also been extended to the cluster log in order to provide more insight into the system behavior at the moment of the lease timeout, as we may see below:
00000644.00000768::2016/07/15-12:40:06.575 ERR [RCM] rcm::RcmResource::HandleFailure: (TestGrp)
00000644.00000c84::2016/07/15-12:40:06.768 INFO [GEM] Node 2: Sending 1 messages as a batched GEM message
00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] resource TestGrp: failure count: 0, restartAction: 0 persistentState: 1.
00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] numDependents is zero, auto-returning true
00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] Will queue immediate restart (500 milliseconds) of TestGrp after terminate is complete.
00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] Res TestGrp: ProcessingFailure -> WaitingToTerminate( DelayRestartingResource )
00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] TransitionToState(TestGrp) ProcessingFailure–>[WaitingToTerminate to DelayRestartingResource].
00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] Res TestGrp: [WaitingToTerminate to DelayRestartingResource] -> Terminating( DelayRestartingResource )
00000644.00000768::2016/07/15-12:40:06.768 INFO [RCM] TransitionToState(TestGrp) [WaitingToTerminate to DelayRestartingResource]–>[Terminating to DelayRestartingResource].
00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] Lease timeout detected, logging perf counter data collected so far
00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] Date/Time, Processor time(%), Available memory(bytes), Avg disk read(secs), Avg disk write(secs)
00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] 7/15/2016 10:39:24.0, 8.866394, 912523264.000000, 0.000450, 0.000904
00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] 7/15/2016 10:39:34.0, 25.287347, 919531520.000000, 0.001000, 0.000594
00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] 7/15/2016 10:39:44.0, 25.360508, 921534464.000000, 0.000000, 0.001408
00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] 7/15/2016 10:39:55.0, 81.225454, 921903104.000000, 0.000513, 0.000640
00000cc0.00001350::2016/07/15-12:40:12.452 WARN [RES] SQL Server Availability Group: [hadrag] 7/15/2016 10:40:5.0, 100.000000, 922415104.000000, 0.002800, 0.002619
00000cc0.00001350::2016/07/15-12:40:12.452 INFO [RES] SQL Server Availability Group: [hadrag] Stopping Health Worker Thread
Looking at the same time range as the SQL Server error log, we may notice similar messages concerning the detection of the lease timeout, with additional information coming from the perfmon counters (the WARN [RES] lines in the sample above). If we reformat that portion into the table below, we get a much clearer picture of our issue:

Date/Time   Processor time (%)  Available memory (bytes)  Avg disk read (secs)  Avg disk write (secs)
10:39:24.0  8.866394            912523264                 0.000450              0.000904
10:39:34.0  25.287347           919531520                 0.001000              0.000594
10:39:44.0  25.360508           921534464                 0.000000              0.001408
10:39:55.0  81.225454           921903104                 0.000513              0.000640
10:40:05.0  100.000000          922415104                 0.002800              0.002619
CPU utilization is what we must focus on here. Having this valuable information directly in the cluster.log when we troubleshoot a lease timeout issue will help us a lot. To be clear, this information was not impossible to get with older versions, but we had to retrieve it in a more roundabout way (by using the AlwaysOn_health extended event session, for example).
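On those older builds, a sketch of that more roundabout route is to read the lease-related events back out of the AlwaysOn_health session’s target files. This assumes the default file name pattern; adjust the path to your log directory if needed:

```sql
-- Hedged sketch: pull lease-related events out of the AlwaysOn_health
-- extended event session target files on older builds.
SELECT
    xe.object_name AS event_name,
    CAST(xe.event_data AS XML).value('(event/@timestamp)[1]', 'datetime2') AS event_time,
    CAST(xe.event_data AS XML) AS event_data
FROM sys.fn_xe_file_target_read_file('AlwaysOn_health*.xel', NULL, NULL, NULL) AS xe
WHERE xe.object_name IN ('availability_group_lease_expired',
                         'hadr_ag_lease_renewal')
ORDER BY event_time;
```

You would then have to correlate these timestamps with perfmon data collected separately, which is exactly the extra work the new cluster.log entries save us.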
Next, other improvements concern existing extended events such as availability_group_lease_expired and hadr_ag_lease_renewal. The next picture points out the newly available fields: current_time, new_timeout and state.
Let me show you their usefulness with another example. This time, I deliberately hung the sqlserver.exe process of the primary replica in order to trigger an unresponsive-lease scenario. I got interesting output from the extended event trace on both sides.
From the former primary, there are no related records during the period when the SQL Server process was unresponsive, but we may see a record at 17:19:11. The lease renewal process fails, and we get a better picture of the problem by looking at the corresponding state (LeaseNotValid), followed by the availability_group_lease_expired event. Note that the current_time value (the time at which the lease expired) is greater than the new_timeout value (the timeout time, at which availability_group_lease_expired is raised) here (3215765 > 3064484), which confirms that we experienced a timeout in this case.
On the new primary, we may notice the start of the lease worker thread, but until the replica stabilizes in the PRIMARY ONLINE state, it deliberately postpones the lease check process (materialized by the StartedExcessLeaseSleep / ExcessSleepSucceeded state values).
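If you want to capture these enhanced events yourself, a dedicated event session is one way to do it. The sketch below uses the standard event and target names; the session and file names are my own choices:

```sql
-- Hedged sketch: a dedicated event session to capture the enhanced lease
-- events (current_time, new_timeout, state) discussed above.
CREATE EVENT SESSION [ag_lease_monitoring] ON SERVER
ADD EVENT sqlserver.availability_group_lease_expired,
ADD EVENT sqlserver.hadr_ag_lease_renewal
ADD TARGET package0.event_file (SET filename = N'ag_lease_monitoring.xel')
WITH (STARTUP_STATE = ON);

ALTER EVENT SESSION [ag_lease_monitoring] ON SERVER STATE = START;
```

Running this on both replicas lets you compare the lease state transitions from each side, as in the example above.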
In the next blog I will talk about improvements in the detection of the availability group replication latency.
This article, SQL Server AlwaysOn: new service packs and new diagnostic capabilities, first appeared on the dbi services blog.
If your Oracle SID doesn’t match your instance name in init.ora, this is quite confusing.
Check my previous post, what is sid in oracle
The instance_name column of the view v$instance, as well as the INSTANCE_NAME attribute of the USERENV context, matches the ORACLE_SID of the underlying operating system.
SQL> var ORACLE_SID varchar2(9)
SQL> set autoprint on
SQL> exec dbms_system.get_env('ORACLE_SID',:ORACLE_SID)

PL/SQL procedure successfully completed.

ORACLE_SID
------------
ORA001

SQL> select sys_context('USERENV','INSTANCE_NAME') from dual;

SYS_CONTEXT('USERENV','INSTANCE_NAME')
---------------------------------------
ORA001

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
ORA001

SQL>
This is not the same as the init.ora parameter
SQL> select name, value, description from v$parameter where name='instance_name';

NAME          VALUE     DESCRIPTION
------------- --------- ----------------------------------------
instance_name INS001    instance name supported by the instance

SQL>
The instance_name doesn’t have to match anything. It’s relevant if you use ADR, and you probably do (background_dump_dest and family are deprecated now). In your ADR documentation you’ll read
But this SID is actually your init.ora instance name. And not your ORACLE_SID.
You don’t need this on Powershell 5.0 and upwards because there’s a built-in cmdlet, but for previous versions:
convertfrom-csv $(schtasks /Query /S server1 /TN "run somesstuff" /V /FO CSV)
HostName : server1
TaskName : \run somesstuff
Next Run Time : N/A
Status : Ready
Logon Mode : Interactive only
Last Run Time : 13/07/2016 10:05:43
Last Result : 0
Author : matt
Task To Run : C:\powershell\Modules\somesstuff-PCs\run-somesstuff.bat
Start In : N/A
Comment : Scheduled job which does some stuff
Scheduled Task State :
Idle Time :
Power Management :
Run As User :
Delete Task If Not Rescheduled :
Stop Task If Runs X Hours and X Mins :
Schedule Type :
Start Time :
Start Date :
End Date :
Repeat: Every :
Repeat: Until: Time :
Repeat: Until: Duration :
Repeat: Stop If Still Running :
HostName : More detail at http://ourwebsite
TaskName : Enabled
Next Run Time : Disabled
Status : Stop On Battery Mode, No Start On Batteries
Logon Mode : matt
Last Run Time : Enabled
Last Result : 72:00:00
Author : Scheduling data is not available in this format.
Task To Run : One Time Only
Start In : 10:20:21
Comment : 25/05/2016
Scheduled Task State : N/A
Idle Time : N/A
Power Management : N/A
Run As User : Disabled
Delete Task If Not Rescheduled : Disabled
Stop Task If Runs X Hours and X Mins : Disabled
Schedule : Disabled
Schedule Type :
Start Time :
Start Date :
End Date :
Repeat: Every :
Repeat: Until: Time :
Repeat: Until: Duration :
Repeat: Stop If Still Running :
This takes the output of schtasks in CSV format and imports it into PowerShell objects. As the second object above shows, the verbose CSV from schtasks doesn’t always map cleanly onto the field names, so treat the parsed results with some care.
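For comparison, on systems where the built-in ScheduledTasks cmdlets are available, a sketch of the equivalent query might look like this ('server1' and the task name are the example values from above):

```powershell
# Query a scheduled task and its run history on a remote machine
# using the built-in ScheduledTasks module instead of schtasks.exe.
$session = New-CimSession -ComputerName 'server1'
Get-ScheduledTask -CimSession $session -TaskName 'run somesstuff' |
    Get-ScheduledTaskInfo
```

Get-ScheduledTaskInfo returns properly typed properties (LastRunTime, LastTaskResult, NextRunTime), so there is no CSV parsing to go wrong.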
When performing a PeopleSoft security audit, reviewing what rights and privileges individual users have been granted for system and application security privileges (authorization) is one of the key deliverables. The following are several of the topics that Integrigy investigates during our PeopleSoft security configuration assessments - take a look today at your settings:
Review users with access to
- The SQR folder
- Process scheduler
- Security and other sensitive administration menus
- Security and other sensitive administration roles
- Web profiles
- PeopleSoft Administrator Role
- Correction mode
To check access to PeopleTools, use the following query. If you need assistance with the other topics, let us know.
-- Access to PeopleTools
SELECT UNIQUE A.OPRID, A.OPRDEFNDESC, A.ACCTLOCK, B.ROLENAME
FROM SYSADM.PSOPRDEFN A, SYSADM.PSROLEUSER B
WHERE A.OPRID = B.ROLEUSER
AND upper(B.ROLENAME) ='PEOPLETOOLS'
ORDER BY A.OPRID,B.ROLENAME;
If you have questions, please contact us at firstname.lastname@example.org
Michael A. Miller, CISSP-ISSMP, CCSP
Recently at Rittman Mead we have been asked a lot of questions surrounding Oracle’s new Data Visualization Desktop tool and how it integrates with OBIEE. Rather than referring people to the Oracle docs on DVD, I decided to share with you my experience connecting to an OBIEE 12c instance and take you through some of the things I learned through the process.
In a previous blog, I went through database connections with Data Visualization Desktop and how to create reports using data pulled directly from the database. Connecting DVD to OBIEE is largely the same process, but it allows the user to pull in data at the pre-existing report level. I decided to use our 12c ChitChat demo server as the OBIEE source and created some sample reports in Answers to test out with DVD.
From the DVD Data Sources page, clicking "Create New Data Source" brings up a selection pane with the option to select “From Oracle Applications.”
Clicking this option brings up a connection screen with options to enter a connection name, URL (the location of the reports you want to pull in as a source), username, and password. This seems like a pretty straightforward process. The Oracle docs on connecting DVD to OBIEE say to navigate to the web catalog, select the folder containing the analysis you want to use as a source, and then copy and paste the URL from your browser into the URL field in DVD. However, using this method will cause the connection to fail.
To get Data Visualization Desktop to connect properly, you have to use the URL that you would normally use to log into OBIEE analytics with the proper username and password.
Once connected, the web catalog folders are displayed.
From here, you can navigate to the analyses you want to use for data sources.
Selecting the analysis you want to use as your data source is the same process as selecting schemas and tables from a database source. Once the selection is made, a new screen is displayed with all of the tables and columns that were used for the analysis within OBIEE.
From here you can specify each column as an attribute or measure column and change the aggregation for your measures to something other than what was imported with the analysis.
Clicking "Add to Project" loads all the data into DVD under Data Elements and is displayed on the right hand side just like subject area contents in OBIEE.
The objective of pulling data in from existing analyses is described by Oracle as revisualization. Keep in mind that Data Visualization Desktop is meant to be a discovery tool and not so much a day-to-day report generator.
The original report was a pivot table with Revenue and Order information for geographical, product and time series dimensions. Let’s say that I just wanted to look at the revenue for all cities located in the Americas by a specific brand for the year 2012.
Dragging in the appropriate columns and adding filters took seconds and the data loaded almost instantaneously. I changed the view to horizontal bar and added a desc sort to Revenue and this was my result:
Notice how the revenue for San Francisco is much higher than for any of the other cities. Let’s say I want to get a closer look at all the other cities without seeing the revenue data for San Francisco. I could create a new filter for City and exclude San Francisco from the list, or I could just create a filter range for Revenue. Choosing the latter gave me the option of moving a slider to change my revenue value distribution and showed me the results in real time. Pretty cool, right?
Taking one report and loading it in can open up a wide range of data discovery opportunities but what if there are multiple reports I want to pull data from? You can do this and combine the data together in DVD as long as the two reports contain columns to join the two together.
Going back to my OBIEE connection, there are two reports I created on the demo server that both contain customer data.
By pulling in both the Customer Information and Number of Customer Orders Per Year report, Data Visualization Desktop creates two separate data sources which show up under Data Elements.
Inspecting one of the data sources shows the match between the two is made on both Customer Number and Customer Name columns.
Note: It is possible to make your own column matches manually using the Add Another Match feature.
By using two data sets from two different reports, you can blend the data together to discover trends, show outliers and view the data together without touching the database or having to create new reports within OBIEE.
The ability to connect directly to OBIEE with Data Visualization Desktop and pull in data from individual analyses is a very powerful feature that makes DVD that much greater. Combining data from multiple analyses and blending them together internally creates some exciting data discovery possibilities for users with existing OBIEE implementations.
Redwood Shores, Calif.—Jul 18, 2016
Oracle is proud to provide Oracle Cloud technology and engineering resources to the White House Office of Science and Technology Policy’s program Platforms Enabling Advanced Wireless Program (PAWR). The program is led by the National Science Foundation, the nonprofit organization US Ignite, and a consortium of industry and academic leaders collaborating to better understand the unique challenges and opportunities created by next generation platforms for networking.
Oracle Communications will provide core network control, analytics and network orchestration technology to researchers and help them understand the impact of subscriber behavior, enhance orchestration, and bolster security. Oracle’s contributions to this groundbreaking initiative will aid advancement in wireless technology by:
- Discovering how applied analytics can help minimize negative impacts on orchestration, and improve overall network and service performance;
- Monitoring and measuring networks in the new environment to provide optimal performance and reliability;
- Analyzing capacity in a virtual network, making resources available (such as hardware and licenses) when needed;
- Identifying new formulas and metrics to engineer and secure cloud-based telecom networks;
- Determining what impact subscriber behaviors and events have on network/service orchestration.
Oracle’s research and development will assist in understanding how to protect against network abuse carried over legitimate network connections, ensuring that even ‘trusted’ networks cannot abuse their access. This will include analyzing the impact from other networks through misconfigurations or malformed packets. Additionally, our contributions will help establish standards, procedures and principles for the telecommunications cloud.
“We see an opportunity to bring the power and flexibility of the cloud to telecommunications,” said Doug Suriano, senior vice president and general manager, Oracle Communications. “As a means to better understand the impact of subscriber behaviors to traffic engineering, how analytics can enhance orchestration at the network and service levels, and how to bolster security of the control plane to protect against malicious behavior.”
Oracle is proud to join as a founding board member of PAWR, an organization steering the research agenda and policy issues for US Ignite, responsible for the design, scope, and research goals for its members.
- White House Fact Sheet
- National Science Foundation @NSF
- The White House Office of Science and Technology Policy @whitehouseostp
- US Ignite @US_Ignite
- #advancedwireless | #PAWR
- To learn more about communications network scalability and security, please connect with us on Twitter @OracleComms and at facebook.com/oraclecommunications, or visit oracle.com/communications.
Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor
The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation.
Hard to believe it’s been nearly three years since we debuted the Leap Motion-controlled robot arm. Since then, it’s been a mainstay demo for us, combining a bit of fun with the still-emergent interaction mechanism, gesture.
Anthony (@anthonyslai) remains the master of the robot arm, and since we lost access to the original video, Noel (@noelportugal) shot a new one in the Gadget Lab at HQ where the robot arm continues to entertain visitors.
Interesting note, Amazon showed a very similar demo when they debuted AWS IoT. We nerds love robots.
We continue to investigate gesture as an interaction; in addition to our work with the Leap Motion as a robot arm controller and as a feature in the Smart Office, we’ve also used the Myo armband to drive Anki race cars, a project Thalmic Labs featured on their developer blog.
Gesture remains a Wild West, with no standards and different implementations, but we think there’s something to it. And we’ll keep investigating and having some fun while we do.
Stay tuned.
The sample application (ADFBCRestApp_v8.zip) implements a custom method exposed through ADF BC REST: calculateEmployees. This method is created in the VO implementation class and accepts two parameters, firstName and lastName. The method works correctly; I can execute it through POST by passing a predefined payload with the method name and parameters (read more in the developer guide, section 22.12.5, Executing a Custom Action):
Don’t forget to specify Content-Type, otherwise the POST request to ADF BC REST will fail:
Let's look at the custom method implementation and where the workaround is required. The custom method uses View Criteria to filter the VO and return the estimated row count. All fine here:
The method should be exposed through the VO client interface:
We should generate the custom method binding registry in the REST resource custom methods section (client interface). In JDEV 12.2.1 this works by ticking the Enable checkbox, but in JDEV 12.2.1.1 the same action throws an error (you can't enable the custom method to be called through REST):
Luckily there is a workaround. We can define the method binding manually: go to source mode in the REST resource definition dialog and add a methodAction for the custom method. You can change the method name, ID, instance name, etc. The REST resource definition looks very similar to the page definition files we use to define bindings available for ADF Faces; the ADF BC REST interface seems to be designed on common principles with ADF bindings, at least from a definition point of view:
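For illustration, the manually added entry might look something like the sketch below. This follows the usual ADF page definition methodAction shape; the id, DataControl and InstanceName values here are placeholders, not the sample application’s actual names:

```xml
<!-- Hypothetical methodAction entry added in source mode of the REST
     resource definition; names are placeholders. -->
<methodAction id="calculateEmployees"
              DataControl="AppModuleDataControl"
              InstanceName="AppModuleDataControl.EmployeesView"
              MethodName="calculateEmployees"
              RequiresUpdateModel="true"
              Action="invokeMethod"
              IsViewObjectMethod="true">
  <NamedData NDName="firstName" NDType="java.lang.String"/>
  <NamedData NDName="lastName" NDType="java.lang.String"/>
</methodAction>
```

After saving, the custom method can be invoked through POST exactly as in the payload example above.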
I’ve got to say that it’s no surprise that we’re leaving Europe. It’s just that we expected it to be on penalties, probably to Germany.
Obviously, that “we” in the last gag is England. Wales and Northern Ireland have shown no sense of decorum and continued to antagonise our European Partners by beating them at football.
Currently, the national mood seems to be that of a naughty child who stuck their fingers in the light socket to see what would happen, and were awfully surprised when it did.
In the midst of all this uncertainty, I’ve decided to seek comfort in the reassuringly familiar.
Step forward the Oracle Data Dictionary – Oracle’s implementation of the Database Catalog.
However closely you follow the Thick Database Paradigm, the Data Dictionary will serve as the Swiss Army Knife in your toolkit for ensuring Maintainability.
I’ll start off with a quick (re)introduction of the Data Dictionary and how to search it using the DICTIONARY view.
Then I’ll cover just some of the ways in which the Data Dictionary can help you to get stones out of horses’ hooves keep your application healthy.
Right then… What’s in the Data Dictionary?
The answer is, essentially, metadata about any objects you have in your database down to and including source code for any stored program units.
Data Dictionary views tend to come in three flavours :
- USER_ – anything owned by the currently connected user
- ALL_ – anything in USER_ plus anything the current user has access to
- DBA_ – anything in the current database
The Data Dictionary has quite a lot of stuff in it, as you can tell by running this query :
select count(*)
from dictionary
/
You can sift through this mountain of information by having a look at the comments available in DICTIONARY (DICT to its friends) for each of the views listed.
select comments
from dict
where table_name = 'USER_TABLES'
/

COMMENTS
--------------------------------------------------
Description of the user's own relational tables
You can see a graphical representation of these USER_ views in whatever Oracle IDE you happen to be using.
For example, in SQLDeveloper…
This graphical tree view corresponds roughly to the following Data Dictionary views:

- USER_TABLES: Description of the user’s own relational tables
- USER_VIEWS: Description of the user’s own views
- USER_EDITIONING_VIEWS: Descriptions of the user’s own Editioning Views
- USER_INDEXES: Description of the user’s own indexes
- USER_OBJECTS: Objects owned by the user (this includes functions, packages, procedures etc)
- USER_QUEUES: All queues owned by the user
- ALL_QUEUE_TABLES: All queue tables accessible to the user
- USER_TRIGGERS: Triggers having FOLLOWS or PRECEDES ordering owned by the user (includes Cross Edition Triggers)
- USER_TYPES: Description of the user’s own types
- USER_MVIEW_LOGS: All materialized view logs owned by the user
- USER_SEQUENCES: Description of the user’s own SEQUENCEs
- USER_SYNONYMS: The user’s private synonyms
- ALL_SYNONYMS: All synonyms for base objects accessible to the user and session (includes PUBLIC synonyms)
- USER_DB_LINKS: Database links owned by the user
- ALL_DB_LINKS: Database links accessible to the user
- ALL_DIRECTORIES: Description of all directories accessible to the user
- ALL_EDITIONS: Describes all editions in the database
- USER_XML_SCHEMAS: Description of XML Schemas registered by the user
- USER_SCHEDULER_JOBS: All scheduler jobs in the database
- RESOURCE_VIEW: Whilst not part of the DICTIONARY per se, you can see details of XML DB Schemas in this view
- USER_RECYCLEBIN: User view of his recyclebin
- ALL_USERS: Information about all users of the database
As all of this metadata is available in views, it can be interrogated programmatically via SQL, as we’ll discover shortly. Before that though, let’s introduce…

The Brexit Schema
To add an element of topicality, the following examples will be based on this schema.
The user creation script looks like this :
grant connect, create table, create procedure, create sequence
    to brexit identified by ceul8r
/

alter user brexit default tablespace users
/

alter user brexit quota unlimited on users
/
You’ll probably want to choose your own (weak) pun-based password.
The tables in this schema are (initially at least)…
create table countries
(
    iso_code varchar2(3),
    coun_name varchar2(100) not null,
    curr_code varchar2(3) not null,
    is_eu_flag varchar2(1)
)
/

create table currencies
(
    iso_code varchar2(3) constraint curr_pk primary key,
    curr_name varchar2(100)
)
/
For reasons which will become apparent, we’ll also include this procedure, complete with “typo” to ensure it doesn’t compile…
create or replace procedure add_currency
(
    i_iso_code currencies.iso_code%type,
    i_curr_name currencies.curr_name%type
)
as
begin
    -- Deliberate Mistake...
    brick it for brexit !
    insert into currencies( iso_code, curr_name)
    values( i_iso_code, i_curr_name);
end add_currency;
/
The examples that follow are based on the assumption that you are connected as the BREXIT user.
First up…

Spotting tables with No Primary Keys
Say that we want to establish whether a Primary Key has been defined for each table in the schema.
Specifically, we want to check permanent tables which comprise the core application tables. We’re less interested in checking on Global Temporary Tables or External Tables.
Rather than wading through the relevant DDL scripts, we can get the Data Dictionary to do the work for us :
select table_name
from user_tables
where temporary = 'N' -- exclude GTTs
and table_name not in
(
    -- exclude External Tables ...
    select table_name
    from user_external_tables
)
and table_name not in
(
    -- see if table has a Primary Key
    select table_name
    from user_constraints
    where constraint_type = 'P'
)
/

TABLE_NAME
------------------------------
COUNTRIES
It looks like someone forgot to add constraints to the countries table. I blame the shock of Brexit. Anyway, we’d better fix that…
alter table countries add constraint coun_pk primary key (iso_code) /
…and add an RI constraint whilst we’re at it…
alter table countries add constraint coun_curr_fk foreign key (curr_code) references currencies( iso_code) /
…so that I’ve got some data with which to test…

Foreign Keys with No Indexes
In OLTP applications especially, it’s often a good idea to index any columns that are subject to a Foreign Key constraint in order to improve performance.
To see if there are any FK columns in our application that may benefit from an index…
with cons_cols as
(
    select cons.table_name, cons.constraint_name,
        listagg(cols.column_name, ',') within group (order by cols.position) as columns
    from user_cons_columns cols
    inner join user_constraints cons
        on cols.constraint_name = cons.constraint_name
    where cons.constraint_type = 'R'
    group by cons.table_name, cons.constraint_name
),
ind_cols as
(
    select ind.table_name, ind.index_name,
        listagg(ind.column_name, ',') within group (order by ind.column_position) as columns
    from user_ind_columns ind
    group by ind.table_name, ind.index_name
)
select cons_cols.table_name, cons_cols.constraint_name, cons_cols.columns
from cons_cols
where cons_cols.table_name not in
(
    select ind_cols.table_name
    from ind_cols
    where ind_cols.table_name = cons_cols.table_name
    and ind_cols.columns like cons_cols.columns||'%'
)
/
Sure enough, when we run this as BREXIT we get…
TABLE_NAME                     CONSTRAINT_NAME      COLUMNS
------------------------------ -------------------- ------------------------------
COUNTRIES                      COUN_CURR_FK         CURR_CODE

Post Deployment Checks
It’s not just the Data Model that you can keep track of.
If you imagine a situation where we’ve just released the BREXIT code to an environment, we’ll want to check that everything has worked as expected. To do this, we may well recompile any PL/SQL objects in the schema to ensure that everything is valid….
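A sketch of that recompilation step, using dbms_utility.compile_schema against the BREXIT schema from these examples:

```sql
-- Recompile invalid PL/SQL objects in the BREXIT schema.
-- compile_all => false limits the recompile to invalid objects only.
begin
    dbms_utility.compile_schema(
        schema      => 'BREXIT',
        compile_all => false);
end;
/
```

Note that compile_schema silently skips objects it can’t fix, which is exactly why the checks below are still needed.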
…but once we’ve done this we want to make sure. So…
select object_name, object_type
from user_objects
where status = 'INVALID'
union
select constraint_name, 'CONSTRAINT'
from user_constraints
where status = 'DISABLED'
/

OBJECT_NAME                    OBJECT_TYPE
------------------------------ -------------------
ADD_CURRENCY                   PROCEDURE
Hmmm, I think we’d better fix that. But how do we find out what the error is without recompiling? Hmmm…
select line, position, text
from user_errors
where name = 'ADD_CURRENCY'
and type = 'PROCEDURE'
order by sequence
/

LINE POSITION TEXT
---- -------- --------------------------------------------------------------------------------
10   8        PLS-00103: Encountered the symbol "IT" when expecting one of the following:
              := . ( @ % ;

Impact Analysis
Inevitably, at some point during the life of your application, you will need to make a change to it. This may well be a change to a table structure, or even to some reference data you previously thought was immutable.
In such circumstances, you really want to get a reasonable idea of what impact the change is going to have in terms of changes to your application code.
For example, if we need to make a change to the CURRENCIES table…
select name, type
from user_dependencies
where referenced_owner = user
and referenced_name = 'CURRENCIES'
and referenced_type = 'TABLE'
union all
select child.table_name, 'TABLE'
from user_constraints child
inner join user_constraints parent
    on child.r_constraint_name = parent.constraint_name
where child.constraint_type = 'R'
and parent.table_name = 'CURRENCIES'
/

NAME                           TYPE
------------------------------ ------------------
ADD_CURRENCY                   PROCEDURE
COUNTRIES                      TABLE
Now that we know the objects that are potentially affected by this proposed change, we have the scope of our Impact Analysis, at least in terms of objects in the database.

Conclusion
As always, there’s far more to the Data Dictionary than what we’ve covered here.
Steven Feuerstein has written a more PL/SQL focused article on this topic.
That about wraps it up for now, so time for Mexit.
Filed under: Oracle, PL/SQL, SQL Tagged: Data Dictionary, dbms_utility.compile_schema, dict, dictionary, listagg, thick database paradigm, user_constraints, user_cons_columns, USER_DEPENDENCIES, user_errors, user_ind_columns, user_objects, user_tables