You don’t need this on PowerShell 5.0 and upwards because there’s a built-in cmdlet, but for previous versions:
ConvertFrom-Csv $(schtasks /Query /S server1 /TN "run somesstuff" /V /FO CSV)
HostName : server1
TaskName : \run somesstuff
Next Run Time : N/A
Status : Ready
Logon Mode : Interactive only
Last Run Time : 13/07/2016 10:05:43
Last Result : 0
Author : matt
Task To Run : C:\powershell\Modules\somesstuff-PCs\run-somesstuff.bat
Start In : N/A
Comment : Scheduled job which does some stuff. More detail at http://ourwebsite
Scheduled Task State : Enabled
Idle Time : Disabled
Power Management : Stop On Battery Mode, No Start On Batteries
Run As User : matt
Delete Task If Not Rescheduled : Enabled
Stop Task If Runs X Hours and X Mins : 72:00:00
Schedule : Scheduling data is not available in this format.
Schedule Type : One Time Only
Start Time : 10:20:21
Start Date : 25/05/2016
End Date : N/A
Days : N/A
Months : N/A
Repeat: Every : Disabled
Repeat: Until: Time : Disabled
Repeat: Until: Duration : Disabled
Repeat: Stop If Still Running : Disabled
This runs schtasks with CSV output, then converts that into a PowerShell object.
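Wrapped up as a reusable function, it might look like this (the function name and parameter names are mine, not from the original post):

function Get-LegacyScheduledTask {
    param(
        [Parameter(Mandatory)][string]$ComputerName,
        [Parameter(Mandatory)][string]$TaskName
    )
    # /V gives verbose output; /FO CSV emits a header row plus data rows,
    # which ConvertFrom-Csv turns into PowerShell objects
    schtasks /Query /S $ComputerName /TN $TaskName /V /FO CSV | ConvertFrom-Csv
}

Get-LegacyScheduledTask -ComputerName server1 -TaskName 'run somesstuff'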
When performing a PeopleSoft security audit, reviewing the rights and privileges individual users have been granted (authorization), for both system and application security, is one of the key deliverables. The following are several of the topics that Integrigy investigates during our PeopleSoft security configuration assessments - take a look today at your settings:
Review users with access to
- The SQR folder
- Process scheduler
- Security and other sensitive administration menus
- Security and other sensitive administration roles
- Web profiles
- PeopleSoft Administrator Role
- Correction mode
To check access to PeopleTools, use the following. If you need assistance with the other topics, let us know –
-- Access to PeopleTools
SELECT UNIQUE A.OPRID, A.OPRDEFNDESC, A.ACCTLOCK, B.ROLENAME
FROM SYSADM.PSOPRDEFN A, SYSADM.PSROLEUSER B
WHERE A.OPRID = B.ROLEUSER
AND upper(B.ROLENAME) ='PEOPLETOOLS'
ORDER BY A.OPRID,B.ROLENAME;
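As a sketch, the same query can be adapted for the other role-based checks in the list above; for example, for the delivered PeopleSoft Administrator role (the role name below assumes a standard install - verify it against PSROLEDEFN in your environment):

-- Access to the PeopleSoft Administrator role (role name assumed)
SELECT UNIQUE A.OPRID, A.OPRDEFNDESC, A.ACCTLOCK, B.ROLENAME
FROM SYSADM.PSOPRDEFN A, SYSADM.PSROLEUSER B
WHERE A.OPRID = B.ROLEUSER
AND upper(B.ROLENAME) = 'PEOPLESOFT ADMINISTRATOR'
ORDER BY A.OPRID, B.ROLENAME;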
If you have questions, please contact us at firstname.lastname@example.org
Michael A. Miller, CISSP-ISSMP, CCSP
Recently at Rittman Mead we have been asked a lot of questions surrounding Oracle’s new Data Visualization Desktop tool and how it integrates with OBIEE. Rather than referring people to the Oracle docs on DVD, I decided to share with you my experience connecting to an OBIEE 12c instance and take you through some of the things I learned through the process.
In a previous blog, I went through database connections with Data Visualization Desktop and how to create reports using data pulled directly from the database. Connecting DVD to OBIEE is largely the same process, but allows the user to pull in data at the pre-existing report level. I decided to use our 12c ChitChat demo server as the OBIEE source and created some sample reports in Answers to test out with DVD.
From the DVD Data Sources page, clicking "Create New Data Source" brings up a selection pane with the option to select “From Oracle Applications.”
Clicking this option brings up a connection screen with options to enter a connection name, URL (the location of the reports you want to pull in as a source), username, and password. This seems like a pretty straightforward process. The Oracle docs on connecting DVD to OBIEE say to navigate to the web catalog, select the folder containing the analysis you want to use as a source, and then copy and paste the URL from your browser into the URL field in DVD. However, using this method will cause the connection to fail.
To get Data Visualization Desktop to connect properly, you have to use the URL that you would normally use to log into OBIEE analytics with the proper username and password.
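For example (the host and port here are placeholder values, not taken from the original post), a catalog-style URL copied from the browser will fail, while the plain login URL succeeds:

Fails:  http://obieehost:9502/analytics/saw.dll?catalog
Works:  http://obieehost:9502/analytics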
Once connected, the web catalog folders are displayed.
From here, you can navigate to the analyses you want to use for data sources.
Selecting the analysis you want to use as your data source is the same process as selecting schemas and tables from a database source. Once the selection is made, a new screen is displayed with all of the tables and columns that were used for the analysis within OBIEE.
From here you can specify each column as an attribute or measure column and change the aggregation for your measures to something other than what was imported with the analysis.
Clicking "Add to Project" loads all the data into DVD under Data Elements and is displayed on the right hand side just like subject area contents in OBIEE.
The objective of pulling data in from existing analyses is described by Oracle as revisualization. Keep in mind that Data Visualization Desktop is meant to be a discovery tool and not so much a day-to-day report generator.
The original report was a pivot table with Revenue and Order information for geographical, product and time series dimensions. Let’s say that I just wanted to look at the revenue for all cities located in the Americas by a specific brand for the year 2012.
Dragging in the appropriate columns and adding filters took seconds and the data loaded almost instantaneously. I changed the view to horizontal bar and added a desc sort to Revenue and this was my result:
Notice how the revenue for San Francisco is much higher than that of any of the other cities. Let’s say I want to get a closer look at all the other cities without seeing the revenue data for San Francisco. I could create a new filter for City and exclude San Francisco from the list, or I could just create a filter range for Revenue. Choosing the latter gave me the option of moving a slider to change my revenue value distribution and showed me the results in real time. Pretty cool, right?
Taking one report and loading it in can open up a wide range of data discovery opportunities, but what if there are multiple reports I want to pull data from? You can do this and combine the data in DVD, as long as the two reports contain common columns with which to join them.
Going back to my OBIEE connection, there are two reports I created on the demo server that both contain customer data.
By pulling in both the Customer Information and Number of Customer Orders Per Year reports, Data Visualization Desktop creates two separate data sources, which show up under Data Elements.
Inspecting one of the data sources shows the match between the two is made on both Customer Number and Customer Name columns.
Note: It is possible to make your own column matches manually using the Add Another Match feature.
By using two data sets from two different reports, you can blend the data together to discover trends, show outliers and view the data together without touching the database or having to create new reports within OBIEE.
The ability to connect directly to OBIEE with Data Visualization Desktop and pull in data from individual analyses is a very powerful feature that makes DVD that much greater. Combining data from multiple analyses and blending them together internally creates some exciting data discovery possibilities for users with existing OBIEE implementations.
Redwood Shores, Calif.—Jul 18, 2016
Oracle is proud to provide Oracle Cloud technology and engineering resources to the White House Office of Science and Technology Policy’s Platforms Enabling Advanced Wireless Research (PAWR) program. The program is led by the National Science Foundation, the nonprofit organization US Ignite, and a consortium of industry and academic leaders collaborating to better understand the unique challenges and opportunities created by next-generation platforms for networking.
Oracle Communications will provide core network control, analytics, and network orchestration technology to researchers and help them understand the impact of subscriber behavior, enhance orchestration, and bolster security. Oracle’s contributions to this groundbreaking initiative will aid the advancement of wireless technology by:
- Discovering how applied analytics can help minimize negative impacts on orchestration, and improve overall network and service performance;
- Monitoring and measuring networks in the new environment to provide optimal performance and reliability;
- Analyzing capacity in a virtual network, making resources available (such as hardware and licenses) when needed;
- Identifying new formulas and metrics to engineer and secure cloud-based telecom networks;
- Determining what impact subscriber behaviors and events have on network/service orchestration.
Oracle’s research and development will assist in understanding how to protect against network abuse conducted over legitimate network connections, ensuring that even ‘trusted’ networks cannot abuse their access. This will include analyzing the impact from other networks through misconfigurations or malformed packets. Additionally, our contributions will help establish standards, procedures, and principles for the telecommunications cloud.
“We see an opportunity to bring the power and flexibility of the cloud to telecommunications as a means to better understand the impact of subscriber behaviors on traffic engineering, how analytics can enhance orchestration at the network and service levels, and how to bolster security of the control plane to protect against malicious behavior,” said Doug Suriano, senior vice president and general manager, Oracle Communications.
Oracle is proud to join as a founding board member of PAWR, an organization steering the research agenda and policy issues for US Ignite, responsible for the design, scope, and research goals for its members.
- White House Fact Sheet
- National Science Foundation @NSF
- The White House Office of Science and Technology Policy @whitehouseostp
- US Ignite @US_Ignite
- #advancedwireless | #PAWR
- To learn more about communications network scalability and security, please connect with us on Twitter @OracleComms and at facebook.com/oraclecommunications, or visit oracle.com/communications.
Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor
The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation.
- +1 202.904.1138
Hard to believe it’s been nearly three years since we debuted the Leap Motion-controlled robot arm. Since then, it’s been a mainstay demo for us, combining a bit of fun with the still-emergent interaction mechanism, gesture.
Anthony (@anthonyslai) remains the master of the robot arm, and since we lost access to the original video, Noel (@noelportugal) shot a new one in the Gadget Lab at HQ where the robot arm continues to entertain visitors.
Interesting note: Amazon showed a very similar demo when it debuted AWS IoT. We nerds love robots.
We continue to investigate gesture as an interaction; in addition to our work with the Leap Motion as a robot arm controller and as a feature in the Smart Office, we’ve also used the Myo armband to drive Anki race cars, a project Thalmic Labs featured on their developer blog.
Gesture remains a Wild West, with no standards and different implementations, but we think there’s something to it. And we’ll keep investigating and having some fun while we do.
Stay tuned.
The sample application (ADFBCRestApp_v8.zip) implements a custom method, calculateEmployees, exposed through ADF BC REST. This method is created in the VO implementation class and accepts two parameters - firstName and lastName. The method works correctly; I can execute it through POST by passing a predefined payload with the method name and parameters (read more in the developer guide - 22.12.5 Executing a Custom Action):
Make sure not to forget to specify the Content-Type, otherwise the POST request to ADF BC REST will fail:
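For reference, the POST follows the pattern documented for ADF BC REST custom actions; the parameter values here are purely illustrative:

Content-Type: application/vnd.oracle.adf.action+json

{
  "name": "calculateEmployees",
  "parameters": [
    { "firstName": "John" },
    { "lastName": "Smith" }
  ]
}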
Let's look at the custom method implementation and where the workaround is required. The custom method uses View Criteria to filter the VO and return an estimated row count. All fine here:
The method should be exposed through the VO client interface:
We should generate the custom method binding registry in the REST resource custom methods section (client interface). In JDEV 12.2.1 this works by ticking the Enable checkbox, but in JDEV 184.108.40.206 the same action throws an error (the custom method can't be enabled to be called through REST):
Luckily, there is a workaround. We can define the method binding manually: go to source mode in the REST resource definition dialog and add a methodAction for the custom method. You can replace the method name, ID, instance name, etc. The REST resource definition looks very similar to the page definition file we use to define bindings available for ADF Faces. The ADF BC REST interface seems to be designed on common principles with ADF bindings, at least from a definition point of view:
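As a sketch, the manually added entry follows standard ADF methodAction binding syntax; the DataControl and InstanceName values below are assumptions - replace them with the names from your own application module:

<methodAction id="calculateEmployees" RequiresUpdateModel="true"
              Action="invokeMethod" MethodName="calculateEmployees"
              IsViewObjectMethod="true" DataControl="EmployeesAMDataControl"
              InstanceName="EmployeesAMDataControl.Employees">
  <NamedData NDName="firstName" NDType="java.lang.String"/>
  <NamedData NDName="lastName" NDType="java.lang.String"/>
</methodAction>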
I’ve got to say that it’s no surprise that we’re leaving Europe. It’s just that we expected it to be on penalties, probably to Germany.
Obviously, that “we” in the last gag is England. Wales and Northern Ireland have shown no sense of decorum and continued to antagonise our European Partners by beating them at football.
Currently, the national mood seems to be that of a naughty child who stuck their fingers in the light socket to see what would happen, and were awfully surprised when it did.
In the midst of all this uncertainty, I’ve decided to seek comfort in the reassuringly familiar.
Step forward the Oracle Data Dictionary – Oracle’s implementation of the Database Catalog.
However closely you follow the Thick Database Paradigm, the Data Dictionary will serve as the Swiss Army Knife in your toolkit for ensuring Maintainability.
I’ll start off with a quick (re)introduction of the Data Dictionary and how to search it using the DICTIONARY view.
Then I’ll cover just some of the ways in which the Data Dictionary can help you to get stones out of horses’ hooves keep your application healthy.
Right then….

What’s in the Data Dictionary ?
The answer is, essentially, metadata about any objects you have in your database down to and including source code for any stored program units.
Data Dictionary views tend to come in three flavours :
- USER_ – anything owned by the currently connected user
- ALL_ – anything in USER_ plus anything the current user has access to
- DBA_ – anything in the current database
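For instance, a quick way to see the difference in scope between the first two flavours (DBA_ views need elevated privileges, so they are left out of this sketch):

select
    (select count(*) from user_tables) as user_tabs, -- owned by me
    (select count(*) from all_tables) as all_tabs -- owned by me or granted to me
from dual
/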
The Data Dictionary has quite a lot of stuff in it, as you can tell by running this query :
select count(*)
from dictionary
/
You can sift through this mountain of information by having a look at the comments available in DICTIONARY (DICT to its friends) for each of the views listed.
select comments
from dict
where table_name = 'USER_TABLES'
/

COMMENTS
--------------------------------------------------
Description of the user's own relational tables
You can see a graphical representation of these USER_ views in whatever Oracle IDE you happen to be using.
For example, in SQLDeveloper…
This graphical tree view corresponds roughly to the following Data Dictionary views :
- USER_TABLES – Description of the user’s own relational tables
- USER_VIEWS – Description of the user’s own views
- USER_EDITIONING_VIEWS – Descriptions of the user’s own Editioning Views
- USER_INDEXES – Description of the user’s own indexes
- USER_OBJECTS – Objects owned by the user (this includes functions, packages, procedures etc)
- USER_QUEUES – All queues owned by the user
- ALL_QUEUE_TABLES – All queue tables accessible to the user
- USER_TRIGGERS – Triggers having FOLLOWS or PRECEDES ordering owned by the user (includes Cross Edition Triggers)
- USER_TYPES – Description of the user’s own types
- USER_MVIEW_LOGS – All materialized view logs owned by the user
- USER_SEQUENCES – Description of the user’s own SEQUENCEs
- USER_SYNONYMS – The user’s private synonyms
- ALL_SYNONYMS – All synonyms for base objects accessible to the user and session (includes PUBLIC synonyms)
- USER_DB_LINKS – Database links owned by the user
- ALL_DB_LINKS – Database links accessible to the user
- ALL_DIRECTORIES – Description of all directories accessible to the user
- ALL_EDITIONS – Describes all editions in the database
- USER_XML_SCHEMAS – Description of XML Schemas registered by the user
- USER_SCHEDULER_JOBS – All scheduler jobs in the database
- RESOURCE_VIEW – Whilst not part of the DICTIONARY per se, you can see details of XML DB Schema in this view
- USER_RECYCLEBIN – User view of his recyclebin
- ALL_USERS – Information about all users of the database
As all of this metadata is available in views, it can be interrogated programmatically via SQL, as we’ll discover shortly. Before that though, let’s introduce…

The Brexit Schema
To add an element of topicality, the following examples will be based on this schema.
The user creation script looks like this :
grant connect, create table, create procedure, create sequence
    to brexit identified by ceul8r
/

alter user brexit default tablespace users
/

alter user brexit quota unlimited on users
/
You’ll probably want to choose your own (weak) pun-based password.
The tables in this schema are ( initially at least)…
create table countries
(
    iso_code varchar2(3),
    coun_name varchar2(100) not null,
    curr_code varchar2(3) not null,
    is_eu_flag varchar2(1)
)
/

create table currencies
(
    iso_code varchar2(3) constraint curr_pk primary key,
    curr_name varchar2(100)
)
/
For reasons which will become apparent, we’ll also include this procedure, complete with “typo” to ensure it doesn’t compile…
create or replace procedure add_currency
(
    i_iso_code currencies.iso_code%type,
    i_curr_name currencies.curr_name%type
)
as
begin
    -- Deliberate Mistake...
    brick it for brexit !
    insert into currencies( iso_code, curr_name)
    values( i_iso_code, i_curr_name);
end add_currency;
/
The examples that follow are based on the assumption that you are connected as the BREXIT user.
First up….

Spotting tables with No Primary Keys
Say that we want to establish whether a Primary Key has been defined for each table in the schema.
Specifically, we want to check permanent tables which comprise the core application tables. We’re less interested in checking on Global Temporary Tables or External Tables.
Rather than wading through the relevant DDL scripts, we can get the Data Dictionary to do the work for us :
select table_name
from user_tables
where temporary = 'N' -- exclude GTTs
and table_name not in
(
    -- exclude External Tables ...
    select table_name
    from user_external_tables
)
and table_name not in
(
    -- see if table has a Primary Key
    select table_name
    from user_constraints
    where constraint_type = 'P'
)
/

TABLE_NAME
------------------------------
COUNTRIES
It looks like someone forgot to add constraints to the countries table. I blame the shock of Brexit. Anyway, we’d better fix that…
alter table countries
    add constraint coun_pk primary key (iso_code)
/
…and add an RI constraint whilst we’re at it…
alter table countries
    add constraint coun_curr_fk foreign key (curr_code)
        references currencies( iso_code)
/
…so that I’ve got some data with which to test…

Foreign Keys with No Indexes
In OLTP applications especially, it’s often a good idea to index any columns that are subject to a Foreign Key constraint in order to improve performance.
To see if there are any FK columns in our application that may benefit from an index…
with cons_cols as
(
    select cons.table_name, cons.constraint_name,
        listagg(cols.column_name, ',') within group (order by cols.position) as columns
    from user_cons_columns cols
    inner join user_constraints cons
        on cols.constraint_name = cons.constraint_name
    where cons.constraint_type = 'R'
    group by cons.table_name, cons.constraint_name
),
ind_cols as
(
    select ind.table_name, ind.index_name,
        listagg(ind.column_name, ',') within group( order by ind.column_position) as columns
    from user_ind_columns ind
    group by ind.table_name, ind.index_name
)
select cons_cols.table_name, cons_cols.constraint_name, cons_cols.columns
from cons_cols
where cons_cols.table_name not in
(
    select ind_cols.table_name
    from ind_cols
    where ind_cols.table_name = cons_cols.table_name
    and ind_cols.columns like cons_cols.columns||'%'
)
/
Sure enough, when we run this as BREXIT we get…
TABLE_NAME                     CONSTRAINT_NAME      COLUMNS
------------------------------ -------------------- ------------------------------
COUNTRIES                      COUN_CURR_FK         CURR_CODE

Post Deployment Checks
It’s not just the Data Model that you can keep track of.
If you imagine a situation where we’ve just released the BREXIT code to an environment, we’ll want to check that everything has worked as expected. To do this, we may well recompile any PL/SQL objects in the schema to ensure that everything is valid….
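The post’s tags suggest DBMS_UTILITY.COMPILE_SCHEMA for this step; a minimal sketch (compile_all => false recompiles only objects that are currently invalid):

exec dbms_utility.compile_schema( schema => user, compile_all => false)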
…but once we’ve done this we want to make sure. So…
select object_name, object_type
from user_objects
where status = 'INVALID'
union
select constraint_name, 'CONSTRAINT'
from user_constraints
where status = 'DISABLED'
/

OBJECT_NAME                    OBJECT_TYPE
------------------------------ -------------------
ADD_CURRENCY                   PROCEDURE
Hmmm, I think we’d better fix that, but how do we find out what the error is without recompiling ? hmmm…
select line, position, text
from user_errors
where name = 'ADD_CURRENCY'
and type = 'PROCEDURE'
order by sequence
/

LINE POSITION TEXT
---- -------- --------------------------------------------------------------------------------
  10        8 PLS-00103: Encountered the symbol "IT" when expecting one of the following: := . ( @ % ;

Impact Analysis
Inevitably, at some point during the life of your application, you will need to make a change to it. This may well be a change to a table structure, or even to some reference data you previously thought was immutable.
In such circumstances, you really want to get a reasonable idea of what impact the change is going to have in terms of changes to your application code.
For example, if we need to make a change to the CURRENCIES table…
select name, type
from user_dependencies
where referenced_owner = user
and referenced_name = 'CURRENCIES'
and referenced_type = 'TABLE'
union all
select child.table_name, 'TABLE'
from user_constraints child
inner join user_constraints parent
    on child.r_constraint_name = parent.constraint_name
where child.constraint_type = 'R'
and parent.table_name = 'CURRENCIES'
/

NAME                           TYPE
------------------------------ ------------------
ADD_CURRENCY                   PROCEDURE
COUNTRIES                      TABLE
Now that we know which objects are potentially affected by this proposed change, we have the scope of our Impact Analysis, at least in terms of objects in the database.

Conclusion
As always, there’s far more to the Data Dictionary than what we’ve covered here.
Steven Feuerstein has written a more PL/SQL focused article on this topic.
That about wraps it up for now, so time for Mexit.
Filed under: Oracle, PL/SQL, SQL Tagged: Data Dictionary, dbms_utility.compile_schema, dict, dictionary, listagg, thick database paradigm, user_constraints, user_cons_columns, USER_DEPENDENCIES, user_errors, user_ind_columns, user_objects, user_tables
More parts to follow.