Feed aggregator

Oracle’s Work Life Solutions can support every industry—especially this one

Linda Fishman Hoyle - Thu, 2016-11-17 14:59

Mining is an industry that is pivotal to the world's economy. We wanted to share this recent article from Australian Mining that reports on how the mining industry's needs intersect with Oracle's HR cloud applications.

As with many industries, mining is being reshaped by external forces, which is causing challenges for human resources.

One of those challenges is the result of increased automation. HR professionals need to ensure the company retains its talent amidst “the drive toward increased productivity and innovation,” says John F Hansen, Oracle’s vice president for Human Capital Management in JAPAC.

Another challenge is the staff's mental health. The mining industry pays well, but the work can be in remote locations, time-intensive, and physically challenging. According to PwC research, conditions such as these can lead to absenteeism, reduced productivity, and compensation claims.

Help comes in the form of innovative HR strategies. Both of these issues can be addressed by the latest generation of HCM technology. Hansen points to Oracle’s Work Life Solutions, which “can be used to help employees manage the intersection of their work and personal lives.”

These applications can produce a more engaged, productive workforce. Equally important is that the "ROI for these programs in the mining industry is the highest across all industries."

Oracle Midlands : Event #18 (Cancelled)

Tim Hall - Thu, 2016-11-17 09:06
Cancelled!

Don’t forget Oracle Midlands Event #18 next week!


If you live nearby, please show your support for the event and come along. It's free, thanks to sponsorship from RedStackTech.

Cheers

Tim…


ECMAScript 6 / Nashorn / Java 9 and SQLcl

Kris Rice - Thu, 2016-11-17 08:57
This blog post is brought to you by the letter M, as in Martin. Follow @martindsouza. He asked me on Twitter if SQLcl via Nashorn could use ECMAScript 6 features yet. The answer is yes. So, for the brave who want to try out the latest and greatest ECMAScript 6 features, it can be done with SQLcl. This is a good reference for what's in version 6 that could be useful. The only caution is not

Enabling the Mobile Workforce with Cloud Content and Experience - Part 3

WebCenter Team - Thu, 2016-11-17 08:41

Author: Mark Paterson, Director, Oracle Documents Cloud Service Product Management

Continuing my series of quick tips on how to use key features of our apps to drive effective mobile collaboration (see part 1 and part 2 of the series), I'd like to quickly cover how easy it is to review videos and audio files. For a quick refresher, Oracle Documents Cloud Service is an enterprise-grade content and social collaboration cloud solution that allows you to work and drive decisions with your content and your ecosystem anytime, anywhere, and on any device.

Need to review or approve a new video on the move? Oracle Documents Cloud Service provides full support for videos right within our mobile app.

  1. Install the Oracle Documents apps on your iPhone, iPad, or Android device, log on to Oracle Documents Cloud Service, and you're ready to go, anywhere. The app is designed to be familiar: swipe to navigate, tap to open folders and files, and it guides you through what to do.

  2. Watching a video stored in Oracle Documents is easy. Launch Oracle Documents, browse to the folder where your video is stored, tap on it, and then tap Play. That's it: your video starts playing immediately. There's no need to wait for it to fully download, because everything is streamed.


If you need to play an audio file, you can follow the same steps; no third-party player is required.

Check out our latest "how-to" video to see it in action.

You can always find the Oracle Documents mobile apps in the App stores.


Short and sweet this time round, next time we’ll explore changes we’ve made that make it super simple to upload all the great assets you create on your mobile device into Oracle Documents Cloud. In the meantime, if you haven't already, simply get started on your free Oracle Documents Cloud Service trial today at https://cloud.oracle.com/documents.




Oracle CUME_DIST Function with Examples

Complete IT Professional - Thu, 2016-11-17 05:00
In this article, I’ll cover what the Oracle CUME_DIST function is and show you some examples. Purpose of the Oracle CUME_DIST Function The purpose of the CUME_DIST function is to find the cumulative distribution of a value in a list of values. The value that is returned will be between 0 and 1. The function […]
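To give a flavour, here is a minimal sketch of the analytic form, assuming the classic emp demo table:

SELECT ename,
       sal,
       -- fraction of rows with sal less than or equal to this row's sal
       CUME_DIST() OVER (ORDER BY sal) AS sal_cume_dist
FROM   emp;
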
Categories: Development

Common OBIEE Implementation Mistakes That Impact Performance

Rittman Mead Consulting - Thu, 2016-11-17 04:19

I've recently performed several assessments to help our clients diagnose and fix their OBIEE performance problems. They often share similar design or implementation issues, and I thought it would be useful to present them here as a handy reference:

Mistake 1: No Software Monitoring

The key to happy users is knowing there's a problem before they do, and even better, fixing it before they notice. Comprehensive performance metrics are vital to achieving this goal, and Usage Tracking data is the natural starting point. When enabled in OBIEE, Usage Tracking collects statistics for each query, which helps you understand usage patterns across dashboards and users, analyse performance trends, and diagnose specific issues. Whilst Usage Tracking is generally enabled and its data reviewed regularly, other software metrics are often not gathered at all, or are not available to the BI team, giving only a partial view of the picture.
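As a quick illustration, a lot of mileage comes from simple queries against the Usage Tracking data; a sketch, assuming the default S_NQ_ACCT physical table:

-- Slowest dashboards over the last week, by average response time
SELECT saw_dashboard,
       COUNT(*)                      AS query_count,
       ROUND(AVG(total_time_sec), 1) AS avg_secs,
       MAX(total_time_sec)           AS worst_secs
FROM   s_nq_acct
WHERE  start_ts >= SYSDATE - 7
GROUP  BY saw_dashboard
ORDER  BY avg_secs DESC;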

In addition to gathering and monitoring Usage Tracking data, we would strongly recommend:

  • Proper OS monitoring to capture the full breadth of OS metrics on both application and database servers, including CPU %, load average, memory, disk I/O, disk busy %, and network I/O. Store historic data for trend analysis with appropriate aggregations (so as not to blunt peaks), and near-realtime granular data for interactive performance diagnostics.

  • OBIEE Dynamic Monitoring Service (DMS) metrics collection. These metrics provide information about the performance, state, and ongoing behaviour of Oracle Fusion Middleware components.

    OBIEE Performance Analytics Dashboards

All of this can be collected and visualised with the Rittman Mead OBIEE Performance Analytics Dashboards. Have a look at these videos to see exactly how the dashboards can help, and get in touch with us to request a demo!

Mistake 2: Randomly Changing Configuration Settings

It's a common belief among online gamers (or a hope, depending on their hardware) that changing configuration settings can fix any software performance issue. Unfortunately, OBIEE is not a game, and this is not true! Whilst OBIEE does not provide a Boost Performance switch, the defaults are generally a good starting point, and unnecessary fiddling without good reason should be avoided:

  • You can introduce additional problems by mistake.

  • It makes it hard to track what's changed from a default config.

  • If a change appears to fix the problem, it could be coincidence; without evidence, you have no real understanding of the underlying problem.

We at Rittman Mead are firm believers in a practical and logical approach to performance analytics and optimisation. Eschewing the drunk man anti-method of ‘tuning’ configuration settings at random, we advocate making a clear diagnosis and baseline of performance problems before changing anything. Once a clear understanding of the situation is established, steps are taken in a controlled manner to implement and validate one change at a time.

Mistake 3: Data Transformation at Query Time

You can usually find a lot of data transformation being done in the RPD itself. This is one of the most useful capabilities of OBIEE, since re-modelling data in-flight at query time is usually much quicker to implement than changing the underlying data model and/or ETL processes.

Wrangling data this way in a small number of instances is common, but widespread use is indicative of a data model and/or ETL design that needs modifying to take the report requirements into account. It is difficult to maintain, and it results in less efficient, unnecessarily complex SQL.

  • Conditional transformation - The business is always changing, and it may require grouping data in ways that were not anticipated when the data model was designed. For example, you can create a logical column using a CASE WHEN statement to define a new country grouping that is not present in your data model.

    Conditional transformation example

  • Data cleansing - Sometimes you may need to integrate social network data into your corporate reports, even though you have limited or no control over its content. For example, you can use the TRIM function to deal with blanks that are expected in a column containing users' reviews of your products.

    Data cleansing example

  • Data conversion - The classic use case for data conversion is manipulating dates to provide a required format. For example, you can use CAST and Calendar/Date functions to derive, from a date column, the YYYY-MM format required to define the Month level's key of your Date hierarchy.

    Data conversion example

  • Opaque views - An opaque view is a physical layer table that consists of a SELECT statement. The typical use case for opaque views is turning a transactional data model into a star schema. However, opaque views mask the true data structure from OBIEE, preventing it from generating optimal data access queries, so they should be used only when no other solution is available.

We recommend doing these kinds of transformations once, at ETL time, rather than every time a particular table or column is used in a query; a sketch of the idea follows.
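For instance, the first three transformations above could be computed once during the load; a sketch, with hypothetical staging and warehouse table names:

INSERT INTO dw_orders_f (order_key, country_group, review_clean, month_key)
SELECT order_key,
       CASE WHEN country IN ('FRANCE', 'GERMANY', 'ITALY')
            THEN 'EU' ELSE 'NON-EU' END,  -- conditional transformation
       TRIM(review_text),                 -- data cleansing
       TO_CHAR(order_date, 'YYYY-MM')     -- data conversion (Month key)
FROM   stg_orders;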

Mistake 4: Underuse of Double Column Feature

Double Column is probably one of the most undervalued and underused features of OBIEE. It provides a mechanism for associating two logical columns: one column provides the display and description values, such as the description of an item; the second provides a descriptor ID or code.

The Double Column feature should be used to enable OBIEE to generate more efficient physical SQL. The principle is that, instead of queries running with string-based predicates (which may not be indexed in the database, or may simply be less efficient), the user sees the string version of an attribute whilst OBIEE uses the corresponding numerical ID column when it queries the database, generally producing better execution plans.

As an example, we start by assigning the P0 Product Number column as the descriptor ID column for the P1 Product column in the Business Model and Mapping layer:

Double column feature setup example

Let's now create a simple report in Answers containing the T05 Per Name Year and 1- Revenue columns, and apply a filter on P1 Product as well. If you select the is equal to / is in, is not equal to / is not in, or is between operators, then the filter by code values checkbox becomes available:

Filter by code values example

Let's tick it, and then see how the query filter is generated in the physical SQL:

select sum(T216.Revenue) as c1,  
     T795.PER_NAME_YEAR as c2
from  
     BISAMPLE.SAMP_PRODUCTS_D T451 /* D10 Product (Dynamic Table) */ ,
     BISAMPLE.SAMP_TIME_QTR_D T795 /* D03 Time Quarter Grain */ ,
     BISAMPLE.SAMP_REVENUE_FA2 T216 /* F21 Rev. (Aggregate 2) */ 
where  ( T216.Prod_Key = T451.Prod_Key  
         and T216.Prod_Key = 7
         and T216.Bill_Qtr_Key = T795.QTR_KEY
         and T451.Prod_Key = 7 ) 
group by T795.PER_NAME_YEAR  

The Double Column feature can also be used to define language-independent filters, as my former colleague Venkat noted in another post.

Mistake 5: Suboptimal Report Design

Certain suboptimal report designs that I've seen recently include overcrowded analyses, abuse of table / pivot prompts, and OBIEE used as a data extraction tool.

Overcrowded analyses / Overuse of the Excluded area in Views

An overcrowded analysis is an analysis with an awful lot of columns included in the Criteria tab and several different views, each with many excluded columns. These views can then be displayed all together or spread across different dashboard pages.

This pattern is quite common with clients that migrated from Hyperion Interactive Reporting, and the main motivation for it - there is only one analysis to maintain - may sound good. However, since the Oracle BI Server retrieves results for all columns specified in the Criteria tab, we recommend creating several analyses, each with a single view, rather than adopting this pattern. This way the BI Server generates more efficient queries and avoids on-the-fly data aggregation.

As an example, you can use Oracle Sample Application v607 to create and run an analysis with a Line-Bar view as follows. Note the huge number of columns in the Excluded area.

Overcrowded analysis example

This results in a simple chart with 3 bars and a line connecting 3 points:

Line-Bar view example

Let's open the obis1-query.log file and have a look at the physical SQL generated by the BI Server:

select sum(T418.Units) as c1,  
     sum(T418.Revenue) as c2,
     T762.Name as c3,
     T666.Office_Dsc as c4,
     T666.Company as c5,
     T418.Order_Status as c6,
     T451.Prod_Dsc as c7,
     T451.Brand as c8,
     T437.Employee_Key as c9,
     T42.Calendar_Date as c10,
     T42.Per_Name_Year as c11,
     T666.Office_Key as c12,
     T762.Cust_Key as c13,
     T451.Prod_Key as c14,
     T666.Company_Key as c15
from  
     BISAMPLE.SAMP_PRODUCTS_D T451 /* D10 Product (Dynamic Table) */ ,
     BISAMPLE.SAMP_TIME_DAY_D T42 /* D01 Time Day Grain */ ,
     BISAMPLE.SAMP_CUSTOMERS_D T762 /* D60 Customers */ ,
     BISAMPLE.SAMP_OFFICES_D T666 /* D30 Offices */ ,
     BISAMPLE.SAMP_EMPL_D_VH T437 /* D50 Sales Rep (Parent Child Hierarchy) */ ,
     BISAMPLE.SAMP_EMPL_PARENT_CHILD_MAP T490 /* D51 Closure Table Sales Rep Parent Child */ ,
     BISAMPLE.SAMP_REVENUE_F T418 /* F10 Billed Rev */ 
where  ( T42.Calendar_Date = T418.Bill_Day_Dt and T418.Cust_Key = T762.Cust_Key and T418.Prod_Key = T451.Prod_Key and T418.Empl_Key = T490.Member_Key and T418.Office_Key = T666.Office_Key and T437.Employee_Key = T490.Ancestor_Key )  
group by T42.Calendar_Date, T42.Per_Name_Year, T418.Order_Status, T437.Employee_Key, T451.Prod_Key, T451.Prod_Dsc, T451.Brand, T666.Company_Key, T666.Office_Dsc, T666.Company, T666.Office_Key, T762.Name, T762.Cust_Key  

The physical SQL doesn't look as simple as the resulting view! But the worst is yet to come: it takes about 7 seconds to complete, and 256k rows (1.5 GB) are retrieved from the database!

Let's remove all the excluded columns from the analysis and see what happens to the physical SQL:

select sum(T216.Units) as c1,  
     sum(T216.Revenue) as c2,
     T795.PER_NAME_YEAR as c3
from  
     BISAMPLE.SAMP_TIME_QTR_D T795 /* D03 Time Quarter Grain */ ,
     BISAMPLE.SAMP_REVENUE_FA2 T216 /* F21 Rev. (Aggregate 2) */ 
where  ( T216.Bill_Qtr_Key = T795.QTR_KEY )  
group by T795.PER_NAME_YEAR  

Definitely far better than before. Moreover, it takes about 0.2 seconds (97% less) to complete, and only 3 rows (3 KB, 99% less) are retrieved from the database.

Abuse of Table / Pivot prompts

Table / Pivot prompts provide an interactive result set that enables users to select the data they want to view. However, Table prompts don't apply any WHERE condition to the query issued by the BI Server: the full result set is retrieved first, and the data to display is then selected on the fly. For this reason we recommend using Dashboard prompts rather than Table / Pivot prompts.

As an example, you can create and run an analysis with a Pivot Table view as follows:

Pivot prompts example

This results in a pivot table with 8 rows and 25 columns:

Pivot Table view example

Again, let's open the obis1-query.log file and have a look at the physical SQL generated by the BI Server:

select sum(T418.Units) as c1,  
     sum(T418.Revenue) as c2,
     T666.Company as c3,
     T451.Prod_Dsc as c4,
     T451.Brand as c5,
     T653.Per_Name_Month as c6,
     T653.Per_Name_Year as c7,
     T666.Company_Key as c8,
     T451.Prod_Key as c9
from  
     BISAMPLE.SAMP_PRODUCTS_D T451 /* D10 Product (Dynamic Table) */ ,
     BISAMPLE.SAMP_TIME_MTH_D T653 /* D02 Time Month Grain */ ,
     BISAMPLE.SAMP_OFFICES_D T666 /* D30 Offices */ ,
     BISAMPLE.SAMP_REVENUE_F T418 /* F10 Billed Rev */ 
where  ( T418.Prod_Key = T451.Prod_Key and T418.Bill_Mth_Key = T653.Mth_Key and T418.Office_Key = T666.Office_Key )  
group by T451.Prod_Key, T451.Prod_Dsc, T451.Brand, T653.Per_Name_Year, T653.Per_Name_Month, T666.Company_Key, T666.Company  

As expected, no WHERE condition based on the values selected in the pivot prompts is applied to the query. It takes about 0.3 seconds to complete, and 2k rows (10 MB) are retrieved from the database.

Now let's replace pivot prompts with dashboard prompts and see what happens to the physical SQL:

select sum(T418.Units) as c1,  
     sum(T418.Revenue) as c2,
     T451.Prod_Dsc as c3,
     T653.Per_Name_Month as c4,
     T451.Prod_Key as c5
from  
     BISAMPLE.SAMP_PRODUCTS_D T451 /* D10 Product (Dynamic Table) */ ,
     BISAMPLE.SAMP_TIME_MTH_D T653 /* D02 Time Month Grain */ ,
     BISAMPLE.SAMP_OFFICES_D T666 /* D30 Offices */ ,
     BISAMPLE.SAMP_REVENUE_F T418 /* F10 Billed Rev */ 
where  ( T418.Prod_Key = T451.Prod_Key and T418.Bill_Mth_Key = T653.Mth_Key and T418.Office_Key = T666.Office_Key and T451.Brand = 'BizTech' and T653.Per_Name_Year = '2013' and T666.Company = 'Genmind Corp' )  
group by T451.Prod_Key, T451.Prod_Dsc, T653.Per_Name_Month  

It looks better than before, huh? The optimised query takes about 0.1 seconds (67% less) to complete, and only 96 rows (0.2 MB, 95% less) are retrieved from the database.

OBIEE as a data extraction tool

A large number of clients use OBIEE mostly as a feed for Excel-marts. Using OBIEE simply as a tool for extracting lots of data into Excel or other destinations can cause both performance and functional problems. OBIEE is designed to be used primarily as a reporting and analytics tool, with users interacting with the data within OBIEE, and, as my colleague Francesco Tisiot noted in another post, almost any transformation doable in Excel can be achieved in OBIEE, probably faster and with zero impact on local workstations.

This behaviour can indicate a lack of end-user training and/or communication between end users and BI developers. End users may not know OBIEE at all, but are used to working with Excel: at the end of the day, people just want to get their job done as painlessly as possible. We recommend talking to your end users about what they're doing with the data in their destination system, and whether the same can be achieved within OBIEE. If users really do just need the data in Excel, then you should look at how best to provide it (Oracle Doc ID 1558070.1).

Conclusion

I've shown here some of the mistakes commonly made in OBIEE implementations. They can impact performance for end users, as well as making life more difficult for those building and maintaining the reports.

If you'd like Rittman Mead to take a look at your OBIEE implementation and make sure you're getting the best out of it, please do get in touch!

Categories: BI & Warehousing

Diff BW Optimizer SQL Engine

Tom Kyte - Thu, 2016-11-17 03:06
Hi, I would like to know the difference between the Optimizer, SQL Engine, and PL/SQL Engine. I have asked this question many times but did not get an answer from you. Thank you
Categories: DBA Blogs

Oracle Database Migration from HP-UX 64bit (Big-endian) to RHEL 64bit (Little-endian)

Tom Kyte - Thu, 2016-11-17 03:06
Hi Tom, We are planning to migrate an Oracle Database from HP-UX 64-bit (big-endian) to RHEL 64-bit (little-endian). My question is: can we migrate a database from big-endian format to little-endian format using the RMAN convert command? (like converting d...
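For reference, RMAN's CONVERT DATABASE only supports same-endian moves, so a cross-endian migration like this is normally done per tablespace (transportable tablespaces plus RMAN CONVERT) or with Data Pump. A sketch of the conversion step, run on the source host with a hypothetical tablespace and staging path:

-- RMAN, on the source; the tablespace must be made READ ONLY first
CONVERT TABLESPACE users
  TO PLATFORM 'Linux x86 64-bit'
  FORMAT '/stage/%U';
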
Categories: DBA Blogs

comparing two databases, put result into new table

Tom Kyte - Thu, 2016-11-17 03:06
Hi Tom, Thank you for having this service available. I'm trying to create an application that will allow me to compare data from two tables by a common ID. I would like to either put the results (non-matches) into a new table OR remove th...
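One common shape for this, sketched with hypothetical table and column names (a full outer join keeps the non-matches from both sides):

CREATE TABLE id_mismatches AS
SELECT NVL(a.id, b.id) AS id,
       CASE WHEN a.id IS NULL THEN 'ONLY IN B'
            WHEN b.id IS NULL THEN 'ONLY IN A' END AS status
FROM   table_a a
FULL OUTER JOIN table_b b ON a.id = b.id
WHERE  a.id IS NULL OR b.id IS NULL;
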
Categories: DBA Blogs

cursor with dynamic sql

Tom Kyte - Thu, 2016-11-17 03:06
Hi, I need to write a procedure that has a cursor storing a list of table names. We need to loop through the cursor, and for each table name it should look for a code in that particular table. The final output of the procedure should be the list of ta...
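A minimal sketch of the usual pattern, with a hypothetical column name CODE (only tables that actually have that column are scanned; run with SET SERVEROUTPUT ON):

DECLARE
  l_cnt NUMBER;
BEGIN
  FOR t IN (SELECT table_name
            FROM   user_tab_columns
            WHERE  column_name = 'CODE') LOOP
    EXECUTE IMMEDIATE
      'SELECT COUNT(*) FROM ' || t.table_name || ' WHERE code = :b'
      INTO l_cnt USING 'ABC123';
    IF l_cnt > 0 THEN
      DBMS_OUTPUT.PUT_LINE(t.table_name);  -- this table contains the code
    END IF;
  END LOOP;
END;
/
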
Categories: DBA Blogs

Track users who make updates to a EBS database table from backend

Tom Kyte - Thu, 2016-11-17 03:06
Hi, We have custom triggers on a few Oracle EBS tables to track & log any insert, update & delete. The trigger captures any changes made through the front end screens. However, if a user does an insert, update or delete to a table in the back end ...
Categories: DBA Blogs

Spool is starting new line after 500 characters

Tom Kyte - Thu, 2016-11-17 03:06
I've got Spool working in that the output from a query is returning everything, but after 500 characters it starts a new line. So if I had a line that was 502 characters ending with 'bananasarecool', then it would look something along the lin...
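The usual suspects here are the SQL*Plus line-width settings; a sketch of the settings that keep long rows on one line (the file name and query are placeholders):

SET LINESIZE 32767  -- allow lines much longer than the default width
SET TRIMSPOOL ON    -- don't pad every line out to LINESIZE with blanks
SET PAGESIZE 0      -- optional: suppress headings for data extracts
SPOOL /tmp/out.txt
SELECT long_text_col FROM my_table;
SPOOL OFF
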
Categories: DBA Blogs

RMAN backup

Tom Kyte - Thu, 2016-11-17 03:06
Hi guys! I used RMAN to back up the database on Nov 13 through the COMMVAULT software, but on Nov 15 the Oracle server CPU was high. I connected to the server to check and found that the wait event was 'RMAN backup & recovery I/O', and the logon time is the ...
Categories: DBA Blogs

Update query in after insert trigger

Tom Kyte - Thu, 2016-11-17 03:06
Hello Tom, my requirement is: when a record is inserted into the ERR table, a post-event trigger should fire, updating the D field with sysdate (3 columns have to be updated in my original requirement). I've written the below code; however, after the recor...
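For a column on the row being inserted, the usual approach is to assign :NEW in a BEFORE INSERT trigger rather than issuing an UPDATE from an AFTER trigger; a sketch with hypothetical names:

CREATE OR REPLACE TRIGGER err_set_d
BEFORE INSERT ON err
FOR EACH ROW
BEGIN
  :new.d := SYSDATE;  -- assign directly; no UPDATE statement needed
END;
/
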
Categories: DBA Blogs

Last DML Operation timestamp on Production

Tom Kyte - Thu, 2016-11-17 03:06
Hi, I need to get the timestamp of the last DML operation performed on a table in Production. I tried using - SELECT SCN_TO_TIMESTAMP(MAX(ORA_ROWSCN)) FROM mytablename; SELECT MAX(SCN_TO_TIMESTAMP(ORA_ROWSCN)) FROM mytablename; None of them worked...
Categories: DBA Blogs

Oracle Enterprise Manager 12.1 Support Dates Changed

Steven Chan - Thu, 2016-11-17 02:08

Two important changes have been made to Oracle Enterprise Manager 12.1 Lifetime Support dates:

  1. Extended Support fees have been waived for the first year up to October 31, 2017
  2. Extended Support now ends on October 31, 2020 (one extra year of Extended Support)

These changes are published here:

Impact on EBS customers

These support changes apply to E-Business Suite customers using Application Management Pack released as part of Oracle Application Management Suite for Oracle E-Business Suite. 

EBS customers should consider upgrading to Enterprise Manager 13c and AMP 13.1 or higher:


Categories: APPS Blogs

Playing with SUBSTR and INSTR

Michael Dinh - Thu, 2016-11-17 01:18
hawk:(SYS@hawk):PRIMARY> r
  1  SELECT handle,
  2  SUBSTR(handle, INSTR(handle,'/',-1)+1)                extract_last_field,
  3  SUBSTR(handle, 1, (INSTR(handle,'/',-1,1)-1))         remove_last_field,
  4  SUBSTR(handle, 1, (INSTR(handle,'/',1,2))-1)          extract_first_field,
  5  SUBSTR(handle, INSTR(handle,'/',1,2), length(handle)) remove_first_field
  6* FROM v$backup_piece_details

HANDLE
--------------------------------------------------------------------------------
EXTRACT_LAST_FIELD
--------------------------------------------------------------------------------
REMOVE_LAST_FIELD
--------------------------------------------------------------------------------
EXTRACT_FIRST_FIELD
--------------------------------------------------------------------------------
REMOVE_FIRST_FIELD
--------------------------------------------------------------------------------
/oradata/backup/HAWK_3183859104_20161116_24rl3690_1_1.bsu
HAWK_3183859104_20161116_24rl3690_1_1.bsu
/oradata/backup
/oradata
/backup/HAWK_3183859104_20161116_24rl3690_1_1.bsu

/oradata/backup/HAWK_3183859104_20161116_1vrl35ev_2_1.bkp
HAWK_3183859104_20161116_1vrl35ev_2_1.bkp
/oradata/backup
/oradata
/backup/HAWK_3183859104_20161116_1vrl35ev_2_1.bkp

hawk:(SYS@hawk):PRIMARY> r
  1  SELECT name,
  2  SUBSTR(name, INSTR(name,'/',-1)+1)              extract_last_field,
  3  SUBSTR(name, 1, (INSTR(name,'/',-1,1)-1))       remove_last_field,
  4  SUBSTR(name, 1, (INSTR(name,'/',1,2))-1)        extract_first_field,
  5  SUBSTR(name, INSTR(name,'/',1,2), length(name)) remove_first_field
  6* FROM v$datafile

NAME
--------------------------------------------------------------------------------
EXTRACT_LAST_FIELD
--------------------------------------------------------------------------------
REMOVE_LAST_FIELD
--------------------------------------------------------------------------------
EXTRACT_FIRST_FIELD
--------------------------------------------------------------------------------
REMOVE_FIRST_FIELD
--------------------------------------------------------------------------------
/oradata/SGN_HAWK/datafile/o1_mf_system_d00pchld_.dbf
o1_mf_system_d00pchld_.dbf
/oradata/SGN_HAWK/datafile
/oradata
/SGN_HAWK/datafile/o1_mf_system_d00pchld_.dbf

/oradata/SGN_HAWK/datafile/o1_mf_sysaux_d00pdw4l_.dbf
o1_mf_sysaux_d00pdw4l_.dbf
/oradata/SGN_HAWK/datafile
/oradata
/SGN_HAWK/datafile/o1_mf_sysaux_d00pdw4l_.dbf

hawk:(SYS@hawk):PRIMARY>

Improving performance of top query

Bobby Durrett's DBA Blog - Wed, 2016-11-16 16:56

I’m on call this week. Here are the steps that I took to speed up a query today.

First I got an AWR report and found the top query. Also, someone from support told me to look at November 11, before the latest release, and I found a similar top query there.

I got a plan for both the new and old top queries – they both use the same plan which makes me think that the two queries are similar.

I looked at the top segments on the AWR report and found a particular table at the top of the logical reads. An index of that table was like number 5 on that list.

I looked at the columns of the table’s index to see how many distinct values there were. None of the three columns had more than 300 distinct values so they were not very selective. I noticed that there was a unique index on the table and the first column of that index had millions of distinct values.
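A quick way to check this is to look at the optimizer statistics for the indexed columns; a sketch, with placeholder owner and table names:

SELECT ic.index_name, ic.column_name, cs.num_distinct
FROM   dba_ind_columns ic
JOIN   dba_tab_col_statistics cs
       ON  cs.owner       = ic.table_owner
       AND cs.table_name  = ic.table_name
       AND cs.column_name = ic.column_name
WHERE  ic.table_owner = 'MYAPP'
AND    ic.table_name  = 'BIG_TABLE'
ORDER  BY ic.index_name, ic.column_position;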

I extracted some sample bind variable values for the query and found that the second bind variable was null or something like that. But the index we were using included this second column.

I also looked at the join conditions and found that the first column of the unique index was among them. (The query had about 20 joins.)

Then I extracted the query text and replaced the bind variables with literals to see how it would run. It used the unique index. I used hints to force the original index and compared that to running with the unique index. It ran about 30 times faster with the unique index. I ran each version a few times to make sure it was all cached.
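The comparison looked something like this (table, column, and index names here are made up; the real query had many more joins):

-- force the original, less selective index
SELECT /*+ INDEX(t ORIG_IX) */ COUNT(*)
FROM   myapp.big_table t
WHERE  t.doc_id = 1234567 AND t.status_cd = 'OPEN';

-- force the unique index instead (about 30 times faster here)
SELECT /*+ INDEX(t BIG_TABLE_UK) */ COUNT(*)
FROM   myapp.big_table t
WHERE  t.doc_id = 1234567 AND t.status_cd = 'OPEN';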

Then I tried to use SQLT's coe_xfr_sql_profile.sql to force the plan that used the unique index, but got an error. I had to download the latest version of SQLT to get it to work.

Now, on average, the query seems to run about 1000 times faster.

It is a delivered vendor package, so it was nice to find the better plan and go behind the scenes to fix it. But if another release comes out and changes this SQL to a new sql_id, we will have to create a new profile. It's not perfect, but it's a good quick fix while I'm on call.

Bobby

Categories: DBA Blogs

Building Classrooms in the Cloud

Steve Karam - Wed, 2016-11-16 16:22

Let’s face it: education without interaction is about as effective as shouting origami instructions at a lumberjack who is cutting down trees. Sure, your informative lessons will come in handy when the product of their work finally becomes paper, but it will be long forgotten and ultimately worthless by then. The only way a student is going to learn is if they can put their hands on what you’re teaching, walk the journey with you, and attach a positive interactive experience to the lesson.

A little over a year ago I was trying to solve this interaction problem. As Director of the Customer Education team at Delphix, I wanted to provide an interactive experience during classroom training that:

  • gives each student their own full lab stack
  • makes labs available worldwide
  • is easy to create, manage, and tear down
  • can be built upon over time
  • is intuitive for learners to access and perform their labs
  • is cost effective

I couldn't find a product that did all of these things in the way I needed, and so I decided to build my own. It started as a Python command-line package and evolved into a web application with a huge suite of features. In this post, I'd like to provide an introduction to the LabAlchemy application, its overall features, and how it is used. I'll go over the tools used to build it in detail in later posts.

Introducing LabAlchemy – Classroom Management in the Amazon Web Services (AWS) Cloud

In order to fulfill bullets #1-4 above, it was clear the platform would need to be built in the cloud. Gone are the days of rooms full of servers that students VNC into, or of using your own laptop as your test bed. These approaches worked, but they're not intuitive and are difficult for everyone to work with. It is absolutely critical that your solution be intuitive for the learner, so they don't fall down a rabbit hole just trying to get into the labs.

And so, I decided to build the platform on top of Amazon Web Services, a rugged and incredibly diverse set of services for building the equivalent of your own datacenters in the cloud. LabAlchemy itself is composed of:

  • The LabAlchemy central server that hosts the command line interface (written in python), configuration files (in YAML), web application (written in node.js), metadata repository (in mongoDB), and classroom routing (nginx, more on this later)
  • Amazon Machine Images (AMIs) for our various classroom components like Delphix Engines, source and target servers (Linux and Windows), and the ever important “jumpbox” (more on this later too)
  • An automated set of Virtual Private Clouds (VPC), subnets, security groups, EC2 instances, EBS volumes, and copious amounts of tags to keep everything tidy and well orchestrated.

In summary, what LabAlchemy gives us is this: classroom labs on demand, at the push of a button. Each lab can have a number of students; each student gets their very own lab server and a complete Delphix stack for performing their labs. Labs are spun up in the Amazon cloud, segregated into their own VPCs for easy network management, and can be stopped, started, and terminated at will.

LabAlchemy in Action

Creating labs can be done using the Command Line Interface (CLI) or web application. Let’s look at the web app. It’s more fun.

Choosing a Classroom Type

There are a few things going on behind the scenes here, most notably logging in, which is done with SSO for Delphix employees. Once a user is logged in and given permissions by my team, they are able to spin up lab environments. The first step is to choose a classroom type.

These classroom types are defined in a YAML file that bundles different AMIs into pre-created classrooms. All of the details about each classroom type you see on this screen are baked into the YAML file. As you can see here, we have classrooms with different versions of Delphix (laid out as tabs across the top) and different configurations like Oracle, MSSQL Server, Sybase, etc.

Class Details

Once you've picked a classroom type, you have to enter details about the class, including the classroom name, number of students, and cost center (for internal accounting). Once the details are entered and confirmed, the job can be submitted.

Class Creation

Once the job is submitted, LabAlchemy lets you quickly view the job log. Here is a sample job log showing a classroom being spun up for 5 students in the "alchemy" classroom. As you can see, LabAlchemy automates:

  • Configuration of a VPC for the classroom
  • Subnet setup
  • Internet Gateway setup
  • Servers for each student based on the AMI specifications

The log is updated in real time using WebSockets; we'll cover that in a future post. Once the log is complete, LabAlchemy waits for the instances to come online, executes any final steps, and deems the classroom setup finished.

Viewing/Managing the Classroom

Once a classroom has been created, you can manage it from the "Current Classrooms" page. I've set it up to show only my classrooms (3 of the 16 total right now). Each classroom is displayed with a small status box to the right showing whether it is started, stopped, creating, etc. The "alchemy" classroom details are displayed here. There are also a number of options for the administrator:

  • View – Snapshots of each student’s lab server are taken once a minute so instructors can monitor usage and progress
  • Direct Jumpbox Links – Easy access by IP address to student desktops (more in a moment)
  • Direct Delphix Links – Easy access by IP address to Delphix Engines in each student lab environment to use if the student desktop is not sufficient
  • Start/Stop/Terminate – Controls to stop, start, or terminate the servers in AWS, providing cost savings when labs are stopped and easy self-service control.

Accessing Labs

Now we can finally get to the fun part: accessing the labs! Again, it's very important to provide a clean and intuitive interface for your learners. To that end, I designed the idea of a student lab desktop, or jumpbox. This server is a Linux system running Ubuntu and Xfce, with a theme that looks more or less like Windows. It includes Chrome, a terminal, PuTTY, SQL Developer, the Remmina RDP client, and a number of other tools to help students complete their work.

The best part, though, is how they're accessed. On the backend, LabAlchemy configures an nginx reverse proxy, which forwards each connection based on a URL scheme. In this case, the scheme is http://classname.labalchemyhost.com/studentnum, where "classname" is the classroom name given on creation, "studentnum" is the student's number (assigned after creation), and "labalchemyhost.com" is the [redacted] hostname for LabAlchemy itself. You simply go to that URL in your browser, enter the username and password provided by the instructor, and the whole lab server is available to you via HTML5 in your browser. No Flash, no plugins, just easy VNC over HTML5. For example, you can see the desktop for Student #3 in this section.

I didn't invent this capability. I'm using an amazing open-source software package called Guacamole, which allows for VNC, RDP, or SSH over HTML5 with a wonderful set of additional features. In LabAlchemy, I orchestrated the configuration of Guacamole for all jumpboxes and provide access to it through the nginx reverse proxy using WebSockets. What you see is the result: a fully functional desktop in your web browser.

Neat! What Else?

As you can see from the previous section, the goal was to make these labs easy to access. No specific software requirements (like a VNC client), no complex, changing IP addresses to jot down. Just a browser and a standardized URL. I also took that ease-of-use a bit further with some additional features:

  • Guacamole allows connection sharing. This means that more than one person can be on a desktop at a time. In a classroom setting, that allows students to work on labs while other students or an instructor help out if needed.
  • All servers assigned to a student (Delphix Engine, Linux servers, etc.) are accessed using an easy-to-follow IP scheme: 10.0.studentnumber.suffix. "Studentnumber" is the number the student was given prior to class; the suffix depends on the system: Delphix Engines are at .10, Linux source servers at .20, Linux targets at .30 and .40, and so on. This way, students get a consistent experience from their labs every time.
  • Lab Guides! This is for Delphix Education, remember? By deploying Lab Guides to these systems, students are able to work through a number of activities right from their jumpbox.
  • Care packages, just like in The Hunger Games. LabAlchemy was designed to bring up pre-created labs from systems saved in Amazon as AMIs. To make modifications, add extra features, and so on, I designed a system of private/public keys and deployable care packages, which allow for custom instructions, files, etc. following classroom launch.
  • Mobile! Since the LabAlchemy app was created with a mobile-friendly graphics library (Twitter Bootstrap 3), and the jumpboxes are displayed with HTML5, you can spin up or tear down classrooms and take labs with an iPad or other tablet. You can do it on a phone too, but man, that's tiny.

That's it for today! We've been using this system internally for a little over a year to provide training and to do internal testing, development, and all manner of demos and learning workshops. At its peak we've had over 550 servers up at a single time. The best part: the cost of running labs averages about $0.75 per student per hour. The beauty of the burstable cloud.

Join me next time for more of a deep dive on the technology behind LabAlchemy: python, node.js, guacamole, and more.


Import APEX apps now easier with SQLcl

Kris Rice - Wed, 2016-11-16 14:28
Hopefully by now everyone knows that SQLcl can run JavaScript, and that the same JavaScript can be used to make custom commands extending what SQLcl offers out of the box. I also hope everyone knows we are posting examples of this to GitHub as a base for anyone to learn from and try out. Just in case, here's the link to the GitHub location: https://github.com/oracle/oracle-db-tools/tree
