Feed aggregator

Monitoring Oracle Database with Zabbix

Gerger Consulting - Mon, 2016-03-14 08:14

Attend our free webinar and learn how you can use Zabbix, the open source monitoring solution, to monitor your Oracle Database instances. The webinar is presented by Oracle ACE and Certified Master Ronald Rood.


About the Webinar:

Enterprise IT is moving to the Cloud. With tens, hundreds, even thousands of servers in the Cloud, monitoring the uptime, performance and quality of the Cloud infrastructure becomes a challenge that traditional monitoring tools struggle to solve. Enter Zabbix. Zabbix is a low-footprint, low-impact, open source monitoring tool that provides various notification types and integrates easily with your ticketing system. During the webinar, we'll cover the following topics:

  • Installation and configuration of Zabbix in the Cloud
  • Monitoring Oracle databases using Zabbix
  • How to use Zabbix templates to increase the quality and efficiency of your monitoring setup
  • How to set up Zabbix for large and remote networks
  • How to trigger events in Zabbix
  • Graphing with Zabbix
Categories: Development

    ORDS and PL/SQL

    Kris Rice - Mon, 2016-03-14 07:56
    Seems I've never posted about PL/SQL-based REST endpoints other than using the OWA toolkit.  Doing the htp.p calls manually gives you control over every aspect of the results; however, there is an easier way. With PL/SQL-based source types, the ins and outs can be used directly without any additional programming.  Here's a simple example of an anonymous block doing about as little as possible but…
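
    Assuming the schema is already REST-enabled (via ORDS.ENABLE_SCHEMA), a minimal sketch of such a PL/SQL-sourced handler might look like the block below. The module, path and bind names are illustrative only, not taken from the original post; the idea is that the URI bind comes in and the OUT bind goes back out without any htp.p calls.

    begin
      ords.define_service(
        p_module_name => 'demo',                -- illustrative names throughout
        p_base_path   => 'demo/',
        p_pattern     => 'hello/:name',
        p_method      => 'GET',
        p_source_type => ords.source_type_plsql,
        p_source      => q'[
          begin
            :greeting := 'Hello ' || :name;     -- :name is bound from the URI
          end;
        ]');

      -- expose the OUT bind as part of the JSON response
      ords.define_parameter(
        p_module_name        => 'demo',
        p_pattern            => 'hello/:name',
        p_method             => 'GET',
        p_name               => 'greeting',
        p_bind_variable_name => 'greeting',
        p_source_type        => 'RESPONSE',
        p_param_type         => 'STRING',
        p_access_method      => 'OUT');

      commit;
    end;
    /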

    Oracle Mobile Cloud Service Update (v1.2): New Features and Enhancements

    Oracle Mobile Cloud Service (MCS) provides the services you need to develop a comprehensive strategy for mobile app development and delivery. It provides everything you need to establish an...

    Categories: DBA Blogs

    KeePass 2.32

    Tim Hall - Mon, 2016-03-14 06:33

    KeePass 2.32 has been released. You can download it from here.

    You can read about how I use KeePass and KeePassX2 on my Mac, Windows and Android devices here.

    Cheers

    Tim…


    PeopleSoft on the Oracle Cloud – what does it mean?

    Duncan Davies - Mon, 2016-03-14 06:00

    There have been a few announcements over the last couple of weeks about the Oracle Public Cloud. But what does it actually mean for the PeopleSoft community?

    What is Oracle Public Cloud?

    The Oracle Public Cloud is Oracle’s competitor to the Infrastructure as a Service (IaaS) providers that have swiftly risen to create a whole industry that didn’t exist 10 years ago. Everyone automatically thinks of Amazon because it is (by far) the market leader; however, Microsoft Azure, Google Compute and Rackspace are also players in the market.

    As PeopleSoft adopts more SaaS-like features (new UI, incremental updates etc) companies have started to move their infrastructure from their own data-centres to the cloud. For many companies this makes good business sense; however, rather than have customers go to a 3rd-party provider, Oracle would prefer to provide the cloud service itself. Obviously this is better for Oracle, but the customer benefits too (retaining a single vendor, and Oracle can potentially optimise their applications for their own cloud better than they can for cloud infrastructure belonging to other vendors). There may also be cost savings for the customer, although I haven’t looked at pricing yet.

    Doesn’t Oracle already do Hosting?

    Yes, Oracle has long had a service that will host infrastructure on your behalf – Oracle On Demand. This is more of an older-style ASP (Application Service Provider). You’re more likely to be on physical hardware without much in the way of flexibility/scalability and tied into a long-term hosting contract, so the Oracle Public Cloud is a major step forwards in a number of ways.

    How will Oracle Public Cloud be better?

    I attended a couple of workshops on this last week and it looks very promising. It has all the attributes required for it to be properly classed as ‘Cloud’:

    • subscription pricing,
    • elasticity of resources (so you can scale instances according to demand),
    • resilience of data centres (so, if you’re based in the UK you might be looking at the Slough data centre, however there are two ‘availability zones’ within Slough so if one gets hit by an outage you’ll still be able to connect to the other one)

    Interestingly, it also includes several ‘Database as a Service’ offerings, each providing an increasing level of performance. With this model you don’t need to worry about the virtual machine, operating system etc that your database runs on; you receive access to a database and leave the maintenance to others. You would still need to have your other tiers on the IaaS offerings.

    This opens up the possibility of multiple tiers of Cloud service:

    1. Just the Infrastructure (client does all the database and application admin)
    2. DBaaS (client has other tiers on IaaS, but does not do DB admin)
    3. Full Cloud solution (uses Oracle Cloud and a partner to do all administration)

    How can I best take advantage?

    The best time to move is probably at the same time as an upgrade. Upgrades normally come with a change in some of the hardware (due to the supported platforms changing) so moving to the cloud allows the hardware to change without any up-front costs.

    PeopleSoft 9.2 and the more recent PeopleTools versions have a lot of features that were built for the Cloud, so by running it on-premises you’re not realising the full capabilities of your investment.

    We’d recommend you try using the Cloud for your Dev and Test instances first, before leaping in with Production at a later date. Oracle have tools to help you migrate on-premises instances to their Cloud. (At this point – Mar 2016 – we have not tested these tools.)

    What will the challenges be?

    The first challenge is “how do I try it?”. This is pretty straightforward, in that you can either get a partner to demonstrate it to you, or get yourself an Oracle Public Cloud account and then provision a PeopleSoft instance using one of the PUM images as a demo. This would work fine to look at new functionality, or as a conference room pilot.

    One of the biggest challenges is likely to be security – not the security of Oracle’s cloud, but securing your PeopleSoft instances which previously might have been only available within your corporate LAN. If you need assistance with this speak to a partner with experience using Oracle Public Cloud.


    Oracle Midlands : Event #14

    Tim Hall - Mon, 2016-03-14 05:33

    Tomorrow is Oracle Midlands Event #14.


    Please show your support and come along. It’s free thanks to sponsorship by RedStackTech.

    Cheers

    Tim…


    ASO Slice Clears – How Many Members?

    Rittman Mead Consulting - Mon, 2016-03-14 05:00

    Essbase developers have had the ability to (comparatively) easily clear portions of our ASO cubes since version 11.1.1, getting away from fiddly methods involving manually contra-ing existing data via reports and rules files, making incremental loads substantially easier.

    Along with the official documentation in the TechRef and DBAG, there are a number of excellent posts already out there that explain this process and how to effect “slice clears” in detail (here and here are just two I’ve come across that I think are clear and helpful). However, I had a requirement recently where the incremental load was a bit more complex than this. I am sure people must have fulfilled similar requirements in the same or a very similar way, but I could not find any documentation or articles relating to it, so I thought it might be worth recording.

    For the most part, the requirements I’ve had in this area have been relatively straightforward—(mostly) financial systems where the volatile/incremental slice is typically a month’s worth (or quarter’s worth) of data. The load script will follow this sort of sequence:

    • [prepare source data, if required]
    • Perform a logical clear
    • Load data to buffer(s)
    • Load buffer(s) to new database slice(s)
    • [Merge slices]

    The last stage is run here if processing time allows (this operation precludes access to the cube), or in a separate routine “out of hours” if not.

    The “logical clear” element of the script will comprise a line like (note: the lack of a “clear mode” argument means a logical clear; only a physical clear needs to be specified explicitly):

    alter database 'Appname'.'DBName' clear data in region '{[Jan16]}'

    or more probably

    alter database 'Appname'.'DBName' clear data in region '{[&CurrMonth]}'

    i.e., using a variable to get away from actually hard coding the member values to clear. For separate year/period dimensions, the slice would need to be referenced with a CrossJoin:

    alter database 'Appname'.'DBName' clear data in region 'Crossjoin({[Jan]},{[FY16]})'

    alter database '${Appname}'.'${DBName}' clear data in region 'Crossjoin({[&CurrMonth]},{[&CurrYear]})'

    which would, of course, fully nullify all data in that slice prior to the load. Most load scripts will already be formatted so that variables would be used to represent the current period that will potentially be used to scope the source data (or in a BSO context, provide a FIX for post-load calculations), so using the same to control the clear is an easy addition.

    Taking this forward a step, I’ve had other systems whereby the load could comprise any number of (monthly) periods from the current year. A little bit more fiddly, but achievable: as part of the prepare source data stage above, it is relatively straightforward to run a select distinct period query on the source data, spool the results to a file, and then use this file to construct that portion of the clear command (or, for a relatively small number, prepare a sequence of clear commands).
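
    As a sketch of that “select distinct” step (the table and column names here are assumptions for illustration, not from the original system), the spooled output can then be spliced into the clear command by the calling script:

    -- Hypothetical staging table/column; list the distinct periods present in
    -- this load so a wrapper script can build the MaxL clear statement from them.
    set heading off feedback off pages 0 trimspool on
    spool periods_to_clear.lst
    select distinct '[' || period_name || ']'
    from   stg_essbase_load
    order  by 1;
    spool off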

    The requirement I had recently falls into the latter category in that the volatile dimension (where “Period” would be the volatile dimension in the examples above) was a “product” dimension of sorts, and contained a lot of changed values each load. Several thousand, in fact. Far too many to loop around and build a single command, and far too many to run as individual commands—whilst on test, the “clears” themselves ran satisfyingly quickly, it obviously generated an undesirably large number of slices.

    So the problem was this: how to identify and clear data associated with several thousand members of a volatile dimension, the values of which could change totally from load to load.

    In short, the answer I arrived at is with a UDA.

    The TechRef does not explicitly say so or give examples, but because the Uda function can be used within a CrossJoin reference, it can be used to effect a clear: assume the Product dimension had a UDA of CLEAR against certain members…

    alter database 'Appname'.'DBName' clear data in region 'CrossJoin({Uda([Product], "CLEAR")})'

    …would then clear all data for all of those members. If data for, say, just the ACTUAL scenario is to be cleared, this can be added to the CrossJoin:

    alter database 'Appname'.'DBName' clear data in region 'CrossJoin({Uda([Product], "CLEAR")}, {[ACTUAL]})'

    But we first need to set this UDA in order to take advantage of it. In the load script steps above, the first step is prepare source data, if required. At this point, a SQL*Plus call to a new procedure was added; the procedure:

    1. examines the source load table for distinct occurrences of the “volatile” dimension
    2. populates a table (after initially truncating it) with a list of these members (and parents), and a third column containing the text “CLEAR”:

    [screenshot: the staging table populated with member, parent and “CLEAR” columns]
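
    A rough sketch of such a procedure is shown below. All table and column names are assumptions for illustration only, not taken from the original system.

    create or replace procedure set_clear_uda_list as
    begin
       -- start from an empty list each load
       execute immediate 'truncate table uda_clear_stage';

       -- one row per distinct member appearing in this load, tagged "CLEAR";
       -- the fourth, empty column (UDACLEAR) is used later to blank the UDA out
       insert into uda_clear_stage (parent_member, member_name, uda_value, uda_clear)
       select distinct s.product_parent, s.product, 'CLEAR', null
       from   stg_essbase_load s;

       commit;
    end set_clear_uda_list;
    /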

    A “rules” file then needs to be built to load the attribute. Because the outline has already been maintained, this is simply a case of loading the UDA itself:

    [screenshot: the rules file mapping the CLEAR column to the UDA field]

    In the “Essbase Client” portion of the load script, prior to running the “clear” command, the temporary UDA table needs to be loaded using the rules file to populate the UDA for those members of the volatile dimension to be cleared:

    import database 'AppName'.'DBName' dimensions connect as 'SQLUsername' identified by 'SQLPassword' using server rules_file 'PrSetUDA' on error write to 'LogPath/ASOCurrDataLoad_SetAttr.err';

    [screenshot: output of the dimension build that sets the UDA]

     

    With the relevant slices cleared, the load can proceed as normal.

    After the actual data load has run, the UDA settings need to be cleared. Note that the prepared table above also contains an empty column, UDACLEAR. A second rules file, PrClrUDA, was prepared that loads this (4th) column as the UDA value—loading a blank value to a UDA has the same effect as clearing it.

    The broad steps of the load script therefore become these:

    • [prepare source data, if required]
    • ascertain members of volatile dimension to clear from load source
    • update table containing current load members / CLEAR attribute
    • Load CLEAR attribute table
    • Perform a logical clear
    • Load data to buffers
    • Load buffer(s) to new database slice(s)
    • [Merge slices]
    • Remove CLEAR attributes

    So not without limitations—if the data was volatile over two dimensions (e.g., Product A for Period 1, Product B for Period 2, etc.) the approach would not work (at least, not exactly as described, although in this instance you could possibly iterate around the smaller Period dimension)—but overall, I think it’s a reasonable and flexible solution.

    Clear / Load Order

    While not strictly part of this solution, another little wrinkle to bear in mind here is the resource taken up by the logical clear. When initializing the buffer prior to loading data into it, you have the ability to determine how much of the total available resource is used for that particular buffer—from a total of 1.0, you can allocate (e.g.) 0.25 to each of 4 buffers that can then be used for a parallel load operation, each loaded buffer subsequently writing to a new database slice. Importing a loaded buffer to the database then clears the “share” of the utilization afforded to that buffer.

    Although not a “buffer initialization” activity per se, a (slice-generating) logical clear seems to occupy all of this resource—if you have any uncommitted buffers created, even with the lowest possible resource utilization of 0.01 assigned, the logical clear will fail:

    [screenshot: error returned when the logical clear is attempted while an uncommitted load buffer exists]

    The Essbase Technical Reference states at “Loading Data Using Buffers“:

    While the data load buffer exists in memory, you cannot build aggregations or merge slices, as these operations are resource-intensive.

    It could perhaps be argued that, as we are creating a “clear slice” rather than merging slices (or building an aggregation), the logical clear falls outside of this definition, but a similar restriction certainly appears to apply here too.

    This is significant as, arguably, the ideal incremental load would be along the lines of

    • Initialize buffer(s)
    • Load buffer(s) with data
    • Effect partial logical clear (to new database slice)
    • Load buffers to new database slices
    • Merge slices into database

    This would both minimize the time that the cube is inaccessible (during the merge) and avoid presenting the cube with zeroes in the current load area. However, as noted above, this does not seem to be possible—there does not seem to be a way to change the resource usage (RNUM) of the “clear”, meaning that this sequence has to be followed:

    • Effect partial logical clear (to new database slice)
    • Initialize buffer(s)
    • Load buffer(s) with data
    • Load buffers to new database slices
    • Merge slices into database

    I.e., the ‘clear’ has to be fully effected before the initialization of the buffers. This works as you would expect, but there is a brief period—after the completion of the “clear” but before the load buffer(s) have been committed to new slices—where the cube is accessible and the load slice will show as “0” in the cube.

    The post ASO Slice Clears – How Many Members? appeared first on Rittman Mead Consulting.

    Categories: BI & Warehousing

    MORE Content to Ready You for Oracle Cloud Applications R11

    Linda Fishman Hoyle - Sun, 2016-03-13 20:39

    A Guest Post by Senior Director Louvaine Thomson (pictured left), Product Management, Oracle Cloud Applications

    The previous announcement of Release 11 preview material included:

    • Spotlight Videos: Hosted by senior development staff, these webcast-delivered presentations highlight top-level messages and product themes, and are reinforced with a product demo.
    • Release Content Documents (RCDs): This content includes a summary-level description of each new feature and product.

    We are now pleased to announce the next and final wave of readiness content. Specifically, the following content types are now available on the Release 11 Readiness page.

    • What's New: Learn about what's new in the upcoming release by reviewing expanded discussions of each new feature and product, including capability overviews, business benefits, setup considerations, usage tips, and more.

    • Release Training: Created by product management, these self-paced, interactive training sessions are deep dives into key new enhancements and products. Also referred to as Transfers of Information (TOIs).

    • Product Documentation: Oracle's online documentation includes detailed product guides and training tutorials to ensure your successful implementation and use of the Oracle Applications Cloud.


    Access is simple: From the Cloud Site: Click on Support > Release Readiness


    Rename all exported files to their original names after exporting from Oracle database using Oracle SQL Developer’s Shopping Cart

    Ittichai Chammavanijakul - Sun, 2016-03-13 15:08

    If you’re searching for “export Oracle BLOB”, the article by Jeff Smith titled “Exporting Multiple BLOBs with Oracle SQL Developer” is usually at the top of the search results. SQL Developer’s Shopping Cart feature lets you export BLOBs out of the database without using scripts. I don’t want to go into detail as Jeff already explained well in his post what it is and how to use it. One main issue with this approach is that sometimes you want the actual file names instead of the exported names. This can be overcome easily using a post-run script. I wrote this simple script in Python as it suits name manipulation well. (I’m not a Python expert, but it is one of the programming languages that is very easy to learn.)

    The script simply reads the FND_LOBS_DATA_TABLE.ldr file, which contains the original filename and the new exported filename (in the format FND_LOBS_DATA_TABLExxxxx) for each exported file.

    # Sample data
     1889399|"CF.xlsx"|"application/octet-stream"|FND_LOBS_DATA_TABLE694b44cc-0150-1000-800d-0a03f42223fd.ldr|2014-05-20 12:11:41||"FNDATTCH"||"US"|"WE8MSWIN1252"|"binary"|{EOL} 1889403|"PriceList_quotation (20 May 2014) cust.xls"|"application/vnd.ms-excel"|FND_LOBS_DATA_TABLE694b4587-0150-1000-800e-0a03f42223fd.ldr|2014-05-20 12:18:02||"FNDATTCH"||"US"|"WE8MSWIN1252"|"binary"|{EOL} 1889807|"MS GROUP NORTH AMERICA INC1.pdf"|"application/pdf"|FND_LOBS_DATA_TABLE694b4613-0150-1000-800f-0a03f42223fd.ldr|||||"US"|"AL32UTF8"|"binary"|{EOL}

    # 1st = File ID (Media ID)
    # 2nd = Original File Name
    # 4th = Exported File Name
    # The remaining information is not relevant.

    The script splits the information, which is stored in a single line, on the string {EOL} into multiple lines. It then splits each line into columns by position. The information we’re interested in is in the 1st, 2nd and 4th positions. Finally, it calls the operating system to rename each file.

    The content of the script rename.py as follows:

    
    
    from sys import argv
    import string
    import shutil
    import os
    # Script to rename exported BLOB files from Oracle SQL Developer tool
    #
    # Pre-requisite: Python 3.x https://www.python.org/downloads/
    #
    # Execution:
    # (1) Copy the script to the folder containing mapping file - "FND_LOBS_DATA_TABLE.ldr" and all exported files.
    # (2) Execute the script as follows
    #      C:\> cd deploy
    #      C:\> rename.py FND_LOBS_DATA_TABLE.ldr
    
    # Take parameters
    script, filename = argv
    # Open file in read-only mode
    file = open(filename, 'r', encoding="utf8")
    
    
    # Sample data - everything is stored in one line.
    # 1889399|"EPR - CF.xlsx"|"application/octet-stream"|FND_LOBS_DATA_TABLE694b44cc-0150-1000-800d-0a03f42223fd.ldr|2014-05-20 12:11:41||"FNDATTCH"||"US"|"WE8MSWIN1252"|"binary"|{EOL} 1889403|"PriceList_quotation_murata (20 May 2014) cust.xls"|"application/vnd.ms-excel"|FND_LOBS_DATA_TABLE694b4587-0150-1000-800e-0a03f42223fd.ldr|2014-05-20 12:18:02||"FNDATTCH"||"US"|"WE8MSWIN1252"|"binary"|{EOL} 1889807|"MGS GROUP NORTH AMERICA INC1.pdf"|"application/pdf"|FND_LOBS_DATA_TABLE694b4613-0150-1000-800f-0a03f42223fd.ldr|||||"US"|"AL32UTF8"|"binary"|{EOL}
    # 1st = File ID (Media ID)
    # 2nd = Actual/Original File Name
    # 3rd = File Type
    # 4th = Exported File Name
    # The remaining = Not relevant
    
    
    # First, split the whole file content on the string {EOL}
    splitted_line = file.read().split('{EOL}')
    
    
    # For each split line, split into fields separated by |
    for s in splitted_line:
        # Split by |
        splitted_word = s.split('|')
    
        # If reaching the last line, which contains only [''], exit the loop.
        if len(splitted_word) == 1:
            break
    
        # The original file name is in the 2nd field (list position #1).
        # Strip out double quotes and leading & trailing spaces, if any.
        orig_name = splitted_word[1].strip('"').strip()
    
        # The exported file name is in the 4th field (list position #3).
        exported_name = splitted_word[3].strip()  # Strip out leading & trailing spaces, if any
    
        # Prefix each file with its unique FILE_ID to avoid filename collisions
        # if two or more files have the same name.
        # Also strip out leading & trailing spaces, if any.
        file_id = splitted_word[0].strip()
    
        # Rename the file.
        # Adjust the new file name according to your needs.
        os.rename(exported_name, file_id + '_' + orig_name)
    
    

    After unzipping the deploy.zip, which is the default exported file from SQL Developer, copy the rename.py into this unzipped folder.

    C:\> cd deploy
    C:\> dir
    02/23/2016 07:57 PM 2,347 rename.py
    02/23/2016 07:57 PM 34,553 export.sql
    02/23/2016 07:52 PM 1,817 FND_LOBS.sql
    02/23/2016 07:57 PM 276 FND_LOBS_CTX.sql
    02/23/2016 07:57 PM 614 FND_LOBS_DATA_TABLE.ctl
    02/23/2016 07:52 PM 88,193 FND_LOBS_DATA_TABLE.ldr
    02/23/2016 07:57 PM 78,178 FND_LOBS_DATA_TABLE10fa4165-0153-1000-8001-0a2a783f1605.ldr
    02/23/2016 07:57 PM 27,498 FND_LOBS_DATA_TABLE10fa4339-0153-1000-8002-0a2a783f1605.ldr
    02/23/2016 07:57 PM 17,363 FND_LOBS_DATA_TABLE10fa43c5-0153-1000-8003-0a2a783f1605.ldr
    02/23/2016 07:57 PM 173,568 FND_LOBS_DATA_TABLE10ff189d-0153-1000-8219-0a2a783f1605.ldr
    :
    :
    
    
    C:\> rename.py FND_LOBS_DATA_TABLE.ldr
    
    
    C:\> dir
    02/23/2016 07:57 PM 2,347 rename.py
    02/23/2016 07:57 PM 34,553 export.sql
    02/23/2016 07:52 PM 1,817 FND_LOBS.sql
    02/23/2016 07:57 PM 276 FND_LOBS_CTX.sql
    02/23/2016 07:57 PM 614 FND_LOBS_DATA_TABLE.ctl
    02/23/2016 07:52 PM 88,193 FND_LOBS_DATA_TABLE.ldr
    02/23/2016 07:57 PM 78,178 689427_DATACOM SOUTH ISLAND LTD.htm
    02/23/2016 07:57 PM 27,498 698623_lincraft.htm
    02/23/2016 07:57 PM 17,363 772140_275131.htm
    02/23/2016 07:57 PM 173,568 3685533_RE 新办公室地址.MSG
    :
    :
    
    
    Categories: DBA Blogs

    Compression -- 3 : Index (Key) Compression

    Hemant K Chitale - Sun, 2016-03-13 04:34
    Unlike Table Compression, which uses deduplication of column values, Index Compression is based on the keys.  Key Compression is also called Prefix Compression.

    This relies on eliminating repeated leading key values. For example, if the leading column of a composite index has frequently repeated values then, because an index is always an organised (sorted) structure, the repeated values appear as if "sequentially", and Key Compression can eliminate them.

    Thus, it becomes obvious that Index Key Compression is usable for
    a.  A Composite Index of 2 or more columns
    b.  Repeated appearances of values in the *leading* key columns
    c.  Compression defined for a maximum of n-1 columns  (where n is the number of columns in the index).  That is, the last column cannot be compressed.
    Note that a Non-Unique Index automatically has the ROWID appended to it, so Key Compression can be applied to all the columns defined.

    Let's look at a few examples.

    Starting with creating a fairly large table (that is a multiplied copy of DBA_OBJECTS)

    PDB1@ORCL> create table target_data as select * from source_data where 1=2;

    Table created.

    PDB1@ORCL> insert /*+ APPEND */ into target_data select * from source_data;

    364496 rows created.

    PDB1@ORCL> commit;

    Commit complete.

    PDB1@ORCL> insert /*+ APPEND */ into target_data select * from source_data;

    364496 rows created.

    PDB1@ORCL> commit;

    Commit complete.

    PDB1@ORCL> insert /*+ APPEND */ into target_data select * from source_data;

    364496 rows created.

    PDB1@ORCL> commit;

    Commit complete.

    PDB1@ORCL>
    PDB1@ORCL> desc target_data
    Name                                      Null?    Type
    ----------------------------------------- -------- ----------------------------
    OWNER                                              VARCHAR2(128)
    OBJECT_NAME                                        VARCHAR2(128)
    SUBOBJECT_NAME                                     VARCHAR2(128)
    OBJECT_ID                                          NUMBER
    DATA_OBJECT_ID                                     NUMBER
    OBJECT_TYPE                                        VARCHAR2(23)
    CREATED                                            DATE
    LAST_DDL_TIME                                      DATE
    TIMESTAMP                                          VARCHAR2(19)
    STATUS                                             VARCHAR2(7)
    TEMPORARY                                          VARCHAR2(1)
    GENERATED                                          VARCHAR2(1)
    SECONDARY                                          VARCHAR2(1)
    NAMESPACE                                          NUMBER
    EDITION_NAME                                       VARCHAR2(128)
    SHARING                                            VARCHAR2(13)
    EDITIONABLE                                        VARCHAR2(1)
    ORACLE_MAINTAINED                                  VARCHAR2(1)

    PDB1@ORCL>


    What composite index is a good candidate for Key Compression?
    *Not* an index that begins with OBJECT_ID, as that is a unique value.

    Let's compare two indexes (compressed and non-compressed) on (OWNER, OBJECT_TYPE, OBJECT_NAME).

    PDB1@ORCL> create index target_data_ndx_1_comp on
    2 target_data (owner, object_type, object_name) compress 2;

    Index created.

    PDB1@ORCL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_1_COMP');

    PL/SQL procedure successfully completed.

    PDB1@ORCL> select leaf_blocks
    2 from user_indexes
    3 where index_name = 'TARGET_DATA_NDX_1_COMP'
    4 /

    LEAF_BLOCKS
    -----------
    5629

    PDB1@ORCL>


    PDB1@ORCL> drop index target_data_ndx_1_comp
    2 /

    Index dropped.

    PDB1@ORCL> create index target_data_ndx_2_nocomp on
    2 target_data (owner, object_type, object_name) ;

    Index created.

    PDB1@ORCL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_2_NOCOMP');

    PL/SQL procedure successfully completed.

    PDB1@ORCL> select leaf_blocks
    2 from user_indexes
    3 where index_name = 'TARGET_DATA_NDX_2_NOCOMP'
    4 /

    LEAF_BLOCKS
    -----------
    7608

    PDB1@ORCL>


    Note the "compress 2" specification for the first index.  That is an instruction to compress based on the leading 2 columns.
    Thus, the compressed index is 5,629 blocks but the normal, non-compressed index is 7,608 blocks.  We make a gain of 26% in the index size.

    Why did I choose OWNER, OBJECT_TYPE as the leading columns?  Because I expected a high level of repetition in these columns.
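
    A quick way to sanity-check that expectation (a sketch, not part of the original post) is to compare the number of distinct values in the leading columns with the total row count:

    -- Few distinct (OWNER, OBJECT_TYPE) combinations relative to the total row
    -- count means heavy repetition in the leading key columns, which is exactly
    -- what prefix (key) compression exploits.
    select count(*)                                    total_rows,
           count(distinct owner)                       distinct_owners,
           count(distinct owner || '~' || object_type) distinct_prefixes
    from   target_data;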


    Note : I have not explored Advanced Index Compression, available in 12.1.0.2, in this post; that is covered separately in "Advanced Index Compression tested in 12.1.0.2".
    .
    .

    Categories: DBA Blogs

    UKOUG Application Server & Middleware SIG – Summary

    Tim Hall - Sat, 2016-03-12 08:08

    On Thursday I did a presentation at the UKOUG Application Server & Middleware SIG.

    As I mentioned in my previous post, I was not able to stay for the whole day. I arrived about 30 minutes before my session was scheduled to start. The previous session finished about 10 minutes early and the speaker following me cancelled, so my 45 minute session extended to about 70 minutes. :)

     

    There had already been speakers focussing on Oracle Cloud and Amazon Web Services (AWS), so I did a live demo of Azure, which included building an Oracle Linux VM and doing an install of WebLogic and ADF. There was also a more general presentation about running Oracle products on the cloud. I’m not a WebLogic or cloud specialist, so this presentation is based on me talking about my experiences of those two areas. Peter Berry from Clckwrk and Paul Bainbridge from Fujitsu corrected me on a couple of things, which was cool.

    After my session I hung around for a quick chat, but I had to rush back to work to do an upgrade, which went OK. :)

    Thanks to the organisers for inviting me and thanks to everyone that came along. It would have been good to see the other presentations, but unfortunately that was not possible for me this time!

    Cheers

    Tim…

    PS. Simon, the preinstall packages were installed in the Oracle Linux templates. :)

    # rpm -qa | grep preinstall
    oracle-rdbms-server-12cR1-preinstall-1.0-8.el6.x86_64
    oracle-rdbms-server-11gR2-preinstall-1.0-7.el6.x86_64
    #

    WINDOW NOSORT STOPKEY + RANK()

    XTended Oracle SQL - Fri, 2016-03-11 18:23

    Recently I found that WINDOW NOSORT STOPKEY with RANK()OVER() works very inefficiently: http://www.freelists.org/post/oracle-l/RANKWINDOW-NOSORT-STOPKEY-stopkey-doesnt-work
    The root cause of this behaviour is that Oracle optimizes WINDOW NOSORT STOPKEY with RANK the same way as with DENSE_RANK:


    create table test(n not null) as 
      with gen as (select level n from dual connect by level<=100)
      select g2.n as n
      from gen g1, gen g2
      where g1.n<=10
    /
    create index ix_test on test(n)
    /
    exec dbms_stats.gather_table_stats('','TEST');
    select/*+ gather_plan_statistics */ n
    from (select rank()over(order by n) rnk
                ,n
          from test)
    where rnk<=3
    /
    select * from table(dbms_xplan.display_cursor('','','allstats last'));
    drop table test purge;
    

    Output
             N
    ----------
             1
             1
             1
             1
             1
             1
             1
             1
             1
             1
    
    10 rows selected.
    
    PLAN_TABLE_OUTPUT
    -----------------------------------------------------------------------------------------------------------------------
    SQL_ID  8tbq95dpw0gw7, child number 0
    -------------------------------------
    select/*+ gather_plan_statistics */ n from (select rank()over(order by
    n) rnk             ,n       from test) where rnk<=3
    
    Plan hash value: 1892911073
    
    -----------------------------------------------------------------------------------------------------------------------
    | Id  | Operation              | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
    -----------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT       |         |      1 |        |     10 |00:00:00.01 |       3 |       |       |          |
    |*  1 |  VIEW                  |         |      1 |   1000 |     10 |00:00:00.01 |       3 |       |       |          |
    |*  2 |   WINDOW NOSORT STOPKEY|         |      1 |   1000 |     30 |00:00:00.01 |       3 | 73728 | 73728 |          |
    |   3 |    INDEX FULL SCAN     | IX_TEST |      1 |   1000 |     31 |00:00:00.01 |       3 |       |       |          |
    -----------------------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       1 - filter("RNK"<=3)
       2 - filter(RANK() OVER ( ORDER BY "N")<=3)
    


    As you can see, A-Rows in plan step 2 = 30 – i.e., that is the number of rows where

    DENSE_RANK<=3

    but not

    RANK<=3

    A more efficient approach would be to stop after the first 10 rows, because the 11th row already has a RANK greater than 3!
    But we can create our own STOPKEY version with PL/SQL:

    PLSQL STOPKEY version
    create or replace type rowids_table is table of varchar2(18);
    /
    create or replace function get_rowids_by_rank(
          n          int
         ,max_rank   int
       ) 
       return rowids_table pipelined
    as
    begin
       for r in (
          select/*+ index_rs_asc(t (n))  */ rowidtochar(rowid) chr_rowid, rank()over(order by n) rnk
          from test t
          where t.n > get_rowids_by_rank.n
          order by n
       )
       loop
          if r.rnk <= max_rank then
             pipe row (r.chr_rowid);
          else
             exit;
          end if;
       end loop;
       return;
    end;
    /
    select/*+ leading(r t) use_nl(t) */
       t.*
    from table(get_rowids_by_rank(1, 3)) r
        ,test t
    where t.rowid = chartorowid(r.column_value)
    /
    

    In that case, the fetch from the table stops as soon as rnk becomes larger than max_rank.

    Categories: Development

    See You at SXSW 2016

    Oracle AppsLab - Fri, 2016-03-11 09:59

    If you happen to be in Austin this weekend for SXSWi, look for Osvaldo (@vaini11a), me (@noelportugal) and friend of the ‘Lab Rafa (@rafabelloni).

    We will be closely following all things UX, IoT, VR and AI. Our schedules are getting full with some great sessions and workshops. Check back in a week or so to read some of our impressions!

    Changes to DBCA Patch Application Behaviour Causes PDB Cloning to Fail

    Pythian Group - Fri, 2016-03-11 07:23
    Background

    A test upgrade from 11g to 12c and conversion to a container and pluggable database recently pointed out some important 12c behavior differences with respect to the DBCA and whether or not it automatically applies PSUs installed in the Oracle Home.

    The original objective was to take an existing 11.2.0.4 database and upgrade it to 12.1.0.2 and convert it to a PDB.

    From a high level the procedure was:

    • Install the Oracle 12.1.0.2 software and apply the latest PSU (in this case the JAN2016 PSU).
    • Create a new CDB to house the upgraded database.
    • Upgrade the 11.2.0.4 database to 12.1.0.2 in-place using the DBUA.
    • Convert the upgraded 12c database to a PDB (via the clone through DB link method).

    Seemed pretty straightforward. However, as part of the PDB conversion (running the noncdb_to_pdb.sql script), the following error was encountered:

    SQL> DECLARE
      2     threads pls_integer := &&1;
      3  BEGIN
      4     utl_recomp.recomp_parallel(threads);
      5  END;
      6  /
    DECLARE
    *
    ERROR at line 1:
    ORA-04045: errors during recompilation/revalidation of SYS.DBMS_SQLPATCH
    ORA-00600: internal error code, arguments: [kql_tab_diana:new dep], [0x0CF59D0B8], [0x7F1525B91DE0], [1], [2], [], [], [], [], [], [], []
    ORA-06512: at "SYS.DBMS_UTILITY", line 1294
    ORA-06512: at line 1
    

     

    The noncdb_to_pdb.sql script can only be run once, so at this point the PDB conversion has failed and must be restarted. But first we must understand what went wrong, or what steps we missed.

    Root Cause: DBCA no longer automatically applies PSUs

    It’s obvious from the ORA-04045 error that the issue is related to patching. But the question remains: what was missed in the process, since the 12c Oracle Home was fully patched before creating or upgrading any databases?

    The problem is that DBAs have perhaps become complacent with respect to PSU application after creating databases. With Oracle Database 11g, whenever we created a database via the DBCA, the latest PSU was automatically applied. It didn’t matter whether we created the database from a template or used a custom install. Regardless of which DBCA method was used, after DB creation we’d see something similar to:

    SQL> select comments, action_time from dba_registry_history
      2  where bundle_series like '%PSU' order by 2;
    
    COMMENTS                       ACTION_TIME
    ------------------------------ ------------------------------
    PSU 11.2.0.4.160119            04-MAR-16 02.43.52.292530 PM
    
    SQL>
    

     

    Clearly the latest PSU (JAN2016 in this case) installed in the Oracle Home was applied automatically by the DBCA. And of course this is reflected in the official README documentation (in this example for DB PSU patch 21948347 [JAN2016] – requires a My Oracle Support login to view) which states:

    There are no actions required for databases that have been upgraded or created after installation of PSU 11.2.0.4.160119.

     

    However this functionality has completely changed with Oracle Database 12c! The change in behaviour is documented in My Oracle Support (MOS) Note: “12.1:DBCA (Database Creation) does not execute “datapatch” (Doc ID 2084676.1)” which states:

    DBCA does not execute datapatch in Oracle 12.1.0.X. The solution is to apply the SQL changes manually after creating a new Database

     

    Similarly the 12c JAN2016 DB PSU (patch 21948354) README documentation states:

    You must execute the steps in Section 3.3.2, “Loading Modified SQL Files into the Database” for any new or upgraded database.

     

    This is a significant change in behaviour and is the root cause of the PDB creation error!
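
    Hence, after creating (or upgrading) a 12c database, it is worth verifying whether the PSU SQL has actually been applied before going any further. A quick check might look like the following sketch, which uses the 12c DBA_REGISTRY_SQLPATCH view; on a freshly DBCA-created database it will typically return no rows until datapatch has been run manually.

    -- No rows here means datapatch has not yet applied the PSU SQL changes
    -- to this database, even though the Oracle Home binaries are patched.
    select patch_id, patch_uid, status, description, action_time
    from   dba_registry_sqlpatch
    order  by action_time;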

     

    Resolving the “ORA-00600 [kql_tab_diana:new dep]” error

    Back to the PDB creation error: the first logical place to check whenever experiencing plug-in or PDB creation errors is the PDB_PLUG_IN_VIOLATIONS view:

    SQL> CREATE PLUGGABLE DATABASE MY_PROD FROM NON$CDB@clone_link FILE_NAME_CONVERT=('/u01/app/oracle/oradata/MY_PROD','/u01/app/oracle/oradata/CPRD1/MY_PROD');
    
    Pluggable database created.
    
    SQL> SELECT name, type, status, message, action FROM pdb_plug_in_violations ORDER BY 1,2;
    
    NAME     TYPE      STATUS    MESSAGE                                  ACTION
    -------- --------- --------- ---------------------------------------- ----------------------------------------
    MY_PROD  ERROR     PENDING   PDB plugged in is a non-CDB, requires no Run noncdb_to_pdb.sql.
                                 ncdb_to_pdb.sql be run.
    
    MY_PROD  WARNING   PENDING   CDB parameter compatible mismatch: Previ Please check the parameter in the curren
                                 ous '11.2.0.4.0' Current '12.1.0.2.0'    t CDB
    
    MY_PROD  WARNING   PENDING   Service name or network name of service  Drop the service and recreate it with an
                                 MY_PROD in the PDB is invalid or conflic  appropriate name.
                                 ts with an existing service name or netw
                                 ork name in the CDB.
    
    
    SQL>
    

     

    Nothing there is really concerning yet; it’s pretty much what we’d expect to see at this point. However, taking the next step in the PDB clone process, we encounter the error:

    SQL> ALTER SESSION SET CONTAINER=MY_PROD;
    
    Session altered.
    
    SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
    ...
    ...
    SQL> DECLARE
      2     threads pls_integer := &&1;
      3  BEGIN
      4     utl_recomp.recomp_parallel(threads);
      5  END;
      6  /
    DECLARE
    *
    ERROR at line 1:
    ORA-04045: errors during recompilation/revalidation of SYS.DBMS_SQLPATCH
    ORA-00600: internal error code, arguments: [kql_tab_diana:new dep],
    [0x062623070], [0x7FB582065DE0], [1], [2], [], [], [], [], [], [], []
    ORA-06512: at "SYS.DBMS_UTILITY", line 1294
    ORA-06512: at line 1
    
    
    Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
    

     

    Now looking in the PDB_PLUG_IN_VIOLATIONS view the error is evident:

    SQL> SELECT name, type, status, message, action FROM pdb_plug_in_violations ORDER BY 1,2;
    
    NAME     TYPE      STATUS    MESSAGE                                            ACTION
    -------- --------- --------- -------------------------------------------------- --------------------------------------------------
    MY_PROD  ERROR     PENDING   SQL patch ID/UID 22139226/19729684 (Database PSU 1 Call datapatch to install in the PDB or the CDB
                                 2.1.0.2.160119, Oracle JavaVM Component (Jan2016))
                                 : Installed in the PDB but not in the CDB.
    
    MY_PROD  ERROR     PENDING   PDB plugged in is a non-CDB, requires noncdb_to_pd Run noncdb_to_pdb.sql.
                                 b.sql be run.
    
    MY_PROD  WARNING   RESOLVED  Service name or network name of service MY_PROD in Drop the service and recreate it with an appropria
                                  the PDB is invalid or conflicts with an existing  te name.
                                 service name or network name in the CDB.
    
    MY_PROD  WARNING   RESOLVED  CDB parameter compatible mismatch: Previous '11.2. Please check the parameter in the current CDB
                                 0.4.0' Current '12.1.0.2.0'
    
    MY_PROD  WARNING   PENDING   Database option OLS mismatch: PDB installed versio Fix the database option in the PDB or the CDB
                                 n NULL. CDB installed version 12.1.0.2.0.
    
    MY_PROD  WARNING   PENDING   Database option DV mismatch: PDB installed version Fix the database option in the PDB or the CDB
                                  NULL. CDB installed version 12.1.0.2.0.
    
    
    6 rows selected.
    
    SQL>
    

     

    At this point, since the PDB clone has failed and the noncdb_to_pdb.sql script cannot be run twice, the new PDB should be dropped. The root cause of the error then needs to be resolved by patching, and the clone repeated.
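
    The cleanup itself is straightforward (a sketch, using the PDB name from this example):

    -- Drop the half-converted PDB, including its copied datafiles,
    -- so the clone can be repeated once the patching issue is resolved.
    ALTER PLUGGABLE DATABASE MY_PROD CLOSE IMMEDIATE;
    DROP PLUGGABLE DATABASE MY_PROD INCLUDING DATAFILES;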

    Applying the PSU

    Fortunately the fix is conceptually simple: apply the PSU patch into the database. Though the catch is that I actually had installed the “Combo of 12.1.0.2.160119 OJVM PSU and 12.1.0.2.160119 DB PSU (Jan 2016)” bundle patch (22191659) into the Oracle Home. This combo includes the DB PSU (patch 21948354) plus the OJVM PSU (patch 22139226). And while the DB PSU can be applied without outage, the OJVM patch cannot. Instead for the OJVM patch or the combo, the CDB and the PDBs must all be restarted in UPGRADE mode.

    Restarting in UPGRADE mode is fine in this case study where the CDB was just recently created to house the newly upgraded PDB. But if trying to plug the new database into an existing CDB with other applications running in production, shutting down the entire CDB to run datapatch may cause a problem.

    Following the README documentation for just the JAN2016 DB PSU (patch 21948354) doesn’t help. It states that the patch can be applied with the database and pluggable databases open (section “3.3.2 Loading Modified SQL Files into the Database“). However, because I’ve installed the combo patch into the Oracle Home, trying to patch with the database open causes the patching to fail:

    $ ./datapatch -verbose
    SQL Patching tool version 12.1.0.2.0 on Fri Mar  4 15:45:27 2016
    Copyright (c) 2015, Oracle.  All rights reserved.
    
    Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_3260_2016_03_04_15_45_27/sqlpatch_invocation.log
    
    Connecting to database...OK
    Note:  Datapatch will only apply or rollback SQL fixes for PDBs
           that are in an open state, no patches will be applied to closed PDBs.
           Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
           (Doc ID 1585822.1)
    Bootstrapping registry and package to current versions...done
    Determining current state...done
    
    Current state of SQL patches:
    Patch 22139226 (Database PSU 12.1.0.2.160119, Oracle JavaVM Component (Jan2016)):
      Installed in the binary registry only
    Bundle series PSU:
      ID 160119 in the binary registry and not installed in any PDB
    
    Adding patches to installation queue and performing prereq checks...
    Installation queue:
      For the following PDBs: CDB$ROOT PDB$SEED
        Nothing to roll back
        The following patches will be applied:
          22139226 (Database PSU 12.1.0.2.160119, Oracle JavaVM Component (Jan2016))
          21948354 (Database Patch Set Update : 12.1.0.2.160119 (21948354))
    
    Error: prereq checks failed!
      patch 22139226: The pluggable databases that need to be patched must be in upgrade mode
    Prereq check failed, exiting without installing any patches.
    
    Please refer to MOS Note 1609718.1 and/or the invocation log
    /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_3260_2016_03_04_15_45_27/sqlpatch_invocation.log
    for information on how to resolve the above errors.
    
    SQL Patching tool complete on Fri Mar  4 15:45:52 2016
    $
    

     

    The solution to this error is to start the CDB and PDBs in UPGRADE mode (as per the OJVM patch documentation) and then re-run datapatch:

    SQL> shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> startup upgrade;
    ORACLE instance started.
    
    Total System Global Area 2097152000 bytes
    Fixed Size                  2926320 bytes
    Variable Size             603982096 bytes
    Database Buffers         1476395008 bytes
    Redo Buffers               13848576 bytes
    Database mounted.
    Database opened.
    SQL> alter pluggable database all open upgrade;
    
    Pluggable database altered.
    
    SQL> exit
    Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
    
    $ ./datapatch -verbose
    SQL Patching tool version 12.1.0.2.0 on Fri Mar  4 15:50:59 2016
    Copyright (c) 2015, Oracle.  All rights reserved.
    
    Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_5137_2016_03_04_15_50_59/sqlpatch_invocation.log
    
    Connecting to database...OK
    Note:  Datapatch will only apply or rollback SQL fixes for PDBs
           that are in an open state, no patches will be applied to closed PDBs.
           Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
           (Doc ID 1585822.1)
    Bootstrapping registry and package to current versions...done
    Determining current state...done
    
    Current state of SQL patches:
    Patch 22139226 (Database PSU 12.1.0.2.160119, Oracle JavaVM Component (Jan2016)):
      Installed in the binary registry only
    Bundle series PSU:
      ID 160119 in the binary registry and not installed in any PDB
    
    Adding patches to installation queue and performing prereq checks...
    Installation queue:
      For the following PDBs: CDB$ROOT PDB$SEED
        Nothing to roll back
        The following patches will be applied:
          22139226 (Database PSU 12.1.0.2.160119, Oracle JavaVM Component (Jan2016))
          21948354 (Database Patch Set Update : 12.1.0.2.160119 (21948354))
    
    Installing patches...
    Patch installation complete.  Total patches installed: 8
    
    Validating logfiles...
    Patch 22139226 apply (pdb CDB$ROOT): SUCCESS
      logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/22139226/19729684/22139226_apply_CPRD1_CDBROOT_2016Mar04_15_51_23.log (no errors)
    Patch 21948354 apply (pdb CDB$ROOT): SUCCESS
      logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/21948354/19553095/21948354_apply_CPRD1_CDBROOT_2016Mar04_15_51_24.log (no errors)
    Patch 22139226 apply (pdb PDB$SEED): SUCCESS
      logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/22139226/19729684/22139226_apply_CPRD1_PDBSEED_2016Mar04_15_51_28.log (no errors)
    Patch 21948354 apply (pdb PDB$SEED): SUCCESS
      logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/21948354/19553095/21948354_apply_CPRD1_PDBSEED_2016Mar04_15_51_29.log (no errors)
    SQL Patching tool complete on Fri Mar  4 15:51:31 2016
    $
    

     

    Now retrying the CDB cloning process:

    SQL> ALTER SESSION SET CONTAINER=MY_PROD;
    
    Session altered.
    
    SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
    ...
    ...
    
    SQL> alter session set container = "&pdbname";
    
    Session altered.
    
    SQL>
    SQL> -- leave the PDB in the same state it was when we started
    SQL> BEGIN
      2    execute immediate '&open_sql &restricted_state';
      3  EXCEPTION
      4    WHEN OTHERS THEN
      5    BEGIN
      6      IF (sqlcode <> -900) THEN
      7        RAISE;
      8      END IF;
      9    END;
     10  END;
     11  /
    
    PL/SQL procedure successfully completed.
    
    SQL>
    SQL>
    SQL> WHENEVER SQLERROR CONTINUE;
    SQL> ALTER PLUGGABLE DATABASE MY_PROD OPEN;
    
    Warning: PDB altered with errors.
    
    SQL> connect / as sysdba
    Connected.
    SQL> SELECT name, type, status, message, action FROM pdb_plug_in_violations ORDER BY 1,2;
    
    NAME     TYPE      STATUS    MESSAGE                                            ACTION
    -------- --------- --------- -------------------------------------------------- --------------------------------------------------
    MY_PROD  ERROR     RESOLVED  PDB plugged in is a non-CDB, requires noncdb_to_pd Run noncdb_to_pdb.sql.
                                 b.sql be run.
    
    MY_PROD  ERROR     PENDING   PSU bundle patch 160119 (Database Patch Set Update Call datapatch to install in the PDB or the CDB
                                  : 12.1.0.2.160119 (21948354)): Installed in the C
                                 DB but not in the PDB.
    
    MY_PROD  WARNING   RESOLVED  Service name or network name of service MY_PROD in Drop the service and recreate it with an appropria
                                  the PDB is invalid or conflicts with an existing  te name.
                                 service name or network name in the CDB.
    
    MY_PROD  WARNING   PENDING   Database option OLS mismatch: PDB installed versio Fix the database option in the PDB or the CDB
                                 n NULL. CDB installed version 12.1.0.2.0.
    
    MY_PROD  WARNING   PENDING   Database option DV mismatch: PDB installed version Fix the database option in the PDB or the CDB
                                  NULL. CDB installed version 12.1.0.2.0.
    
    MY_PROD  WARNING   RESOLVED  CDB parameter compatible mismatch: Previous '11.2. Please check the parameter in the current CDB
                                 0.4.0' Current '12.1.0.2.0'
    
    
    6 rows selected.
    
    SQL>
    

     

    Note that the first time, the error was related to the OJVM PSU patch and stated that the PDB was patched but the CDB was not. Now, after patching the CDB, the error message states that the DB PSU patch is installed in the CDB but not in the PDB.

    Again the solution is to run datapatch one more time. Fortunately since we’re only patching a PDB, we no longer need to worry about starting the CDB and PDBs in UPGRADE mode to apply the OJVM patch. The OJVM patch does not apply to the PDBs.  Hence we can patch successfully with both the CDB and PDBs open:

    $ ./datapatch -verbose
    SQL Patching tool version 12.1.0.2.0 on Fri Mar  4 16:19:06 2016
    Copyright (c) 2015, Oracle.  All rights reserved.
    
    Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_9245_2016_03_04_16_19_06/sqlpatch_invocation.log
    
    Connecting to database...OK
    Note:  Datapatch will only apply or rollback SQL fixes for PDBs
           that are in an open state, no patches will be applied to closed PDBs.
           Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
           (Doc ID 1585822.1)
    Bootstrapping registry and package to current versions...done
    Determining current state...done
    
    Current state of SQL patches:
    Patch 22139226 (Database PSU 12.1.0.2.160119, Oracle JavaVM Component (Jan2016)):
      Installed in binary and CDB$ROOT PDB$SEED MY_PROD
    Bundle series PSU:
      ID 160119 in the binary registry and ID 160119 in PDB CDB$ROOT, ID 160119 in PDB PDB$SEED
    
    Adding patches to installation queue and performing prereq checks...
    Installation queue:
      For the following PDBs: CDB$ROOT PDB$SEED
        Nothing to roll back
        Nothing to apply
      For the following PDBs: MY_PROD
        Nothing to roll back
        The following patches will be applied:
          21948354 (Database Patch Set Update : 12.1.0.2.160119 (21948354))
    
    Installing patches...
    Patch installation complete.  Total patches installed: 1
    
    Validating logfiles...
    Patch 21948354 apply (pdb MY_PROD): SUCCESS
      logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/21948354/19553095/21948354_apply_CPRD1_MY_PROD_2016Mar04_16_19_31.log (no errors)
    SQL Patching tool complete on Fri Mar  4 16:19:32 2016
    $
    

     

    And finally the cloned PDB can be opened successfully:

    SQL> ALTER PLUGGABLE DATABASE MY_PROD CLOSE;
    
    Pluggable database altered.
    
    SQL> ALTER PLUGGABLE DATABASE MY_PROD OPEN;
    
    Pluggable database altered.
    
    SQL> show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  NO
             3 MY_PROD                        READ WRITE NO
    
    SQL> SELECT name, type, status, message, action FROM pdb_plug_in_violations ORDER BY 1,2;
    
    NAME     TYPE      STATUS    MESSAGE                                            ACTION
    -------- --------- --------- -------------------------------------------------- --------------------------------------------------
    MY_PROD  ERROR     RESOLVED  PDB plugged in is a non-CDB, requires noncdb_to_pd Run noncdb_to_pdb.sql.
                                 b.sql be run.
    
    MY_PROD  ERROR     RESOLVED  PSU bundle patch 160119 (Database Patch Set Update Call datapatch to install in the PDB or the CDB
                                  : 12.1.0.2.160119 (21948354)): Installed in the C
                                 DB but not in the PDB.
    
    MY_PROD  WARNING   RESOLVED  Service name or network name of service MY_PROD in Drop the service and recreate it with an appropria
                                  the PDB is invalid or conflicts with an existing  te name.
                                 service name or network name in the CDB.
    
    MY_PROD  WARNING   PENDING   Database option OLS mismatch: PDB installed versio Fix the database option in the PDB or the CDB
                                 n NULL. CDB installed version 12.1.0.2.0.
    
    MY_PROD  WARNING   PENDING   Database option DV mismatch: PDB installed version Fix the database option in the PDB or the CDB
                                  NULL. CDB installed version 12.1.0.2.0.
    
    MY_PROD  WARNING   RESOLVED  CDB parameter compatible mismatch: Previous '11.2. Please check the parameter in the current CDB
                                 0.4.0' Current '12.1.0.2.0'
    
    
    6 rows selected.
    
    SQL>
    

    The warnings marked as "PENDING" can be safely ignored; they simply reflect database options (OLS and DV) that are installed in the CDB but not in the PDB.

    Conclusion

    What started out as an issue when cloning a non-CDB into a PDB led to some learning about patching with Oracle Database 12c.

    The most important take-away is that Oracle Database 12c changes the behaviour of patch application when databases are created through the DBCA: datapatch is not run for you. This change is well documented in both the patch README and the MOS documents, so a DBA who reads the documentation thoroughly won't have a problem. However, a DBA who is used to doing things the "old way" and only skims the documentation may unexpectedly get caught by errors such as the ORA-00600 encountered here when creating a PDB through cloning.
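    For reference, here is a minimal sketch of running datapatch by hand after DBCA has created a database (or after plugging in a PDB). The ORACLE_HOME path and SID are placeholders loosely based on this environment, so adjust them to your own:

    # Minimal sketch: run datapatch manually so the SQL patch actions reach all open PDBs.
    # The ORACLE_HOME path and ORACLE_SID are placeholders for your environment.
    export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
    export ORACLE_SID=CPRD1
    export PATH=$ORACLE_HOME/bin:$PATH

    # Datapatch only patches PDBs that are open, so open them first.
    echo 'ALTER PLUGGABLE DATABASE ALL OPEN;' | sqlplus -s / as sysdba

    # Apply (or roll back) SQL patch actions to bring the registry in line with the binaries.
    "$ORACLE_HOME"/OPatch/datapatch -verbose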

    References

    My Oracle Support (MOS) Documents:

    • 12.1:DBCA(Database Creation) does not execute “datapatch” (Doc ID 2084676.1)
    • How to Convert Non PDB to PDB Database in 12c – Testcase (Doc ID 2012448.1)
    • How to migrate an existing pre12c database(nonCDB) to 12c CDB database ? (Doc ID 1564657.1)
    • Complete Checklist for Upgrading to Oracle Database 12c Release 1 using DBUA (Doc ID 1516557.1)


    Categories: DBA Blogs

    Wrong Results ?

    Jonathan Lewis - Fri, 2016-03-11 03:18

    I gather that journalistic style dictates that if the headline is a question then the answer is no. So, following on from a discussion of possible side effects of partition exchange, let’s look at an example which doesn’t involve partitions. I’ve got a schema that holds nothing but two small, simple heap tables, parent and child (with declared primary keys and the obvious referential integrity constraint), and I run a couple of very similar queries that produce remarkably different results:

    
    select
            par.id      parent_id,
            chi.id      child_id,
            chi.name    child_name
    from
            parent  par,
            child   chi
    where
            chi.id_p = par.id
    order by
            par.id, chi.id
    ;
    
     PARENT_ID   CHILD_ID CHILD_NAME
    ---------- ---------- ----------
             1          1 Simon
             1          2 Sally
             2          1 Janet
             2          2 John
             3          1 Orphan
    
    5 rows selected.
    
    

    Having got this far with my first query I’ve decided to add the parent name to the report:

    
    select
            par.id      parent_id,
            par.name    parent_name,
            chi.id      child_id,
            chi.name    child_name
    from
            parent  par,
            child   chi
    where
            chi.id_p = par.id
    order by
            par.id, chi.id
    ;
    
     PARENT_ID PARENT_NAM   CHILD_ID CHILD_NAME
    ---------- ---------- ---------- ----------
             1 Smith2              1 Simon
             1 Smith               1 Simon
             1 Smith2              2 Sally
             1 Smith               2 Sally
             2 Jones               1 Janet
             2 Jones               2 John
    
    6 rows selected.
    
    

    How could adding a column to the select list result in one child row disappearing and two child rows being duplicated; and is this a bug ?

    To avoid any confusion, here’s the complete script I used for creating the schema owner, in 11.2.0.4, with no extra privileges granted to PUBLIC:

    
    create user u1
            identified by u1
            default tablespace test_8k
            quota unlimited on test_8k
    ;
    
    grant
            create session,
            create table
    to
            u1
    ;
    
    
    
    Update

    It didn’t take long for a couple of people to suggest that the oddity was the consequence of constraints that had not been enabled and validated 100% of the time, but the suggestions offered were a little more convoluted than necessary. Here’s the code I ran from my brand new account before running the two select statements:

    
    create table parent (
            id      number(4),
            name    varchar2(10),
            constraint par_pk primary key (id)
            rely disable novalidate
    )
    ;
    
    create table child(
            id_p    number(4)
                    constraint chi_fk_par
                    references parent
                    on delete cascade
                    rely disable novalidate,
            id      number(4),
            name    varchar2(10),
            constraint chi_pk primary key (id_p, id)
                    rely disable novalidate
    )
    ;
    
    insert into parent values (1,'Smith');
    insert into parent values (1,'Smith2');
    insert into parent values (2,'Jones');
    
    insert into child values(1,1,'Simon');
    insert into child values(1,2,'Sally');
    
    insert into child values(2,1,'Janet');
    insert into child values(2,2,'John');
    
    insert into child values(3,1,'Orphan');
    
    commit;
    
    begin
            dbms_stats.gather_table_stats(user,'child');
            dbms_stats.gather_table_stats(user,'parent');
    end;
    /
    
    
    

    In a typical data warehouse frame of mind I’ve added plenty of constraints, but left them all disabled and novalidated while telling Oracle to rely on them for optimisation strategies. This means all sorts of incorrect data can get into the tables, with all sorts of unexpected side effects on reporting. The example above shows duplicates on the primary key (and if you checked the table definition you’d find that the primary key columns are nullable as well) and a child row with no matching parent key.

    In fact 11g and 12c behave differently – the appearance of the Orphan row in the first sample query is due, as Chris_cc pointed out in the first comment, to the optimizer deciding that it could use join elimination because it was joining to a single-column primary key without selecting any other columns from the referenced table. In 12c the optimizer doesn’t use join elimination for this query, so both queries have the same (duplicated) output.
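    As a footnote, here is a quick sketch (assuming the u1 schema and data created above, run from the shell over a local connection) of how validating the constraints would have exposed the bad data rather than leaving it to surprise you in a report:

    # Sketch only: attempt to validate the "relied upon" constraint and look for orphans.
    # Assumes the u1/u1 account and tables created in the script above.
    {
      # Expected to fail because of the duplicate parent rows (typically ORA-02437).
      echo "alter table parent modify constraint par_pk enable validate;"
      # A simple anti-join shows the child row with no matching parent (the 'Orphan' row).
      echo "select c.id_p, c.id, c.name from child c"
      echo "where not exists (select null from parent p where p.id = c.id_p);"
    } | sqlplus -s u1/u1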

    Update:

    Make sure you read the articles linked to by Dani Schneider’s comment below, and note especially the impact on the query_rewrite_integrity parameter.


    Centralized authentication with Red Hat Directory Server for Linux systems

    Pythian Group - Thu, 2016-03-10 15:14

    User management on Linux systems can be tedious, and once you have more than ten systems it takes a significant amount of time to manage user accounts on each system individually.

    There are various tools available to overcome this, and all of these use LDAP in some way.

    The same goes for Red Hat Directory Server, Red Hat’s LDAP-based directory server for centralized user management. Although I primarily demonstrate integration of Red Hat Directory Server with Linux systems here, it can be used with any system that supports LDAP authentication.

    You can find the official Red Hat Directory Server installation guide here.

    For our test scenario I used two RHEL 5 servers: server101, which is the Red Hat Directory Server, and server201, which is the client.

    For RHEL-based systems you need to make sure that you are subscribed to the RHDS repository in order to install Red Hat Directory Server. If you are using CentOS or other derivatives you can use 389 Directory Server, which is the upstream project for Red Hat Directory Server.
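    For example, on CentOS the upstream packages can usually be pulled from EPEL; the package names below are an assumption, so check what your release actually ships:

    # Sketch for CentOS and other derivatives (assumes the EPEL repository is available).
    yum install -y epel-release
    yum install -y 389-ds-base 389-admin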

    Once you have the prerequisites ready you can start with the installation.

    Installing Red Hat Directory Server

    I have designated server101 as Red Hat Directory Server.

    Below are the steps to install the packages required for Red Hat Directory Server.

    [root@server101 ~]#yum install redhat-ds -y


    Once the installation is complete we can move to configuring Red Hat Directory Server.

     

    Configuring Red Hat Directory Server

     

    [root@server101 ~]# setup-ds-admin.pl

    Once you run this command you will be prompted for inputs by the setup script, which are mostly straightforward.

    But there are a few things that need to be taken care of before we proceed with the configuration.

    We want to run the ldap service as the ldap user, so create the ldap user and group if they do not already exist.

    Then open the ports below on your firewall/iptables so that the directory server can work properly.

    • 389 for the LDAP service
    • 636 for the secure LDAP (LDAPS) service
    • 9830 for directory server admin console connectivity

    You should also increase the number of file descriptors, as this helps Red Hat Directory Server access files more efficiently; raising the maximum number of file descriptors the kernel can allocate can also improve file access speeds. A consolidated shell sketch of these prerequisite steps follows the list below.

    • First, check the current limit for file descriptors in /proc/sys/fs/file-max.
    • If the setting is lower than 64000, edit the /etc/sysctl.conf file and set the fs.file-max parameter to 64000 or higher.
    • Then increase the maximum number of open files on the system by editing the /etc/security/limits.conf configuration file, adding the following entry:
      *        -        nofile        8192
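
    Here is a consolidated sketch of those prerequisite steps for RHEL 5. The iptables handling and the exact limit values are assumptions, so adapt them to your own firewall policy and sizing:

    # Sketch of the prerequisites: ldap user/group, firewall ports, file descriptor limits.

    # 1. Create the ldap group and user that the directory server will run as (if missing).
    getent group ldap  >/dev/null || groupadd ldap
    getent passwd ldap >/dev/null || useradd -g ldap -r -s /sbin/nologin ldap

    # 2. Open the required ports: 389 (LDAP), 636 (LDAPS), 9830 (admin console).
    for port in 389 636 9830; do
        iptables -I INPUT -p tcp --dport "$port" -j ACCEPT
    done
    service iptables save

    # 3. Raise the file descriptor limits.
    grep -q '^fs.file-max' /etc/sysctl.conf || echo 'fs.file-max = 64000' >> /etc/sysctl.conf
    sysctl -p
    echo '*        -        nofile        8192' >> /etc/security/limits.conf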

     

    After this we can start configuring Red Hat Directory Server with the setup-ds-admin.pl command.

    Once it is executed it will prompt for inputs, which are mostly self-explanatory, as shown below. We will accept most of the default values, as this is a fresh installation; we will only change the system user and group that will run the ldap service from nobody to the ldap user we created earlier. Don’t forget to make a note of the passwords for admin and Directory Manager, as they will be used to log in to the Admin Console.

     

    [root@server101 ~]# setup-ds-admin.pl -k

    ==============================================================================
    This program will set up the Red Hat Directory and Administration Servers.

    It is recommended that you have “root” privilege to set up the software.
    Tips for using this program:
    – Press “Enter” to choose the default and go to the next screen
    – Type “Control-B” then “Enter” to go back to the previous screen
    – Type “Control-C” to cancel the setup program

    Would you like to continue with set up? [yes]: yes

    ==============================================================================
    BY SETTING UP AND USING THIS SOFTWARE YOU ARE CONSENTING TO BE BOUND BY
    AND ARE BECOMING A PARTY TO THE AGREEMENT FOUND IN THE
    LICENSE.TXT FILE. IF YOU DO NOT AGREE TO ALL OF THE TERMS
    OF THIS AGREEMENT, PLEASE DO NOT SET UP OR USE THIS SOFTWARE.

    Do you agree to the license terms? [no]: yes

    ==============================================================================
    Your system has been scanned for potential problems, missing patches,
    etc. The following output is a report of the items found that need to
    be addressed before running this software in a production
    environment.

    Red Hat Directory Server system tuning analysis version 10-AUGUST-2007.

    NOTICE : System is i686-unknown-linux2.6.18-308.el5 (1 processor).

    WARNING: 502MB of physical memory is available on the system. 1024MB is recommended for best performance on large production system.

    NOTICE: The net.ipv4.tcp_keepalive_time is set to 7200000 milliseconds
    (120 minutes). This may cause temporary server congestion from lost
    client connections.

    WARNING: There are only 1024 file descriptors (hard limit) available, which
    limit the number of simultaneous connections.

    WARNING: There are only 1024 file descriptors (soft limit) available, which
    limit the number of simultaneous connections.

    Would you like to continue? [no]: yes

    ==============================================================================
    Choose a setup type:

    1. Express
    Allows you to quickly set up the servers using the most
    common options and pre-defined defaults. Useful for quick
    evaluation of the products.

    2. Typical
    Allows you to specify common defaults and options.

    3. Custom
    Allows you to specify more advanced options. This is
    recommended for experienced server administrators only.

    To accept the default shown in brackets, press the Enter key.

    Choose a setup type [2]:

    ==============================================================================
    Enter the fully qualified domain name of the computer
    on which you’re setting up server software. Using the form
    <hostname>.<domainname>
    Example: eros.example.com.

    To accept the default shown in brackets, press the Enter key.

    Computer name [server101.suratlug.org]: server101.example.com

    ==============================================================================
    The servers must run as a specific user in a specific group.
    It is strongly recommended that this user should have no privileges
    on the computer (i.e. a non-root user). The setup procedure
    will give this user/group some permissions in specific paths/files
    to perform server-specific operations.

    If you have not yet created a user and group for the servers,
    create this user and group using your native operating
    system utilities.

    System User [nobody]: ldap
    System Group [nobody]: ldap

    ==============================================================================
    Server information is stored in the configuration directory server.
    This information is used by the console and administration server to
    configure and manage your servers.  If you have already set up a
    configuration directory server, you should register any servers you
    set up or create with the configuration server. To do so, the
    following information about the configuration server is required: the
    fully qualified host name of the form
    <hostname>.<domainname>(e.g. hostname.example.com), the port number
    (default 389), the suffix, the DN and password of a user having
    permission to write the configuration information, usually the
    configuration directory administrator, and if you are using security
    (TLS/SSL). If you are using TLS/SSL, specify the TLS/SSL (LDAPS) port
    number (default 636) instead of the regular LDAP port number, and
    provide the CA certificate (in PEM/ASCII format).

    If you do not yet have a configuration directory server, enter ‘No’ to
    be prompted to set up one.

    Do you want to register this software with an existing
    configuration directory server? [no]:

    ==============================================================================
    Please enter the administrator ID for the configuration directory
    server. This is the ID typically used to log in to the console.  You
    will also be prompted for the password.

    Configuration directory server
    administrator ID [admin]:
    Password:
    Password (confirm):

    ==============================================================================
    The information stored in the configuration directory server can be
    separated into different Administration Domains. If you are managing
    multiple software releases at the same time, or managing information
    about multiple domains, you may use the Administration Domain to keep
    them separate.

    If you are not using administrative domains, press Enter to select the
    default. Otherwise, enter some descriptive, unique name for the
    administration domain, such as the name of the organization
    responsible for managing the domain.

    Administration Domain [example.com]:

    ==============================================================================
    The standard directory server network port number is 389. However, if
    you are not logged as the superuser, or port 389 is in use, the
    default value will be a random unused port number greater than 1024.
    If you want to use port 389, make sure that you are logged in as the
    superuser, that port 389 is not in use.

    Directory server network port [389]:

    ==============================================================================
    Each instance of a directory server requires a unique identifier.
    This identifier is used to name the various
    instance specific files and directories in the file system,
    as well as for other uses as a server instance identifier.

    Directory server identifier [server101]:

    ==============================================================================
    The suffix is the root of your directory tree.  The suffix must be a valid DN.
    It is recommended that you use the dc=domaincomponent suffix convention.
    For example, if your domain is example.com,
    you should use dc=example,dc=com for your suffix.
    Setup will create this initial suffix for you,
    but you may have more than one suffix.
    Use the directory server utilities to create additional suffixes.

    Suffix [dc=example, dc=com]:

    ==============================================================================
    Certain directory server operations require an administrative user.
    This user is referred to as the Directory Manager and typically has a
    bind Distinguished Name (DN) of cn=Directory Manager.
    You will also be prompted for the password for this user. The password must
    be at least 8 characters long, and contain no spaces.
    Press Control-B or type the word “back”, then Enter to back up and start over.

    Directory Manager DN [cn=Directory Manager]:
    Password:
    Password (confirm):

    ==============================================================================
    The Administration Server is separate from any of your web or application
    servers since it listens to a different port and access to it is
    restricted.

    Pick a port number between 1024 and 65535 to run your Administration
    Server on. You should NOT use a port number which you plan to
    run a web or application server on, rather, select a number which you
    will remember and which will not be used for anything else.

    Administration port [9830]:

    ==============================================================================
    The interactive phase is complete.  The script will now set up your
    servers.  Enter No or go Back if you want to change something.

    Are you ready to set up your servers? [yes]:
    Creating directory server . . .
    Your new DS instance ‘server101’ was successfully created.
    Creating the configuration directory server . . .
    Beginning Admin Server creation . . .
    Creating Admin Server files and directories . . .
    Updating adm.conf . . .
    Updating admpw . . .
    Registering admin server with the configuration directory server . . .
    Updating adm.conf with information from configuration directory server . . .
    Updating the configuration for the httpd engine . . .
    Starting admin server . . .
    The admin server was successfully started.
    Admin server was successfully created, configured, and started.
    Exiting . . .
    Log file is ‘/tmp/setupZa3jGe.log’

    [root@server101 ~]#


    Now that we have installed and configured Red Hat Directory Server, note that it is not set to autostart at system boot.

    So we need to configure the directory service (dirsrv) and the directory console admin service (dirsrv-admin) to start at boot.

    [root@server101 ~]# chkconfig dirsrv-admin --list
    dirsrv-admin   0:off 1:off 2:off 3:off 4:off 5:off 6:off
    [root@server101 ~]# chkconfig dirsrv --list
    dirsrv         0:off 1:off 2:off 3:off 4:off 5:off 6:off
    [root@server101 ~]# chkconfig dirsrv on
    [root@server101 ~]# chkconfig dirsrv-admin on
    [root@server101 ~]#
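
    With both services enabled for boot, a quick sketch of restarting them now and confirming they come up (the dirsrv init script manages every configured instance):

    # Sketch: restart the directory server and admin server, then check their status.
    service dirsrv restart
    service dirsrv-admin restart
    service dirsrv status
    service dirsrv-admin status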

    Now that we have our server ready, we need to add a user to it.

    We will use the Directory Server admin console to connect to the GUI and create the LDAP user from there.

    We can invoke the admin console GUI with the redhat-idm-console command. It will open a login window like the one below.

    Directory Server Admin Console GUI

    The user ID is the Directory Manager account that was created during directory server setup; by default it is cn=Directory Manager. Enter its password and the Administration URL, http://server101:9830.

    Directory Server Admin Console

    Once you log in you will be presented with the console screen.


    Now click on the Users and Groups tab, then click the Create button and select User from the menu.


    Now select the organizational unit; we will use the default and select People from the list.


    It will open the Create User menu.


    Now we will create the ldapuser account. Fill in the required details, and also select the Posix User tab, since we need the account for Unix system login; fill in the required details for the posix account as well. (An equivalent command-line sketch follows.)
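
    If you prefer the command line to the console, the same posix user can be added with ldapadd (from the openldap-clients package). This is only an illustrative sketch: the suffix and ou=People assume the defaults chosen during setup-ds-admin.pl, and the uid/gid numbers and password are made-up example values.

    # Illustrative only: create the same posix user from an LDIF file instead of the GUI.
    {
      echo "dn: uid=ldapuser,ou=People,dc=example,dc=com"
      echo "objectClass: top"
      echo "objectClass: person"
      echo "objectClass: organizationalPerson"
      echo "objectClass: inetOrgPerson"
      echo "objectClass: posixAccount"
      echo "cn: Ldap User"
      echo "sn: User"
      echo "uid: ldapuser"
      echo "uidNumber: 5001"
      echo "gidNumber: 5001"
      echo "homeDirectory: /home/ldapuser"
      echo "loginShell: /bin/bash"
      echo "userPassword: changeme"
    } > ldapuser.ldif

    # Bind as the Directory Manager and add the entry.
    ldapadd -x -H ldap://server101 -D "cn=Directory Manager" -W -f ldapuser.ldif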


    Now that we have created the user account, we can start configuring the client.

     

    Configuring Linux client for LDAP login

    I have created server201, which we will configure for LDAP login.

    For that we need to execute authconfig-tui from the console.

    It will open a terminal UI where we can configure authentication to use LDAP.

    [root@server201 pam.d]# authconfig-tui


    Select Use LDAP for user information.


    Select Use LDAP Authentication. On the next screen you will typically be prompted for the LDAP server and base DN; for this setup they would be server101 and dc=example,dc=com.


    After this we need to make sure that when a user logs in to the server with LDAP authentication, the home directory is created automatically; this is not enabled by default.

    We can do this by executing the command below at the console.

    [root@server201 pam.d]# authconfig --enablemkhomedir --update


    Once this is done you can use your LDAP user to log in to the client server.
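
    A quick sanity check from the client that the directory account is visible and usable (this assumes the ldapuser account created earlier):

    # Sketch: confirm the LDAP user resolves through NSS and that first login creates the home dir.
    getent passwd ldapuser
    id ldapuser
    su - ldapuser -c pwd    # pam_mkhomedir should create /home/ldapuser on first login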

    Now that we have set up LDAP, we can use it for centralized login on all Linux systems in the environment.

    User management is easy from a single location.

    We can also configure TLS and set up replication for redundancy.

    We can define schemas and policies as well, but that is for another time.

     

    Categories: DBA Blogs

    Log Buffer #464: A Carnival of the Vanities for DBAs

    Pythian Group - Thu, 2016-03-10 13:42

    This Log Buffer Edition delves deep into the realms of Oracle, SQL Server and MySQL while gathering up some nifty blog posts for this week.


    Oracle:

    Speed, Security, and Best Practices in the Cloud: Oracle Releases Market-Leading Retail Demand Forecasting Solution

    OBIEE 12c – Your Answers After Upgrading

    Using the SQL ACCESS Advisor PL/SQL interface

    How has JD Edwards EnterpriseOne 9.2 Transformed your Business?

    This article looks at an example of configuring Fast Start Failover (FSFO).

    SQL Server:

    How to show Quarters Dynamically in SQL

    Azure SQL Data Warehouse is a fully-managed and scalable cloud service. It is still in preview, but solid.

    The occasional problems that you can get with POST and GET are typical of the difficulties of separating any command and query operations.

    4 Convenient Ways To Run PowerShell Scripts

    10 New Features Worth Exploring in SQL Server 2016

    MySQL:

    Maintaining mission critical databases on our pitchfork wielding brother, the “Daemon” of FreeBSD, seems quite daunting, or even absurd, from the perspective of a die-hard Linux expert, or from someone who has not touched it in a long time.

    Planets9s: Sign up for our best practices webinar on how to upgrade to MySQL 5.7

    Using jemalloc heap profiling with MySQL

    Sometimes a Variety of Databases is THE Database You Need

    Taking the new MySQL 5.7 JSON features for a test drive

    Categories: DBA Blogs

    Webcast Q&A: Next-Generation Accounts Payable Automation and Dynamic Discounting

    WebCenter Team - Thu, 2016-03-10 13:12

    Next-Generation Accounts Payable Automation and Dynamic Discounting

    We wanted to capture the great Q&A session that occurred during the Next-Generation Accounts Payable Automation and Dynamic Discounting webcast! If you missed the live webcast, you can view the on demand webcast here.

    Q: A lot of your competitors claim they can provide 80% automation. How is your offering different?
    A: What we provide is measurable automation - this is what most of our customers are getting. The automation we talk about is end-to-end process automation, not just automation of a portion of the process. When our competitors talk about 80% automation, they are talking about what you could potentially get with OCR. They provide really poor integration with your ERP system, and that is where the real problem is. That is the traditional approach where, after OCR, about 82% of invoices end up with exceptions in your ERP system, so your AP personnel have to manually resolve those invoices one by one. Our next-generation approach provides end-to-end automation: not only do we provide best-in-class OCR, but we have cracked the code on how we integrate in real time with your ERP systems and provide exception-free creation of invoices and 2-way and 3-way matching.

    Q: Can your cloud offering integrate with our on-premise ERP systems?
    A: We have Oracle E-Business Suite and JD Edwards integrations. Yes, our cloud offering can integrate with your on-premise or cloud ERP systems. A lot of our customers have different ERP systems; we can integrate with multiple ERP systems seamlessly and provide real-time integration and a unified Application, Workflow and Analytics layer across all of them.

    Q: How is this different from Fusion AP? And Fusion Automated Invoice Processing?
    A: Fusion AP and Automated Invoice Processing use the traditional approach:
    1. There is almost no control over the OCR engine that is provided.
    2. Unvalidated data is passed over to Fusion AP, where all exceptions have to be handled manually one by one.
    3. 2-way matching is only partially automated.
    4. 3-way matching is not automated at all.
    5. Workflow capabilities are almost non-existent, with very little ability to do re-assignment or routing.
    6. Work-queue capabilities are almost non-existent.

    Q: How is your 2-way and 3-way matching different from competition?
    A: There are vendors who claim they do automated 2-way and 3-way matching; however, they handle only a small percentage of the use-cases. For example, for 2-way matching, invoices that need to be matched against blanket POs are not properly handled; for 3-way matching, cases where receipts arrive after the invoices come in are not handled. These are just a few examples. Inspyrus provides a complete solution that handles all such use-cases, tried and tested with customers across a lot of verticals.

    Q: We receive invoices via mail, email and EDI. Can your offering provide us a consistent process for all these?
    A: Yes. Irrespective of how you receive your invoices, we provide a consistent Application, Workflow and Analytics for all of these.

    Q: We have Oracle E-Business Suite and SAP for our ERP systems. Will your solution integrate with both our ERP systems?
    A: Yes, our solution comes pre-integrated with Oracle E-Business Suite and SAP, and if you have both ERP systems, a single instance of our application will be integrated with both.

    Q: Is the solution set specific to Oracle's eBusiness suite or can this solution bolt on to something like MS Dynamics to replace the AP transactions processing?
    A: The solution is available for most major ERP systems including MS Dynamics. Also available for SAP, PeopleSoft & JD Edwards.

    Q: 100% of our invoices are coded to a Project/Task/Expenditure Type in E Business Suite. Does this support full coding validation against Project related invoices?
    A: Yes, it does.

    Q: How does this solution compare to BPM?
    A: BPM is a technology. What we are presenting here is a specialized pre-built solution that is based on (leverages) Oracle’s BPM technology, along with Imaging, Content Management, OCR/OFR and SOA integration.

    Q: What is OCR?
    A: OCR stands for Optical Character Recognition. It allows characters to be extracted from an image; for an invoice, it allows us to automatically extract header and line-level information.

    Q: Would this solution work if we have a mix of invoices where some are match to po and some are match to receipt?
    A: Yes, that is very common.

    Q: How is this different from iSupplier? Can we use this instead of iSupplier?
    A: If you are using iSupplier, I wouldn't suggest replacing it. If you are not, this would be a good alternative.

    Q: What kind of validations happens when it hits the unified workflow?
    A: Whatever is required for the successful creation of the invoice in the ERP system. Basically, validation against every setup, rule, config of the ERP system.

    Q: Will this work if I have many suppliers with different invoices formats?
    A: Yes - The solution leverages pattern-based recognition rather than relying on invoice templates.

    Q: Supplier Enablement - is that integrated with the ERP systems? And is it integrated with your invoice automation?
    A: Yes, it is. That is a clear differentiator. Invoice Automation, Supplier Enablement and Dynamic Discounting are part of the same suite of applications.

    Q: Do you have the capability of electronic signatures on invoices?
    A: Yes.

    Q: Would we need to configure our matching rules within the tool?
    A: No, we use the matching rules that are set up in your ERP system.

    Q: How do you automate GL coding? How do you onboard customers for dynamic discounting?
    A: We can use specific rules to automate GL coding, e.g. keying off a particular vendor or the invoice line description and always coding the invoice to a specific account. Suppliers are onboarded for dynamic discounting using a specialized Dynamic Discounting program, which consists of identifying the suppliers that have the highest propensity to provide you discounts and targeting them. The onboarding is done through an outreach program.

    Q: What's involved to get to automated GL coding?
    A: If there are specific business rules that you can tell us to automate GL coding - say for a particular vendor or for certain descriptions on invoice lines, we can automate GL coding.

    Q: Is the integration to the ERP systems unidirectional or bidirectional?
    A: Our integration is real-time with the ERP system. We don't need to store any ERP information in our system. We do bring in information about payments back into our system - thus making it bidirectional.

    Q: Is complex approval rules able to be used with this application?
    A: Yes, we can handle all kinds of complex approval rules.

    Q: Does it work with a third-party OCR solution?
    A: It could work with a third-party OCR engine if that OCR is able to send out a structured document (e.g. XML) after recognition.

    Q: Can vendors integrate into this solution as well, e.g. by submitting invoices via EDI to the cloud offering (instead of emailing the customer, who then uploads them into the AP solution)?
    A: Absolutely. They can send EDI invoices to our cloud.

    Q: Will the 3 way match verify the Project number, from the Oracle Projects module?
    A: Yes, it can.

    Q: Can we self-host?
    A: Yes, this can be hosted in your data center.

    Q: Why would you pay an invoice prior to approval?
    A: The workflow/validation process will ensure that the invoice is approved before it is submitted for payment.

    Q: Is Oracle SOA required for downstream integration to other ERP, including SAP, etc?
    A: Oracle SOA comes embedded in this solution. That is the integration framework we use to integrate with all the ERP systems.

    Q: Do you offer a French user interface? Also, do you host in Canada?
    A: Yes, the interface is available in French. Our hosting partner, Rackspace, offers hosting options in Canada.

    Q: Do you have the capability for invoices to be signed off electronically by an authorized signer?
    A: Yes, all approvals are electronic.

    Q: Is one of the 24 languages covered by OCR Chinese?
    A: Simplified Chinese - yes

    Q: Do you offer e-payment?
    A: Payments are generally done as part of your regular payment process. We do not provide any capability for that.

    Q: Do suppliers submit invoices directly to Inspyrus or EBS?
    A: They can do that via email or send EDI invoices.

    Q: Will it integrate with an ancient PO/RO system that is not Oracle?
    A: Yes, we have the ability to integrate with 3rd party PO systems.

    Q: Can you briefly explain how this is based on Oracle webcenter? We have WebCenter right now and we want to know how we can utilize it.
    A: Yes, it is built on Oracle WebCenter. You can reach out to Inspyrus for more information. www.inspyrus.com

    Q: After the OCR data extraction, if there are any errors/mistakes, how are they corrected before pushing into the ERP?
    A: Inspyrus provides a unified application where these are corrected - as part of the workflow.

    Q: You replied that all your approvals are electronic - can they be visible like a digital signature in pdf?
    A: We do not touch the pdf - for compliance reasons. The electronic approvals are available as a separate document tied to the invoice record.

    Q: What criteria/invoices should satisfy for Automatic GL coding proper work?
    A: If there are specific business rules that you can tell us to automate GL coding - say for a particular vendor or for certain descriptions on invoice lines, we can automate GL coding.
