Feed aggregator

History of a cell

Tom Kyte - Wed, 2017-05-17 19:46
Hello, the purpose of my task is to track the modifications on a table. More precisely I want to know if a cell has been updated or a new row inserted. I know I can use an audit table to do that but it is not convenient because with an audit table...
Categories: DBA Blogs

how indexes handled by reference partitioning

Tom Kyte - Wed, 2017-05-17 19:46
when we have parent child tables partitioned by reference something like <code> CREATE TABLE orders ( order_id NUMBER(12), order_date date, order_mode VARCHAR2(8), customer_id NUMBER(6), ...
Categories: DBA Blogs

filesystem_like_logging is not in ddl of table

Tom Kyte - Wed, 2017-05-17 19:46
Hi, A table with blob column and filesystem_like_logging feature is created using following script: <code>create table contracts_sec_fs ( contract_id number(12), contract_name varchar2(80), file_size numb...
Categories: DBA Blogs

SOA Suite Security with Inbound Web Services

Anthony Shorten - Wed, 2017-05-17 19:06

With the introduction of Inbound Web Services the integration between these services and Oracle SOA Suite now has a few more options in terms of security.

  • It is possible to specify the WS-Policy to use to secure the transport and message sent to the product web service on the SOA Composite. The product supports more than one WS-Policy per service and any composite must conform to one of those policies.
  • As with older versions of the product and SOA Suite, you can specify the csf-key within the domain itself. This key holds the credentials of the interface in meta-data so that it avoids hardcoding the credentials in each call. This also means you can manage credentials from the console independently of the composite. In the latest releases it is possible to specify the csf-map as well (in past releases you had to use oracle.wsm.security as the map).

The configuration process is as follows:

  • Using Oracle Fusion Middleware control, select the Oracle SOA Suite domain (usually soa_domain) and add the credentials (and map) to the domain. The credentials can be shared across composites, or you can choose to set up multiple credentials (one for each interface, for example). In the example below, the map is the default oracle.wsm.security map and the key is ouaf.key (just for the example):

Example Key and Map

  • Now the credentials and the WS-Policies need to be specified on the composite within Oracle SOA Suite. This can be done within SOA Composer or Oracle JDeveloper. Below is an Oracle JDeveloper example, where you link the WS-Policies using Configure SOA WS Policies at the project level in Oracle JDeveloper for each external reference. For example:

Configure SOA WS Policies

  • You then select the policy you want to use for the call. Remember you only use one of the policies you have configured on the Inbound Web Service. If you have a custom policy, that must be deployed to the Oracle SOA Suite and your Oracle JDeveloper instance to be valid for your composite. For example a list of policies is displayed and you select one:

Example Policy Selection

  • Edit the Policy to specify additional information. For example:

Editing Policy

  • At this point, specify which csf-map and csf-key you want to use for the call in the Override Value. In the example below the csf-key is specified. For example:

Example Key specification

The security has now been set up for the composite. You have indicated the credentials (which can be managed from the console), and the policy to use can be attached to the composite to ensure that your security specification is enforced.

Depending on the WS-Policy you choose to use, there may be additional transport and message protection settings you will need to specify (for example if you use policy specific encryption, outside the transport layer, you may need to specify the encryption parameters for the message). For full details of Oracle SOA Suite facilities, refer to the Oracle SOA Suite documentation.

Windows Oracle Services Using PowerShell

Michael Dinh - Wed, 2017-05-17 17:39

Lately, I have been getting my feet wet with Windows.
I know the GUI can be used, but GUI steps are not easy to reproduce.

Here is how to find stopped Oracle Windows services and start them.

And if you want to use the GUI, run Services.msc from the command line.

Windows PowerShell
Copyright (C) 2012 Microsoft Corporation. All rights reserved.

PS C:\Users\oracle> hostname
minions

PS C:\Users\oracle> Get-Service -Name *oracle* | Where Status -eq "Stopped" | Format-List
Name                : Oracleagent12c1Agent
DisplayName         : Oracleagent12c1Agent
Status              : Stopped
DependentServices   : {}
ServicesDependedOn  : {}
CanPauseAndContinue : False
CanShutdown         : False
CanStop             : False
ServiceType         : Win32OwnProcess

Name                : OracleOraHome1ClrAgent
DisplayName         : OracleOraHome1ClrAgent
Status              : Stopped
DependentServices   : {}
ServicesDependedOn  : {}
CanPauseAndContinue : False
CanShutdown         : False
CanStop             : False
ServiceType         : Win32OwnProcess

Name                : OracleRemExecServiceV2
DisplayName         : OracleRemExecServiceV2
Status              : Stopped
DependentServices   : {}
ServicesDependedOn  : {}
CanPauseAndContinue : False
CanShutdown         : False
CanStop             : False
ServiceType         : Win32OwnProcess

PS C:\Users\oracle> Start-Service -name Oracleagent12c1Agent
WARNING: Waiting for service 'Oracleagent12c1Agent (Oracleagent12c1Agent)' to start...
WARNING: Waiting for service 'Oracleagent12c1Agent (Oracleagent12c1Agent)' to start...
WARNING: Waiting for service 'Oracleagent12c1Agent (Oracleagent12c1Agent)' to start...
WARNING: Waiting for service 'Oracleagent12c1Agent (Oracleagent12c1Agent)' to start...

PS C:\Users\oracle> Get-Service -name Oracleagent12c1Agent
Status   Name               DisplayName
------   ----               -----------
Start... Oracleagent12c1... Oracleagent12c1Agent

PS C:\Users\oracle> Get-Service -name Oracleagent12c1Agent
Status   Name               DisplayName
------   ----               -----------
Running  Oracleagent12c1... Oracleagent12c1Agent

PS C:\Users\oracle> Get-Service -Name *oracle* | Where Status -eq "Stopped" | Format-List
Name                : OracleOraHome1ClrAgent
DisplayName         : OracleOraHome1ClrAgent
Status              : Stopped
DependentServices   : {}
ServicesDependedOn  : {}
CanPauseAndContinue : False
CanShutdown         : False
CanStop             : False
ServiceType         : Win32OwnProcess

Name                : OracleRemExecServiceV2
DisplayName         : OracleRemExecServiceV2
Status              : Stopped
DependentServices   : {}
ServicesDependedOn  : {}
CanPauseAndContinue : False
CanShutdown         : False
CanStop             : False
ServiceType         : Win32OwnProcess

PS C:\Users\oracle> C:\app\oracle\product\agent12c\core\12.1.0.3.0\bin\emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 3
Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version     : 12.1.0.3.0
OMS Version       : 12.1.0.3.0
Protocol Version  : 12.1.0.1.0
Agent Home        : C:/app/oracle/product/agent12c/agent/core/12.1.0.3.0
Agent Binaries    : c:\app\oracle\product\agent12c\core\12.1.0.3.0
Agent Process ID  : 538684
Parent Process ID : 536640
Agent URL         : https://minions.local:1830/emd/main/
Repository URL    : https://cloud.local:4903/empbs/upload
Started at        : 2017-05-17 13:30:23
Started by user   : minions$
Last Reload       : (none)
Last successful upload                       : 2017-05-17 13:31:00
Last attempted upload                        : 2017-05-17 13:31:00
Total Megabytes of XML files uploaded so far : 0.03
Number of XML files pending upload           : 1
Size of XML files pending upload(MB)         : 0
Available disk space on upload filesystem    : 68.70%
Collection Status                            : Collections enabled
Heartbeat Status                             : Ok
Last attempted heartbeat to OMS              : 2017-05-17 13:30:41
Last successful heartbeat to OMS             : 2017-05-17 13:30:41
Next scheduled heartbeat to OMS              : 2017-05-17 13:31:41

---------------------------------------------------------------
Agent is Running and Ready

PS C:\Users\oracle> Services.msc

Webcast: "Migrating and Managing Customizations for Oracle E-Business Suite 12.2"

Steven Chan - Wed, 2017-05-17 15:23

Oracle University has a wealth of free recorded webcasts for Oracle E-Business Suite.  If you're looking for a primer on ensuring that your customizations work when you upgrade to EBS 12.2, see:

Have you created custom schemas, personalized or extended your Oracle E-Business Suite environment? Santiago Bastidas, Senior Principal Product Manager, discusses how to select the best upgrade approach for existing customizations. This session will help you understand the new customization standards required by the Edition-Based Redefinition feature of Oracle Database to be compliant with the Online Patching feature of Oracle E-Business Suite. You’ll learn about customization use cases, tools, and technologies you can use to ensure that all your customizations are preserved during and after the upgrade. You’ll also hear about reports you can run before the upgrade to detect and fix your customizations to make them 12.2-compliant. This material was presented at Oracle OpenWorld 2016. 

 
Categories: APPS Blogs

Smart Database Architecture for Software Development

Gerger Consulting - Wed, 2017-05-17 13:00

We are incredibly excited to announce that the amazing Toon Koppelaars from Oracle Real World Performance Team is hosting our next webinar: Smart Database Architecture for Software Development. Register at this link.





About the Webinar

Is the database a processing engine or a persistence layer? In this presentation we'll first go through a bit of history demonstrating how the database has been used in the past 30 years: at times it was a processing engine, and at other times it was just a persistence layer. 

Having witnessed many application development projects, we are convinced that the database ought to be used as a processing engine. The persistence layer approach, where all business logic is implemented outside the database has serious drawbacks in the areas of initial application development, ongoing maintenance, and most notably in the area of performance and scalability. 

We'll discuss these drawbacks, in particular the last one: we'll debunk once and for all the claim that moving business logic out of the database benefits performance and scalability.



About the Presenter

Toon has been part of the Oracle eco-system since 1987. He is currently a member of Oracle's Real World Performance Team. The RWP-team troubleshoots application performance issues in and around the Oracle Database. The way applications currently use (or rather, abuse) the DBMS, is often at the root of these performance issues. Prior to joining the RWP team, Toon has been mainly involved in database application development. His special interests are: architecting applications for performance and scalability, database design, and business rules / constraints modeling. He is a long-time champion of using the database in a smart way, i.e. using the database as a processing engine.

Registration is free but space is limited.


Categories: Development

Delivery to Oracle Document Cloud Services (ODCS) Like A Boss

Tim Dexter - Wed, 2017-05-17 11:53

We have moved to a new blogging platform. This was a post from Pradeep that missed the cutover ...

In release 12.2.1.1, BI Publisher added a new feature - Delivery to Oracle Document Cloud Services (ODCS). Around the same time, BI Publisher was also certified against JCS 12.2.1.x and therefore, today if you have hosted your BI Publisher instance on JCS then we recommend Oracle Document Cloud Services as the delivery channel. Several reasons for this:

  1. Easy to configure and manage ODCS in BI Publisher on Oracle Public Cloud. No port or firewall issues.

  2. ODCS offers a scalable, robust and secure document storage solution on cloud.

  3. ODCS offers document versioning and document metadata support, similar to any content management server.

  4. Supports all business document file formats relevant for BI Publisher.

When to use ODCS?

ODCS can be used in any scenario where a document needs to be stored securely on a server and retained for any duration. Such scenarios include:

  • Bursting documents to multiple customers at the same time.

    • Invoices to customers

    • HR Payroll reports to its employees

    • Financial Statements

  • Storing large or extremely large reports for offline printing

    • End of the Month/Year Statements for Financial Institutions

    • Consolidated department reports

    • Batch reports for Operational data

  • Regulatory Data Archival

    • Generating PDF/A-1b or PDF/A-2 format documents

How to Configure ODCS in BI Publisher?

Configuration of ODCS in BI Publisher requires the URI, username, and password. The username is expected to have access to the folder where the files are to be delivered.


 

How to Schedule and Deliver to ODCS?

Delivery to ODCS can be managed through both a Normal Scheduled Job and a Bursting Job.

A Normal Scheduled Job allows the end user to select a folder from a list of values, as shown below.

In the case of a Bursting Job, the ODCS delivery information is provided in the bursting query, as shown below:

Accessing Document in ODCS

Once the documents are delivered to ODCS, users can access them based on their access to the folder, much like FTP or WebDAV access.

That's all for now. Stay tuned for more updates!

 

Categories: BI & Warehousing

Oracle Database standard Geo Location Support using Locator (included in every edition!)

Amis Blog - Wed, 2017-05-17 08:59

Many databases have native support for locations and geodata – and for determining distances and closest locations (within a certain radius). Oracle Database has the [Graph and] Spatial Option – which supports even the most advanced and exotic forms of location-related data querying (including multidimensional shapes and probably relativistic effects); this option comes on top of Enterprise Edition and carries additional costs. What may not be as well known is the Locator functionality that is part of every edition of the Oracle Database – including XE and SE, without any additional costs – with the geo support found in most databases. In this article I will give a very brief introduction to what this Locator feature can be used for.

For extensive documentation on Locator, see: Oracle Database 12c Documentation – Locator (and http://docs.oracle.com/cd/E11882_01/appdev.112/e11830/sdo_locator.htm#SPATL340 for Oracle Database 11g).

I will assume the legacy data model of DEPT and EMP (download DDL script for creating SCOTT’s database schema objects: scott_build.sql).

 

1. Prepare a table for Geo Spatial Data

-- add geospatial data for departments (longitude, latitude)

alter table dept
add (geo_location SDO_GEOMETRY)

SDO_GEOMETRY is an object type that describes and supports any type of geometry. Examples of SDO_GTYPE values include 2001 for a two-dimensional point. The SRID value 8307 is associated with the widely used WGS84 longitude/latitude coordinate system.

 

2. Add geo information to records in table

Now that a column has been added to hold the SDO_GEOMETRY object, we can start loading location data into the table.

update dept
set    geo_location = SDO_GEOMETRY(2001, 8307,SDO_POINT_TYPE (-96.8005, 32.7801,NULL), NULL, NULL)
where  loc = 'DALLAS'

update dept
set    geo_location = SDO_GEOMETRY(2001, 8307,SDO_POINT_TYPE (-73.935242, 40.730610,NULL), NULL, NULL)
where  loc = 'NEW YORK'

update dept
set    geo_location = SDO_GEOMETRY(2001, 8307,SDO_POINT_TYPE ( -71.0598, 42.3584,NULL), NULL, NULL)
where  loc = 'BOSTON'

update dept
set    geo_location = SDO_GEOMETRY(2001, 8307,SDO_POINT_TYPE (-87.6298, 41.8781,NULL), NULL, NULL)
where  loc = 'CHICAGO'

 

3. Prepare meta data in USER_SDO_GEOM_METADATA

For each spatial column (type SDO_GEOMETRY), you must insert an appropriate row into the USER_SDO_GEOM_METADATA view to reflect the dimensional information for the area in which the data is located. You must do this before creating spatial indexes.

-- The USER_SDO_GEOM_METADATA view has the following definition:
-- (   TABLE_NAME   VARCHAR2(32),
--  COLUMN_NAME  VARCHAR2(32),
--  DIMINFO      SDO_DIM_ARRAY,
--  SRID         NUMBER
--);

-- insert dimensional information for the  spatial column
-- the dimensional range is the entire Earth, and the coordinate system is the widely used WGS84 (longitude/latitude) system (spatial reference ID = 8307)

INSERT INTO USER_SDO_GEOM_METADATA 
(TABLE_NAME, COLUMN_NAME, DIMINFO, SRID) 
VALUES ('DEPT', 'GEO_LOCATION', 
   SDO_DIM_ARRAY 
     (SDO_DIM_ELEMENT('LONG', -180.0, 180.0, 0.5), 
     SDO_DIM_ELEMENT('LAT', -90.0, 90.0, 0.5)), 
   8307);
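
As a quick illustration of what the DIMINFO bounds mean, here is a small Python sketch (not part of the original post; coordinates copied from the UPDATE statements above) checking that every loaded point falls inside the declared dimensional range:

```python
# Bounds from the SDO_DIM_ARRAY above: LONG -180..180, LAT -90..90 (tolerance 0.5)
DIMINFO = {"LONG": (-180.0, 180.0), "LAT": (-90.0, 90.0)}

# (longitude, latitude) values as loaded into DEPT.GEO_LOCATION
dept_locations = {
    "DALLAS":   (-96.8005, 32.7801),
    "NEW YORK": (-73.935242, 40.730610),
    "BOSTON":   (-71.0598, 42.3584),
    "CHICAGO":  (-87.6298, 41.8781),
}

def in_bounds(lon, lat):
    # a point is acceptable if both ordinates lie within the declared dimension ranges
    return (DIMINFO["LONG"][0] <= lon <= DIMINFO["LONG"][1]
            and DIMINFO["LAT"][0] <= lat <= DIMINFO["LAT"][1])

assert all(in_bounds(lon, lat) for lon, lat in dept_locations.values())
```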

 

4. Create the Geo Spatial Index

Create an index on the geo_location column that holds the SDO_GEOMETRY object:

CREATE INDEX dept_spatial_idx 
ON dept(geo_location)
INDEXTYPE IS mdsys.spatial_index;


 

5. Start querying with Location based conditions

List all departments, ordered by their distance from Washington DC:

SELECT d.loc
,      SDO_GEOM.SDO_DISTANCE
       ( SDO_GEOMETRY(2001, 8307,SDO_POINT_TYPE ( -77.0364, 38.8951,NULL), NULL, NULL) /* Washington DC */
       , d.geo_location
       , 0.005
       , 'unit=KM'
       ) "distance from Washington"
from   dept d
order 
by     2

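
As a rough sanity check on the distances the query above returns (done entirely outside the database), here is a haversine sketch in Python using the coordinates from the UPDATE statements; the 6371 km mean Earth radius is an assumption, so small differences from SDO_DISTANCE's geodetic result are expected:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lon1, lat1, lon2, lat2):
    # great-circle distance in km, assuming a spherical Earth of radius 6371 km
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

# Washington DC against the NEW YORK department loaded earlier: ~334 km
print(round(haversine_km(-77.0364, 38.8951, -73.935242, 40.730610)))
```

The result should land close to what SDO_GEOM.SDO_DISTANCE reports with 'unit=KM', though not identically, since Locator uses a proper geodetic computation.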
We can find all departments within 500 km of Washington DC, with the distance for each department returned in km in the distance column:

with d as
( SELECT d.loc
  ,      SDO_GEOM.SDO_DISTANCE
         ( SDO_GEOMETRY(2001, 8307,SDO_POINT_TYPE ( -77.0364, 38.8951,NULL), NULL, NULL)
         , d.geo_location
         , 0.005
         , 'unit=KM'
         ) distance
  from   dept d
  order 
  by     2
)
select d.*
from   d
where  d.distance < 500


Find the two closest neighbouring departments to NEW YORK:

 

SELECT /*+ LEADING(d) INDEX(dn dept_spatial_idx)  */ 
       d.deptno,
       d.loc,
       dn.deptno neighbour_deptno,
       dn.loc neighbour_loc,
       sdo_nn_distance (1) distance
FROM   dept d
       cross join
       dept dn
WHERE  d.deptno = 10 /* NEW YORK */
and    dn.deptno !=  10
AND    sdo_nn /* is dn in set of 3 closest neighbours to d */
       (dn.geo_location, d.geo_location, 'sdo_num_res=3', 1) = 'TRUE'
ORDER 
BY    distance;

(Note: the hint in the first line is not required on Oracle Database 12c, but it is on 11g – see forum thread.) Here are examples of the use of the SDO_NN operator.
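
The same nearest-neighbour question can be sketched outside the database in plain Python, again with an assumed spherical haversine distance and the coordinates from the UPDATE statements above (so the ordering, rather than the exact metres, is what should match SDO_NN):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lon1, lat1, lon2, lat2):
    # great-circle distance in km, assuming a spherical Earth of radius 6371 km
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

# (longitude, latitude) per department, as loaded into DEPT.GEO_LOCATION
depts = {
    "NEW YORK": (-73.935242, 40.730610),
    "DALLAS":   (-96.8005, 32.7801),
    "BOSTON":   (-71.0598, 42.3584),
    "CHICAGO":  (-87.6298, 41.8781),
}

home = depts["NEW YORK"]
neighbours = sorted(
    (loc for loc in depts if loc != "NEW YORK"),
    key=lambda loc: haversine_km(*home, *depts[loc]),
)
print(neighbours[:2])  # the two closest departments to NEW YORK
```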

 


Distance matrix, using pivot:

 

with distances as
(SELECT /*+ LEADING(d) INDEX(dn dept_spatial_idx)  */ 
       d.deptno,
       d.loc,
       dn.deptno neighbour_deptno,
       dn.loc neighbour_loc,
       trunc(sdo_nn_distance (1)) distance
FROM   dept d
       cross join
       dept dn
WHERE  sdo_nn /* is dn in set of 3 closest neighbours to d */
       (dn.geo_location, d.geo_location, 'sdo_num_res=3 unit=km', 1) = 'TRUE'
)
SELECT *
FROM   (SELECT loc, neighbour_loc, distance distance
        FROM   distances)
PIVOT  ( max(distance) AS distance 
         FOR (neighbour_loc) 
         IN ('NEW YORK' AS NEWYORK, 'BOSTON' AS BOSTON, 'CHICAGO' AS CHICAGO, 'DALLAS' as DALLAS)
        );

 

 


The post Oracle Database standard Geo Location Support using Locator (included in every edition!) appeared first on AMIS Oracle and Java Blog.

Unboxing the Future of Fishbowl’s On-premise Enterprise Search Offering: Mindbreeze InSpire

Back on April 3rd, Fishbowl announced that we had formed a partner relationship with Mindbreeze to bring their industry leading enterprise search solutions to Fishbowl customers. We will offer their Mindbreeze InSpire search appliance to customers looking for an on-premise solution to search internal file shares, databases, document management systems and other enterprise repositories.

Since that announcement, we have been busy learning more about Mindbreeze InSpire, including sending some members of our development team to partner technical training in Linz, Austria. It also includes procuring our own InSpire search appliance so that we can begin development of connectors for Oracle WebCenter Content and PTC Windchill. We will also begin using InSpire as the search system for our internal content.

Fishbowl's Mindbreeze InSpire appliance arrived last week, and we wanted to share a few pics of the unboxing and racking process. We are very excited about the value that Mindbreeze InSpire will bring to customers, including reducing the time spent searching for – and in many cases not finding – high-value information. Consider these stats:

  • 25% of employees' time is spent looking for information – AIIM
  • 50% of people need to search 5 or more sources – AIIM
  • 38% of time is spent unsuccessfully searching and recreating content – IDC

Stay tuned for more information on Fishbowl’s software and services for Mindbreeze InSpire. Demos of the system are available today, so contact us below or leave a comment here if you would like to see it in action.

 

 

The post Unboxing the Future of Fishbowl’s On-premise Enterprise Search Offering: Mindbreeze InSpire appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

How to Configure Microsoft IIS with Oracle WebCenter

I was setting up the Oracle WebCenter 12c Suite in a local development environment on a Windows Server 2012 R2 operating system with Microsoft SQL Server. Instead of using an OHS (Oracle HTTP Server), I wanted to try using Microsoft IIS (Internet Information Services) to handle the forwarding of sub-sites to the specified ports.  Since the Oracle applications run on specific ports (e.g. 16200 for Content Server), when a user requests the domain on the default ports (80 and 443) in a browser, it won't reach the content server – for example: www.mydomain.com/cs vs. www.mydomain.com:16200/cs. The reason I chose IIS is that it is a feature already built into Windows Server, and thus one less application to manage.

That being said, IIS and OHS perform in the same manner but are set up and configured differently based on requirements.  Oracle provides documentation about using the Oracle plug-in for Microsoft IIS, but the content on the Oracle site is pretty outdated.  The page first references IIS 6.0, which was released with Windows Server 2003 in April 2003 and reached end of support on July 14th, 2015. Lower on the page, they show steps for IIS on Windows Server 2012 R2, which got me started.  In the next part of this post, I will review the steps I took to get all functionality working, as well as the limitations/flaws I encountered.

Step 1: Install IIS on the Server

The first part was to install IIS on the server.  In Server 2012, open the Server Manager and select Add Roles and Features.  From there select the option to add the IIS components.

Step 2: Select Default Web Site

Once IIS has been installed, open it and select the Default Web Site.  If you right-click and select Edit Bindings, you can see the default site is bound to port 80, which is what we want since port 80 is the default port for all web applications.

Step 3: Select Application Pools

Following the instructions from Oracle, download the plug-in and place it in a folder close to the root level of the desired drive.  For this blog, I have it in C:\IISProxy\.  For each server (Content Server, Portal, etc.) you need to perform configurations in IIS.  Open IIS and navigate to the Application Pools section.  Select Add Application Pool and create a pool with a specific name for each server.  There need to be separate application pools for specific port forwarding to work correctly.

Step 4: Configure Properties

Once created, open Windows Explorer and create a folder inside IISProxy called "CS."  Copy all the plug-in files into the CS folder.  Now open the iisproxy.ini file and configure the properties to match your environment.  Make sure to set the Debug parameter appropriately for your environment.
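
For reference, here is a minimal iisproxy.ini sketch for the content server application. The host, port, and log path are placeholder values for this example, and the parameter names assume the standard WebLogic proxy plug-in settings; check your plug-in version's documentation for the exact set it supports:

```ini
# Hypothetical values - point host/port at your Content Server managed server
WebLogicHost=localhost
WebLogicPort=16200
WlForwardPath=/cs
# Enable Debug only while troubleshooting; the log location is an example
Debug=OFF
WLLogFile=C:\IISProxy\CS\iisproxy.log
```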

Step 5: Select the Created Application Pool

Open IIS and select the Default Web Site option.  Right-click and select Add Application.  Add the Alias name and select the Application Pool created above.  Set the physical path to the folder created above and make sure the connection is setup for pass-through authentication.

Step 6: Set Up Handler Mappings

Once OK has been selected, the application should now be displayed in the tree on the left.  The next step is to set up handler mappings for how IIS will handle incoming requests.  Click on the "cs" application you just created; on the main display there should be a Handler Mappings icon.  Double-click the icon.  This is where we will set up the routing of static files vs. content server requests.  On the right side, click the "Add Script Map" icon.  Add the request path of "*" and add the folder path to the iisproxy.dll.  Open the request restrictions and verify the "Invoke handler…" checkbox is unchecked.  Open the access tab and select the Script radio button.  Click OK and verify the mapping has been applied.


Step 7: Map Static Files

Next, we will set up the mapping for static files.  Click "Add Module Mapping".  Add "*" for the request path, "StaticFileModule,DefaultDocumentModule,DirectoryListingModule" for the Module, and give it a name.  Open request restrictions and select the file or folder radio option.  Navigate to the access tab and select the Read radio button.  Click OK and verify the mapping was applied.


Step 8: Verify Mapping Execution

After the mappings have been set up, we need to verify they are executed in the correct order.  Do this by going back to the Handler Mappings screen and clicking "View Ordered List".

Step 9: Restart the IIS Server

After these steps are completed, restart the IIS server.  To do this, open a command prompt as an administrator and type "iisreset".  Once restarted, you should now be able to view the content server on port 80.  If you have other redirects you would like to perform, you can repeat the same steps above with a different name (e.g. Portal, Inbound Refinery, Console, Enterprise Manager, etc.).

With Oracle's tutorial out of date and missing key steps, it was difficult to determine how to set everything up.  After some trial and error and investigation, I hope the 9 steps above help you quickly set up IIS with the WebCenter Suite in a Windows environment so that specific port numbers are not needed.  Obviously, as with any technology decision, application evaluations should take place to determine whether IIS or OHS is a better fit. Good luck, and leave a comment if you have any questions or need further clarification.

The post How to Configure Microsoft IIS with Oracle WebCenter appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Finished reading the Snowflake database documentation

Bobby Durrett's DBA Blog - Tue, 2017-05-16 13:36

I just finished reading the Snowflake database documentation and I thought I would blog about my impressions. I have not spent a lot of time using Snowflake so I cannot draw from experience with the product. But, I read all the online documentation so I think I can summarize some of the major themes that I noticed. Also, I have read a lot of Oracle database documentation in the past so I can compare Oracle's documentation to Snowflake's. Prior to reading the online documentation I also read their journal article, which has a lot of great technical details about the internals. So, I intend to include things from both the article and the documentation.

My observations fall into these major categories:

  • Use of Amazon Web Services instead of dedicated hardware for data warehouse
  • Ease of use for people who are not database administrators
  • Use of S3 for storage with partitioning and columnar storage
  • Lots of documentation focused on loading and unloading data
  • Lots of things about storing and using JSON in the database
  • Limited implementation of standard SQL – no CREATE INDEX
  • Computer science terms – from the development team instead of marketing?
  • Role of database administrator – understand architecture and tuning implications
  • Focus on reporting usage of resources for billing
  • JavaScript or SQL stored functions
  • Down side to parallelism – high consumption of resources
  • High end developers but still a new product
  • Maybe specialized purpose – not general purpose database

First let me say that it is very cool how the Snowflake engineers designed a data warehouse database from scratch using Amazon Web Services (AWS). I have worked on a couple of different generations of Exadata as well as HP’s Neoview data warehouse appliance and a large Oracle RAC based data warehouse so I have experience with big, expensive, on site data warehouse hardware. But Snowflake is all in Amazon’s cloud so you don’t have to shell out a lot of money up front to use it. It makes me think that I should try to come up with some clever use of AWS to get rich and famous. All of the hardware is there waiting for me to exploit it and you can start small and then add hardware as needed. So, to me Snowflake is a very neat example of some smart people making use of the cloud. Here is a pretty good page about Snowflake’s architecture and AWS: url

You would not think that I would be happy about a database product that does not need database administrators, since I have been an Oracle database administrator for over 20 years. But, it is interesting how Snowflake takes some tasks that DBAs would do working with an on site Oracle database and makes them easy enough for a less technical person to do.  There is no software to install because Snowflake is web-based. Creating a database is a matter of pointing and clicking in their easy to use web interface. Non-technical users can spin up a group of virtual machines with enormous CPU and memory capacity in minutes. You do not set up backup and recovery. Snowflake comes with a couple of built-in recovery methods that are automatically available. Also, I think that some of the redundancy built into AWS helps with recovery. So, you don't have Oracle DBA tasks like installing database software, creating databases, choosing hardware, setting up memory settings, or doing RMAN and datapump backups. So, my impression is that they did a good job making Snowflake easier to manage. Here is a document about their built-in backup and recovery: url

Now I get to the first negative about Snowflake. It stores the data in AWS’s S3 storage in small partitions and a columnar data format. I first saw this in the journal article and the documentation reinforced the impression: url1,url2. I’ve used S3 just enough to upload a small file to it and load the data into Snowflake. I think that S3 is AWS’s form of shared filesystem. But, I keep thinking that S3 is too slow for database storage. I’m used to solid state disk storage with 1 millisecond reads and 200 microsecond reads across a SAN network from a storage device with a large cache of high-speed memory. Maybe S3 is faster than I think but I would think that locally attached SSD or SSD over a SAN with a big cache would be faster. Snowflake seems to get around this problem by having SSD and memory caches in their compute nodes. They call clusters of compute nodes warehouses, which I think is confusing terminology. But from the limited query testing I did and from the reading I think that Snowflake gets around S3’s slowness with caching. Caching is great for a read only system. But what about a system with a lot of small transactions? I’ve seen Snowflake do very well with some queries against some large data sets. But, I wonder what the down side is to their use of S3. Also, Snowflake stores the data in columnar format which might not work well for lots of small insert, update, and delete transactions.

I thought it was weird that, out of the relatively small amount of documentation, Snowflake devoted a lot of it to loading and unloading data. I have read a lot of Oracle documentation. I read the 12c concepts manual and several other manuals while studying for my OCP 12c certification. So, I know that compared to Oracle's documentation Snowflake's is small. But I kept seeing one thing after another about how to load data. Here are some pages: url1,url2. I assume that their data load/unload statements are not part of the SQL standard, so maybe they de-emphasized documenting normal SQL constructs and focused on their custom syntax. Also, they can't make any money until their customers get their data loaded, so maybe loading data is a business priority for Snowflake. I've uploaded a small amount of data so I'm a little familiar with how it works. But, generally, the data movement docs are pretty hard to follow. It is kind of weird: the web interface is so nice and easy to use, but the upload and download syntax seems clunky. Maybe that is why they have so much documentation about it?
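From my limited loading experiments, the flow is two steps: put a file into a stage, then copy it from the stage into a table. A toy in-memory Python version of that stage-then-copy flow (the function names here are my own, just to illustrate the shape of it):

```python
import csv
import io

stage = {}  # filename -> raw text, standing in for an internal stage

def put(name, text):
    """Step 1: upload a file to the stage."""
    stage[name] = text

def copy_into(table, name):
    """Step 2: bulk-load the staged file into a table."""
    reader = csv.reader(io.StringIO(stage[name]))
    header = next(reader)
    for row in reader:
        table.append(dict(zip(header, row)))

orders = []
put("orders.csv", "id,amount\n1,10.5\n2,99.0\n")
copy_into(orders, "orders.csv")
print(orders)   # [{'id': '1', 'amount': '10.5'}, {'id': '2', 'amount': '99.0'}]
```

The two-step dance is what makes the docs feel clunky compared to a one-click interface, but it does separate the slow network upload from the fast bulk load.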

Snowflake also seems to have a disproportionate amount of documentation about using JSON in the database. Is this a SQL database or not? I'm sure that there are Oracle manuals about using JSON, and of course there are other databases that combine SQL and JSON, but out of the relatively small Snowflake documentation set a fair amount is devoted to JSON. At least, that is my impression reading through the docs. Maybe they have customers with a lot of JSON data from various web sources, and they want to load it straight into the database instead of extracting information and putting it into normal SQL tables. Here is an example JSON doc page: url
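Snowflake stores JSON in a VARIANT column and lets you query into it with a path syntax, returning NULL where a path is missing. A rough Python equivalent of that path lookup (my own sketch, not Snowflake code):

```python
import json

# Toy documents loaded as-is, the way raw JSON lands in a VARIANT column:
raw_docs = [
    '{"user": "ann", "address": {"city": "London"}}',
    '{"user": "bob", "address": {"city": "Austin"}}',
    '{"user": "cat"}',                        # the path may simply be absent
]

def get_path(doc, *keys):
    """Walk a key path; return None for missing paths, like SQL NULL."""
    for k in keys:
        if not isinstance(doc, dict) or k not in doc:
            return None
        doc = doc[k]
    return doc

cities = [get_path(json.loads(d), "address", "city") for d in raw_docs]
print(cities)   # ['London', 'Austin', None]
```

The appeal is clear: you can query semi-structured data without first designing relational tables for it, which may be exactly why the docs emphasize it.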

Snowflake seems to have based their product on the SQL standard, but they did not fully implement it. For one thing, there is no CREATE INDEX statement. Uggh. The lack of indexes reminds me strongly of Exadata. When we first got on Exadata, they recommended dropping your indexes and using Smart Scans instead. But it isn't hard to build a query on Exadata that runs much faster with a simple index. If you are looking up a single row with a unique key, a standard B-tree index with a sequential, i.e. non-parallel, query is pretty fast. The lack of CREATE INDEX, combined with the use of S3 and the columnar organization of the data, makes me think that Snowflake would not be great for record-at-a-time queries and updates. Of course, an Oracle database excels at record-at-a-time processing, so I can't help thinking that Snowflake won't replace Oracle with its current architecture. Here is a page listing all the things that you can create, not including index: url
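Instead of indexes, Snowflake's docs describe pruning micro-partitions using per-partition min/max metadata. A toy Python sketch of why that helps a scan but is still weaker than a B-tree for single-row lookups:

```python
# Ten sorted partitions of 100 rows each, with (min, max) metadata per
# partition -- a stand-in for micro-partition zone maps.
partitions = [list(range(lo, lo + 100)) for lo in range(0, 1000, 100)]
metadata = [(p[0], p[-1]) for p in partitions]

def lookup(key):
    """Find key, skipping partitions whose range cannot contain it."""
    scanned = 0
    for (lo, hi), part in zip(metadata, partitions):
        if lo <= key <= hi:          # pruning: most partitions never read
            scanned += 1
            if key in part:          # but within a partition it is a scan
                return key, scanned
    return None, scanned

value, partitions_scanned = lookup(437)
print(value, partitions_scanned)   # 437 1 -- one partition read, nine pruned
```

Pruning reads 1 of 10 partitions here, which is good, but it still scans 100 rows inside that partition; a B-tree index would walk straight to the row. That is the gap I suspect record-at-a-time workloads would feel.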

Snowflake sprinkled their article and documentation with some computer science terms. I don't recall seeing these types of things in Oracle's documentation. For example, they have a fair amount of documentation about HyperLogLog. What in the world? HyperLogLog is a fancy algorithm for estimating the number of distinct values in a table without reading every row. I guess Oracle has various algorithms under the covers to estimate cardinality, but they don't spell out the computer science term for it. At least that's my impression. And the point of this blog post is to give my impressions and not to present some rigorous proof through extensive testing. As a reader of Oracle manuals, I just got a different feel from Snowflake's documentation – maybe a little more technical in its presentation than Oracle's. It seems that Snowflake has some very high-end software engineers with a lot of specialized computer science knowledge. Maybe some of that leaks out into the documentation. Another example: their random function makes reference to the name of the underlying algorithm: url. Contrast this with Oracle's doc: url. Oracle just tells you how to use it; Snowflake tells you the algorithm name. Maybe Snowflake wants to impress us with their computer science knowledge?
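For anyone curious, here is a from-scratch Python sketch of the HyperLogLog idea: hash each value, remember the longest run of leading zero bits seen per bucket, and combine the buckets with a harmonic mean. This is the textbook algorithm, not Snowflake's implementation:

```python
import hashlib

def hyperloglog_estimate(values, p=8):
    """Estimate the distinct count of values using 2**p small registers."""
    m = 1 << p
    registers = [0] * m
    for v in values:
        h = int.from_bytes(hashlib.sha1(str(v).encode()).digest()[:8], "big")
        idx = h >> (64 - p)                      # first p bits pick a register
        rest = h & ((1 << (64 - p)) - 1)         # remaining 64 - p bits
        rank = (64 - p) - rest.bit_length() + 1  # position of leftmost 1-bit
        registers[idx] = max(registers[idx], rank)
    alpha = 0.7213 / (1 + 1.079 / m)             # bias correction for m >= 128
    return alpha * m * m / sum(2.0 ** -r for r in registers)

est = hyperloglog_estimate(range(10000))
# Duplicates cannot raise a register's max, so repeats change nothing:
same = hyperloglog_estimate(list(range(10000)) * 3)
```

With 256 one-byte registers the estimate lands within a few percent of the true 10,000 – that tiny memory footprint is the whole point, and why an analytics engine would advertise it.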

Reading the Snowflake docs made me think about the role of a database administrator with Snowflake. Is there a role? Of course, since I have been an Oracle DBA for over 20 years I have a vested interest in keeping my job. But it's not like Oracle is going away. There are a bazillion Oracle systems out there, and if all the new students coming into the work force decide to shy away from Oracle, that leaves more for me to support the rest of my career. But I'm not in love with Oracle, or even SQL databases or database technology. Working with Oracle, and especially in performance tuning, has given me a way to use my computer science background, and Oracle has challenged me to learn new things and solve difficult problems. I could move away from Oracle into other areas where I could use computer science and work on interesting and challenging problems. I can see using my computer science, performance tuning, and technical problem solving skills with Snowflake. Companies need people like myself who understand Oracle internals – or at least who are pursuing an understanding of it. Oracle is proprietary and complicated. Someone outside of Oracle probably cannot know everything about it. It seems that people who understand Snowflake's design may have a role to play. I don't want to get off on a tangent, but I think that people tend to overestimate what Oracle can do automatically. With large amounts of data and challenging requirements you need some human intervention by people who really understand what's going on. I would think that the same would be true with Snowflake. You need people who understand why some queries are slow and how to improve their performance. There are not as many knobs to turn in Snowflake – hardly any, really. But there is clustering: url1,url2,url3. You also get to choose which columns fit into which tables and the order in which you load the data, like you can on any SQL database.
Snowflake exposes execution plans and has execution statistics: url1,url2,url3. So, it seems that Snowflake has taken away a lot of the traditional DBA tasks but my impression is that there is still a role for someone who can dig into the internals and figure out how to make things go faster and help resolve problems.

Money is the thing. There are a lot of money-related features in the Snowflake documentation. You need to know how much money you are spending and how to control your costs. I guess that it is inevitable with a web-based service that you need to have features related to billing. A couple of examples: url1,url2

Snowflake has SQL and JavaScript based user defined functions. These seem more basic than Oracle’s PL/SQL. Here is a link: url

There are some interesting things about limiting the number of parallel queries that can run on a single Snowflake warehouse (compute cluster). I've done a fair amount of work on Oracle data warehouses with people running a bunch of parallel queries against large data sets. Parallelism is great because you can speed up a query by breaking its execution into pieces that the database can run at the same time. But then each user that is running a parallel query can consume more resources than they could running serially. Snowflake has the same issues. They have built-in limits on how many queries can run against a warehouse to keep it from getting overloaded. These remind me of some of the Oracle init parameters related to parallel query execution. Some URLs: url1,url2,url3. In my opinion, parallelism is not a silver bullet. It works great in proofs of concept with a couple of users on your system. But then load up your system with users from all over your company and see how well it runs. Of course, one nice thing about Snowflake is that you can easily increase your CPU and memory capacity as your needs grow. But it isn't free. At some point it becomes worth it to make more efficient queries so that you don't consume so many resources. At least, that's my opinion based on what I've seen on Oracle data warehouses.
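The concurrency limit the docs describe amounts to a counting semaphore: at most N queries run on the warehouse at once and the rest wait. A toy Python model of that behavior (the limit of 4 is arbitrary, not a Snowflake default):

```python
import threading
import time

class Warehouse:
    """Toy model: at most max_concurrency queries run at once; extras wait."""
    def __init__(self, max_concurrency):
        self._slots = threading.Semaphore(max_concurrency)
        self._lock = threading.Lock()
        self.running = 0
        self.peak = 0
    def run_query(self, work_seconds):
        with self._slots:                   # blocks while all slots are busy
            with self._lock:
                self.running += 1
                self.peak = max(self.peak, self.running)
            time.sleep(work_seconds)        # pretend to do query work
            with self._lock:
                self.running -= 1

wh = Warehouse(max_concurrency=4)
queries = [threading.Thread(target=wh.run_query, args=(0.05,)) for _ in range(16)]
for q in queries:
    q.start()
for q in queries:
    q.join()
print(wh.peak)   # never exceeds 4, no matter how many queries arrive
```

The limit protects the cluster from overload, but the queries beyond the limit queue up – which is exactly the multi-user behavior that proofs of concept with two users never show you.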

I’m not sure if I got this information from the article or the documentation or somewhere else. But I think of Snowflake as new. It seems to have some high-end engineers behind it who have worked for several years putting together a system that makes innovative use of AWS. The limited manual set, the technical terms in the docs, the journal article all make me think of a bunch of high-tech people working at a startup. A recent Twitter post said that Snowflake now has 500 customers. Not a lot in Oracle terms. So, Snowflake is new. Like any new product it has room to grow. My manager asked me to look into technical training for Snowflake. They don’t have any. So, that’s why I read the manuals. Plus, I’m just a manual reader by nature.

My impression from all of this reading is that Snowflake has a niche. Oracle tries to make their product all things to all people. It has every feature but the kitchen sink. They have made it bloated with one expensive add-on option after another. Snowflake is leaner and newer. I have no idea how much Snowflake costs, but assuming that it is reasonable I can see it having value if companies use it where it makes sense. But I think it would be a mistake to blindly start using Snowflake for every database need. You probably don't want to build a high-end transactional system on top of it. Not without indexes! But it does seem pretty easy to get a data warehouse set up on Snowflake without all the time-consuming setup of an on-premises data warehouse appliance like Exadata. I think you just need to prepare yourself for missing features and for some things not to work as well as they do on a more mature database like Oracle. Also, with a cloud model you are at the mercy of the vendor. In my experience, employees have more skin in the game than outside vendors. So, you sacrifice some control and some commitment for ease of use. It is a form of outsourcing. But outsourcing is fine if it meets your business needs. You just need to understand the pros and cons of using it.

To wrap up this very long blog post, I hope that I have been clear that I’m just putting out my impressions without a lot of testing to prove that I’m right. This post is trying to document my own thoughts about Snowflake based on the documentation and my experience with Oracle. There is a sense in which no one can say that I am wrong about the things that I have written as long as I present my honest thoughts. I’m sure that a number of things that I have written are wrong in the sense that testing and experience with the product would show that my first impressions from the manuals were wrong. For example, maybe I could build a transactional system and find that Snowflake works better than I thought. But, for now I’ve put out my feeling that it won’t work well and that’s just what I think. So, the post has a lot of opinion without a ton of proof. The links show things that I have observed so they form a type of evidence. But, with Oracle the documentation and reality don’t always match so it probably is the same with Snowflake. Still, I hope this dump of my brain’s thoughts about the Snowflake docs is helpful to someone. I’m happy to discuss this with others and would love any feedback about what I have written in this post.

Bobby

Categories: DBA Blogs

EBS 12.1 April 2017 Technology Stack Recommended Patch Collection Now Available

Steven Chan - Tue, 2017-05-16 11:17

The latest cumulative set of updates to the E-Business Suite 12.1 technology stack foundation utilities is now available in a new April 2017 Recommended Patch Collection (RPC):

Oracle strongly recommends that all E-Business Suite 12.1 users apply this set of updates.

What issues are fixed in this patch?

This cumulative Recommended Patch Collection contains important fixes for issues with the Oracle EBS Application Object Library (FND) libraries that handle password hashing and resets, Forms-related interactions, key flexfields, descriptive flexfields, and more.  

Bugs fixed by this patch include:

  • 10007122 - FRM-41058 ERROR OCCURS WITH CTRL+E KEYS WHEN THE CURSOR FOCUS IS ON THE BUTTON.
  • 10057139 - GSI: QUERY IN FND_GLOBAL CAUSES HIGH CPU/NODE CRASH DUE TO MUTEX WAIT
  • 10078872 - 1OFF:10057139:GSI: QUERY IN FND_GLOBAL CAUSES HIGH CPU/NODE CRASH DUE TO MUTEX W
  • 10098001 - 9828858 FORWARD PORT: CHECK EVENT ALERT  (ALECTC) IS NOT RUNNING
  • 10104874 - CONNECTION LEAKS FROM FNDGFM.JSP
  • 10113913 - STANDARD MANAGERS EXCEEDS THE MAXIMUM NUMBER OF PROCESSES
  • 10116616 - R12.1.2 : FND REQUEST SET COMPLETION STATUS IS NOT CORRECTLY DETERMINED
  • 10131650 - 9664961 FORWARD PORT: BRAZILIAN REQUEST GET ORA-1722 WHEN SCHEDULED PERIODIC REE
  • 10189376 - AUTOMATIC OU PARAMETER DEFAULTING WHILE SUBMITTING REQUEST FROM HTML NOT WORKING
  • 10252312 - SLOW TO SWITCH RESPONSIBILITIES IN 12.1.2 
  • 10301406 - 10105351 FORWARD PORT: ICM KEEPS ON OPENING A LARGE NUMBER OF CURSORS FOR THE SA
  • 10399418 - SUBMITTING REPORT FOR MULTIPLE LANGUAGES FAILS WITH APP-FND-01564
  • 11684796 - 1OFF:12.1.3:VALUE OF SENT DATE FIELD IN EMAIL NOTIFICATIONS INTERMITTENTLY POPUL
  • 11738560 - APPLICATION LISTENER AVAILABILITY IS NOT CHECKED FOR FAILOVER/FAILBACK IN PCP AF
  • 11767687 - THE CONCURRENT PROCESSING REQUEST INSTANCE / NODE AFFINITY OPTION DOES NOT WORK
  • 11767783 - INEFFICIENT SQL EXECUTED BY CRM
  • 11769977 - 11737592 FORWARD PORT: REVIVER.SH IS NOT STARTING THE ICM
  • 12311480 - XML REPORT OUTPUT IS NOT PRINTED WHEN USED IN REQUEST SET WITH 'PRINT TOGETHER' 
  • 12348600 - OAF : RBAC FOR ENG CONCURRENT PROGRAM
  • 12427010 - 12353506: ADCMCTL.SH SCRIPT IS UNABLE TO DETERMINE THE ICM STATUS
  • 12582633 - 11908164: ADAUTOCONFIG FAILLING ON SCRIPT AFCPCTX.SH. REQUEST LOG SHOWS ORA-0650
  • 12628319 - 12367883 FORWARD PORT: REMOTEFILE.TRANSFERFILE PREMATURELY DELETES RECEIVED FILE
  • 12666409 - USER WITH ROLE -COPY A SUBMITTED REQUEST GETTING FRM-41830
  • 12693467 - V$SESSION NOT POPULATING DATA PROPERLY FOR ACTION COLUMN FOR CONCURRENT REQUEST 
  • 12711866 - RO: 10020003 FORWARD PORT: REQUESTS MONITOR NOT USING 'VIEW' FIELD CORRECTLY
  • 12747284 - FNDCPPUR REQUEST FAILED DUE TO SIGNAL 8
  • 12776331 - FND_GSM_UTIL CHANGES FOR UPLOAD_CONTEXT_FILE REGISTER SERVICES
  • 12821441 - JAVA CONCURRENT PROGRAMS DOES NOT ALWAYS WRITE OUTPUT VIA FND_FILE
  • 12874866 - 12711618 FORWARD PORT: ALL REQUESTS INSERT INTO FND_CONC_PP_ACTIONS
  • 12932103 - 12815295 BACKPORT: CLEANUP EFFORT OF CLIENT SIDE SERVICE MANAGER FUNCTION AND EX
  • 12957954 - 11690591 FORWARD PORT: DEADLOCKS WHENEVER CONCURRENT MANAGERS START AFTER AN ENV
  • 13009610 - NEED TO INCREASE MAX VALUE OF FND_CONC_RELEASE_CLASSES_S
  • 13013531 - ACMP : EXCEPTION OCCURRED: ORA-00918: COLUMN AMBIGUOUSLY DEFINED
  • 13056071 - 1OFF:R12.ATG_PF.B.DELTA.3:PARAMETER1 IN FND_GRANTS NOT UPDATED AFTER USERNAME CH
  • 13066729 - COPY CONCURRENT REQUEST PDF TEMPLATE TO XLS GETTING FRM-40815
  • 13075711 - IN PCP ENV, AFTER THE FAIL OVER TO NODE 2, WE HAVE MULTIPLE FNDSM PROCESS.
  • 13262775 - SYNCH FILES 
  • 13353167 - WHEN SCHEDULING A CONCURRENT REQUEST THROUGH OAF SCREENS, 
  • 13371648 - CONCURRENT REQUEST ARE IN PENDING STANDBY STATE - CRM NOT RESOLVING
  • 13426254 - HIGH ENQ: TX - ROW LOCK CONTENTION ON FND_CONCURRENT_PROCESSES
  • 13520666 - FNDSVCRG STATUS COULD NOT BE DETERMINED AFTER RHEL5 SECURITY PATCH UPDATES
  • 13620594 - 1OFF:12.1.3:WF ENGINE SLOW PERFORMANCE PROCESSING TIMED-OUT ACTIVITIES
  • 13779426 - CP "GATHER TABLE STATISTICS" FOR GL.JE_BE_LINE_TYPE_MAP COMPLETES IN ERROR
  • 13804818 - 13688614: AFSLOAD.LCT DOESN'T DOWNLOAD DATA IF FUNCTION_NAME OR SUB_MEN
  • 13825341 - REMOVE UNNECESSARY XDO CODE DEPENDENCIES FROM OBSOLETE VO REPUBLISHFILEVO
  • 14046931 - WRONG DEFAULT TITLE WHEN ADDING REQUESTS SUMMARY SCREEN TO FAVORITES
  • 14128319 - SECURING ATTRIBUTE ICX_HR_PERSON_ID ASSIGNED VIA FND_USER_PKG
  • 14265552 - REPRINT/REPUBLISH USING USER_PRINTER_STLYE_NAME - ERROR INVALID STYLE
  • 14348816 - OFA DELIVER TO EMAIL ADDRESS DOES NOT ALLOW A HYPEN IN THE EMAIL ADDRESS
  • 14364164 - INTERNAL MONITOR KEEPS TRYING TO START INTERNAL CONCURRENT MANAGERS
  • 14526013 - CONCSUB BEHAVIOR SINCE ATG.RUP.7 AND CPU JAN-2012
  • 14545884 - WHEN SCHEDULING A CONCURRENT REQUEST THROUGH OAF SCREENS, 
  • 14629821 - 1OFF:10182664:12.1.3:UNDER HEAVY LOAD, MANAGERS SPIN AND CONSUME CPU 
  • 14673409 - EBS R12.1.3: TCA FIRST NAME/LAST NAME DO NOT SYNC TO LDAP IN SUPPLIER PORTAL
  • 14695512 - INVALID DECIMAL AND THOUSAND SEPARATOR
  • 14786043 - INTERNAL MONITOR KEEPS TRYING TO START INTERNAL CONCURRENT MANAGERS
  • 14791018 - REQUEST TO REMOVE OPTION WFDS_MODE=OWF ON FNDLOAD FOR PATCH 13622637
  • 14828518 - INTERNAL MONITOR KEEPS TRYING TO START INTERNAL CONCURRENT MANAGERS
  • 14828523 - PREREQ PATCH FOR AFCMGR.ODF
  • 14841198 - IPP PRINTER OPTIONS SET INCORRECTLY FOR DELIVERY
  • 15898572 - FND_USER_PKG INVALID AFTER PATCH 10024223
  • 15959817 - APPLICATION HAS STARTED PRODUCING MUCH MORE ARCHIVE REDO
  • 15972360 - NO. OF RECIPIENTS RESTRICTED TO FIVE IN SSWA NOTIFICATIONS WINDOW
  • 15981176 - ISSUES AFTER APPLYING FAILOVER PATCH 14828518:R12.FND.B
  • 16311718 - PROFILE - CONCURRENT: SHOW REQUESTS SUMMARY AFTER EACH REQUEST SUBMISSION NOT WO
  • 16602978 - STANDARD MANAGER ACTUAL AND TARGET PROCESSES ARE DIFFERENT.
  • 16735285 - SERVICE MANAGER GOES DOWN FREQUENTLY AFTER 13903857 AND 15981176
  • 16818306 - REPRINT/REPUBLISH REQUEST FAILS  JAVA.SQL.SQLEXCEPTION: NO CORRESPOND LOB DATA
  • 16880989 - +P4 FD: INDIA: GOING TO NEXT SCREEN SELECTING MULTIPLE LANGUAGE WHEN SCHEDULING 
  • 16946854 - REQUEST SUBMITTED BY CUSTOM RESP AND CUSTOM DATA GROUP CAUSES FNDLIBR TO COREDUM
  • 17002231 - ERROR IN OPP LOG WHEN "DELIVERY OPT" OPTION CHOSEN JAVA.LANG.NULLPOINTEREXCEPTIO
  • 17189881 - FND_STATS.RESTORE_SCHEMA_STATS FOR ALL SCHEMA IS FAILED
  • 17279094 - REQUESTS IN FRAMEWORK FOR A FUTURE DATE START IMMEDIATELY
  • 17287546 - UNABLE TO SELECT AM/PM WHEN TRYING TO SCHEDULE CONCURRENT REQUESTS
  • 17758638 - AFTER RUNNING FNDCPASS TO CHANGE THE ORACLE APPLICATION ACCOUNT PASSWORDS USERS 
  • 18071903 - POST MIXED CASE PSWRD  ON DB AND CLONE ON HASHED APPS CAN'T CHANGE APPLSYS PSWRD
  • 18083491 - PASSWORD RESETTING OF EXISTING USER IS NOT WORKING
  • 18137744 - FNDCPASS NOT CHANGING PASSWORD ON CLONED EBS R12.2.3
  • 18182723 - NUMBER OF ARGUMENTS INCREASE FOR SCHEDULED JOB, THE PROGRAM RUNS INTO WARNING & 
  • 18332973 - ADVANCED SEARCH CAN'T QUERY FOR SAME START AND END DATE
  • 18383570 - FNDCPASS NOT CHANGING PASSWORD AFTER UPGRADE TO 12.2.3
  • 18977939 - GETTING ERROR WHEN START CM USING ADSTRTAL.SH CONCOPER/CONCOPER -SECUREAPPS
  • 19048604 - GETTING ERROR WHEN START CM USING ADSTRTAL.SH CONCOPER/CONCOPER -SECUREAPPS
  • 19064976 - NLS:R:TST122:REGRESS:XDO PREVIEW IS WITH THE TOP TEMPLATE LANGUAGE
  • 19065069 - WHEN SCHEDULING A CONCURRENT REQUEST THROUGH OAF SCREENS, 
  • 19065267 - REPRINT/REPUBLISH USING USER_PRINTER_STLYE_NAME - ERROR INVALID STYLE
  • 19065293 - OAF : RBAC FOR ENG CONCURRENT PROGRAM
  • 19080080 - REPRINT/REPUBLISH REQUEST FAILS  JAVA.SQL.SQLEXCEPTION: NO CORRESPOND LOB DATA
  • 19080122 - IPP PRINTER OPTIONS SET INCORRECTLY FOR DELIVERY
  • 19211176 - CP OAF CONSOLIDATED 12.1.3.1 PATCH
  • 19539697 - WRONG VALUE RETURNED FOR THE RECORD SELECTED IN THE LOV
  • 20118026 - FORM PERSONALIZATIONS(FNDCUSTM ) VALUE CANNOT BE SAME BY CHOOSE OR MANUAL.
  • 20719878 - BUILTIN RAISE_FORM_TRIGGER_FAILURE ERROR ONE OR MORE REQUIRED FIELDS ARE MISSING
  • 21044265 - APPSTAND.FMB CALLS .FND_JAF_MESSAGE  WITH  APPLSYS
  • 21612876 - CROSS VALIDATION  PERFORMANCE ISSUES
  • 22220582 - UNABLE TO DISPLAY SIT DATA AFTER UPGRADE FROM 11I TO 12.1.3 RUP8
  • 22394026 - SECURITY RULE, INITIAL ENTRY OK, BUT ALLOWED TO OVERRIDE LATER
  • 23115501 - 1OFF:12.2.4:APP-FND-01023 THE FOLLOWING REQUIRED FIELD DOES NOT HAVE A VALUE
  • 23586683 - CCID NOT SAVED WHEN ACCOUNT SEGMENTS ARE CHANGED USING THE ACCOUNTING FLEX
  • 25107367 - UNABLE TO ADD NEW MESSAGE TYPE FORMS PERSONALIZATION ACTIONS ON TOP PF EXISTING
  • 25190067 - AFTER 23601325 WHAT DOES CROSS-VALIDATION RULE VIOLATION REPORT (ENHANCED) DO?
  • 25381217 - AFTER PATCH 25107367 PERSNZN FORM DO YOU WANT TO SAVE THE CHANGES YOU HAVE MADE
  • 3400667 - WISH LIST: DYNAMIC PARAMETERS
  • 7109984 - ORG-LEVEL PROFILE VALUE NOT RETURNED WHEN ORG_ID IS NOT SET
  • 7227733 - FND NEEDS CLARIFICATION AS TO HOW R12 FUNCTIONALITY OF ORG_ID WORKS
  • 9301929 - 9042119 FORWARD PORT: FND_CONCURRENT_REQUESTS TABLE IS BEING ACCESSED BY ALL THE
  • 9560529 - 9109247 FORWARD PORT: SCHEDULED REQUESTS STILL RUN FOR END_DATED USERS 100% CPU
  • 9755236 - OPP WARNING FLAGS NOT BEING SET CORRECTLY

Related Articles

Categories: APPS Blogs

EBS 12.2 April 2017 Technology Stack Recommended Patch Collection Now Available

Steven Chan - Tue, 2017-05-16 11:08

The latest cumulative set of updates to the E-Business Suite 12.2 technology stack foundation utilities is now available in a new April 2017 Recommended Patch Collection (RPC):

Oracle strongly recommends that all E-Business Suite 12.2 users apply this set of updates.

What issues are fixed in this patch?

This cumulative Recommended Patch Collection contains important fixes for issues with the Oracle EBS Application Object Library (FND) libraries that handle password hashing and resets, Forms-related interactions, key flexfields, descriptive flexfields, and more.  

Bugs fixed by this patch include:

  • 18071903 - POST MIXED CASE PSWRD  ON DB AND CLONE ON HASHED APPS CAN'T CHANGE APPLSYS PSWRD
  • 18083491 - PASSWORD RESETTING OF EXISTING USER IS NOT WORKING
  • 18137744 - FNDCPASS NOT CHANGING PASSWORD ON CLONED EBS R12.2.3
  • 18383570 - FNDCPASS NOT CHANGING PASSWORD AFTER UPGRADE TO 12.2.3
  • 19248704 - 17908376:NEED THE ONE OFF PATCH FOR 12.2.3
  • 19259764 - ERROR WHEN OPENING FORMS IN IE8 ON MULTI-NODE EBS 12.2.3
  • 19891697 - PERFORMANCE PROBLEMS RESULTS SET CACHE
  • 19899452 - R12.2.3 GETTING AP DFF ERROR - THE MAXIMUM VALUE SIZE FOR SEGMENT IS X. TRUNCATI
  • 20537212 - VALUES IN ITEM CODES ARE NOT VISIBLE ON APPLYING KEY FLEXFIELD SECURITY RULES
  • 20814982 - DEFAULTING DFF SEGMENT BEHAVIOR IS DIFFERENT FROM 11I
  • 21612876 - CROSS VALIDATION  PERFORMANCE ISSUES
  • 22220582 - UNABLE TO DISPLAY SIT DATA AFTER UPGRADE FROM 11I TO 12.1.3 RUP8
  • 22550312 - OVER 2300 CONTEXTS DEFINED CAUSES FNDFFVGN SIGNAL 11
  • 23115501 - 1OFF:12.2.4:APP-FND-01023 THE FOLLOWING REQUIRED FIELD DOES NOT HAVE A VALUE
  • 23586683 - CCID NOT SAVED WHEN ACCOUNT SEGMENTS ARE CHANGED USING THE ACCOUNTING FLEX
  • 23601325 - 12.2.4 AFTER 23115501 FNDRXR PERFORMANCE STILL EXISTS
  • 24442779 - RBAC MODEL SETUP USAGES FOR MOBILE APPS
  • 25107367 - UNABLE TO ADD NEW MESSAGE TYPE FORMS PERSONALIZATION ACTIONS ON TOP PF EXISTING
  • 25190067 - AFTER 23601325 WHAT DOES CROSS-VALIDATION RULE VIOLATION REPORT (ENHANCED) DO?
  • 25242246 - FLEXFIELD VIEW GENERATOR GIVES SIGNAL 11 ERROR WHEN CREATE MORE 2656 CONTEXTS
  • 25381217 - AFTER PATCH 25107367 PERSNZN FORM DO YOU WANT TO SAVE THE CHANGES YOU HAVE MADE

Related Articles

Categories: APPS Blogs

Fastest creation of a Lean VirtualBox VM Image with Oracle Database 11gR2 XE, the Node.JS 7.x and the Oracle DB Driver for Node

Amis Blog - Tue, 2017-05-16 11:05

For a workshop on Node.js I needed a VM to demonstrate – and let students try out – the Oracle DB Driver for Node. I wanted a lean VM with the bare minimum: Oracle Database XE, Node, the Oracle DB Driver for Node and the Git client (for fetching sources from GitHub). I stumbled across the OXAR repository on GitHub (https://github.com/OraOpenSource/OXAR) – an Oracle XE & APEX build script along with images for popular cloud platforms, http://www.oraopensource.com/oxar/. Using the sources I found there, I could create my VM in a few simple, largely automated steps. I ended up with a 4.5 GB VM image (which exports as a 1.5 GB appliance) that runs in 1 GB of RAM. It is more than adequate for my needs.

The steps are recorded below – for myself, if I need to go through them again, and of course for you, the reader, so you can also create this handsome, useful VM.

The steps for creating your own VM image are as follows:

1. make sure that you have Vagrant and VirtualBox installed locally (https://www.vagrantup.com/ and https://www.virtualbox.org/)

2. get the OXAR repository content

git clone https://github.com/OraOpenSource/OXAR

3. Download the Oracle 11gR2 XE installer for Linux from OTN: http://www.oracle.com/technetwork/database/database-technologies/express-edition/downloads/index.html and copy the downloaded file oracle-xe-11.2.0-1.0.x86_64.rpm.zip to the OXAR/files directory
image

4. edit the file config.properties in the OXAR root directory

image

– set parameter OOS_ORACLE_FILE_URL to file:///vagrant/files/oracle-xe-11.2.0-1.0.x86_64.rpm.zip and save the change:

OOS_ORACLE_FILE_URL=file:///vagrant/files/oracle-xe-11.2.0-1.0.x86_64.rpm.zip

image

Use the OOS_MODULE_XXX flags to specify which components should be installed. Here I have chosen not to install APEX and NODE4ORDS.

5. run vagrant using the statement:

vagrant up

This will run for a while: Vagrant downloads the CentOS base image and creates the VM (with NAT network configuration), then installs the Git client, Oracle 11gR2 XE Database, Node and the Node oracledb driver.
image

6. After rebooting the system, the VM will be started (or you can start it using vagrant up again, or from the VirtualBox manager).

image

You can start an SSH session into it by connecting to localhost:50022, then log in to Linux using vagrant/vagrant

image

7. connect to the database using sqlplus hr/oracle

image

8. Try out Node.js

image

9. To try out Node against Oracle Database using the driver, you can clone the GitHub Repository:

git clone https://github.com/lucasjellema/nodejs-oracledbdriver-quickstart

image

Next, cd into the newly created directory

image

and run the file select.js:

image

10. To try out PL/SQL as well:

Create the procedure get_salary using the file get_salary_proc.sql

 

image

Run Node program plsql.js:

image

The post Fastest creation of a Lean VirtualBox VM Image with Oracle Database 11gR2 XE, the Node.JS 7.x and the Oracle DB Driver for Node appeared first on AMIS Oracle and Java Blog.

performance tuning views

Tom Kyte - Tue, 2017-05-16 07:26
My Main Problem is I want to see all the details about running or executed a query or procedure so i can take action like performance tuning and query optimizations. So what are the views and tables available in oracle 11g R2 Database so i can fou...
Categories: DBA Blogs

Timestamp + interval arithmetics fails for precision over 6

Tom Kyte - Tue, 2017-05-16 07:26
Hi team, I have a problem with adding/subtracting Interval datatype to Timestamp datatype, when precision of the Interval is between 7 and 9. It seems that it doesn't work correctly, see results 'EQ' (equals=wrong) in below example. The second q...
Categories: DBA Blogs

Active Tables,

Tom Kyte - Tue, 2017-05-16 07:26
Hello, I have a need to find out "active" tables in a 11.2 (and 12.1) database. Active means - a table that has been involved in any of the DML or SELECT statement. I could use v$sql to get the full text of the statement and figure out. Besid...
Categories: DBA Blogs

procedure call count

Tom Kyte - Tue, 2017-05-16 07:26
I need a query to count a specific procedure calls in each session over all schemas per hour. this procedure exist in a package which might be called concurrently by different session in different schemas. what i'm doing now is to collect the users...
Categories: DBA Blogs

STIGS, SCAP, OVAL, Oracle Databases and ERP Security

Last week’s unprecedented ransomware cyber attacks (http://preview.tinyurl.com/lhjfjgk) caught me working through some research on security automation. The cyber attacks evidently were attributed to an unpatched Windows XP vulnerability. When challenged with securing 1,000s of assets such as all the Windows desktops and Linux servers in an organization, automation quickly becomes a requirement.

Automation is increasingly coming up in our client conversations about how to secure the technology ‘stack’ supporting large ERP implementations such as the Oracle E-Business Suite, PeopleSoft, and SAP. For example, how do you, from a security professional's perspective, communicate an objective risk assessment covering both the secure baseline configuration (control adherence/violation) and security patch levels (patched/unpatched CVEs) for the Linux operating systems, virtualization software, web server, database and the ERP application itself? Without automation, it is not feasible to promptly produce risk-based assessments of the complete technology stack, or to produce results that are readily expressed in a common risk measurement (e.g. CVE) not requiring deep subject matter expertise.

Automation, however, can only be considered after requirements have been defined. I have long used Security Technical Implementation Guides (STIGs) in both my research and work with clients to define security requirements. STIGs are secure configuration standards developed by the US Department of Defense for products such as the Oracle RDBMS and are freely available (http://iase.disa.mil/stigs/Pages/index.aspx). While most clients do not need their databases hardened to military specifications, STIGs are an invaluable source of security best practice thinking.

STIGs (security checklists) are only available in XML format – not PDF files. DISA does provide a utility to view and work with STIGs (http://iase.disa.mil/stigs/Pages/stig-viewing-guidance.aspx) which allows you to manually execute the checklist, record your findings and then export the results. See this YouTube posting (https://www.youtube.com/watch?v=-h_lj5sWo4A) for a great summary of the STIG Viewer and how to use it.

Security Content Automation Protocol (SCAP)

To answer the question of how to automate STIGs and/or other security checklists, again the Department of Defense has thought through the challenges and has created the Security Content Automation Protocol (SCAP).

SCAP is a multi-purpose framework to automate the security scanning of configurations, vulnerabilities, patch checking and compliance. SCAP content is developed by the National Institute of Standards and Technologies (NIST) and the components are described in the table below. The key point is that SCAP security content (checklists) is free and that the SCAP content scanning tools are available both in open source and commercial options.

The SCAP components are:

  • eXtensible Checklist Configuration Description Format (XCCDF) – XML-based language for specifying checklists and reporting the results of checklist evaluations.
  • Open Vulnerability and Assessment Language (OVAL) – XML-based language for specifying test procedures to detect machine state.
  • Common Vulnerabilities and Exposures (CVE) – Nomenclature and dictionary of security-related software flaws.
  • Common Configuration Enumeration (CCE) – Nomenclature and dictionary of software security configuration issues.
  • Common Vulnerability Scoring System (CVSS) – Methodology for measuring the relative security of software flaws.
  • Open Checklist Interactive Language (OCIL) – XML-based language for specifying security checks that require human interaction or that otherwise cannot be automated by OVAL.
  • Asset Reporting Format (ARF) – Standardized data model for sharing information about assets to facilitate the reporting, correlating, and fusing of asset security information.

OpenSCAP

There are many tools, Integrigy’s AppSentry included (https://www.integrigy.com/products/appsentry), that will perform a STIG scan of an Oracle database. The question I was researching this week is: could I use a single SCAP tool to automate the scanning of both the Linux server and the database, as well as possibly ERP configurations for PeopleSoft and/or the Oracle E-Business Suite – and could I do this with open source software?

The first tool I considered was OpenSCAP (https://www.open-scap.org/). This open source tool is easy to install either on your laptop or on a Linux database server, and it has remote scanning capabilities. The example below shows the capabilities of the GUI tool 'SCAP Workbench' and the freely available content that is installed by default for scanning a Linux server.
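OpenSCAP can also run the same evaluations from the command line. The sketch below builds a typical `oscap xccdf eval` invocation; the content path and profile ID are assumptions for an Oracle Linux 7 system and will vary by distribution, so check `oscap info` on your own server. The command is printed as a dry run rather than executed, since OpenSCAP may not be installed where you try this.

```python
import shlex

# Assumed defaults for Oracle Linux 7 with the SCAP Security Guide
# content package installed; adjust both for your distribution.
content = "/usr/share/xml/scap/ssg/content/ssg-ol7-ds.xml"
profile = "xccdf_org.ssgproject.content_profile_standard"

cmd = [
    "oscap", "xccdf", "eval",
    "--profile", profile,
    "--results", "results.xml",
    "--report", "report.html",
    content,
]

# Printed as a dry run; with OpenSCAP installed you would run
# subprocess.run(cmd) and inspect the exit code.
print(shlex.join(cmd))
```

With OpenSCAP present, the same list can be passed straight to `subprocess.run`, which makes it easy to fold a scan into a larger automation job.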

This exercise quickly confirmed that there is a great deal of security automation available for Linux system security configurations. Here, though, is where I hit a wall: could OpenSCAP work with Oracle databases? While the SCAP standards clearly show support for scanning SQL database configurations using OVAL's SQL probes (e.g. sql_test, sql57_test, etc.), I may be corrected, but the standard builds of OpenSCAP do not appear to include the SQL probes.
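For reference, an OVAL sql57_test pairs a test with an object that carries the database engine, version, connection string, and SQL query. The fragment below is a hand-written sketch only: the IDs and query are illustrative, the OVAL namespaces are omitted, and real content also needs a matching sql57_test and a state to compare the query result against.

```xml
<!-- Illustrative OVAL sql57 object (not complete, namespaces omitted). -->
<sql57_object id="oval:com.example:obj:1" version="1">
  <engine>oracle</engine>
  <version>11.2.0</version>
  <connection_string>user=scott;password=tiger;SID=ORCL</connection_string>
  <sql>SELECT value FROM v$parameter WHERE name = 'audit_trail'</sql>
</sql57_object>
```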

 

JOVAL

To obtain the SQL probes for SCAP scanning of database configurations, after some research I obtained an evaluation copy of Joval Professional (http://jovalcm.com/). Joval describes itself as letting you "scan anything from anywhere" and as enabling continuous configuration assessments for developers, enterprises, content authors, and security professionals.

The installation of Joval Professional was quick and I was able to scan my laptop and remotely scan the remote Oracle Linux server without issues. The screen shot below shows the results of the remote scan of the Linux server running the Oracle RDBMS.

With a bit of experimentation (and great customer service from Joval), I was able to quickly prove that I could develop OVAL content for automated SCAP scanning of Oracle databases, either for standard database security checks or for Oracle E-Business Suite and/or PeopleSoft configurations. One key concern with the proof-of-concept is that the connection string hardcodes the user name and password. The hardcoding is certainly a security issue, but Joval (as well as OpenSCAP) offers Python bindings that could be used to supply credentials at scan time. The screen shot below is a single OVAL scan that included two SQL checks as well as checks against content in the sqlnet.ora file using the OVAL probe textfilecontent54_test.
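A textfilecontent54_test is essentially a regular-expression match against a file's content. The Python sketch below mirrors that idea against sqlnet.ora-style text; the sample content and the parameter being checked are my own illustrative assumptions, not taken from the actual OVAL definition.

```python
import re

def check_sqlnet_setting(text, parameter, expected):
    """Return True if sqlnet.ora-style text sets parameter to expected.

    Mirrors the spirit of an OVAL textfilecontent54_test: a regex
    match against file content, ignoring case and extra whitespace.
    """
    pattern = re.compile(
        r"^\s*" + re.escape(parameter) + r"\s*=\s*(\S+)",
        re.IGNORECASE | re.MULTILINE,
    )
    match = pattern.search(text)
    return bool(match) and match.group(1).upper() == expected.upper()

# Illustrative sqlnet.ora content (in practice, read the real file).
sample = (
    "SQLNET.ENCRYPTION_SERVER = REQUIRED\n"
    "SQLNET.CRYPTO_CHECKSUM_SERVER = REQUIRED\n"
)
print(check_sqlnet_setting(sample, "SQLNET.ENCRYPTION_SERVER", "required"))  # → True
```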

My OVAL definition is referenced below; I am providing it as an example for others. The key points to know for the Joval connection string for Oracle are:

- Engine: oracle
- Version values: 11.2.0, 11.1.0, 10.2.0, 10.1.0, 9.2.0, 9.0.1
- Connection string (do not use JDBC syntax): user=<username>;password=<password>;SID=<instance name>
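Since the format is plain key=value pairs separated by semicolons, a small helper can assemble the connection string from credentials pulled out of the environment instead of hardcoding them in the OVAL definition. This is a sketch of my own: the SCAN_DB_USER and SCAN_DB_PASS variable names (and the scott/tiger fallbacks) are illustrative, not part of Joval.

```python
import os

def joval_oracle_connstr(user, password, sid):
    # Joval's Oracle format: user=<username>;password=<password>;SID=<instance name>
    # (note: this is not JDBC syntax).
    return f"user={user};password={password};SID={sid}"

# Pull credentials from the environment rather than embedding them
# in the checklist content itself (names here are assumptions).
user = os.environ.get("SCAN_DB_USER", "scott")
password = os.environ.get("SCAN_DB_PASS", "tiger")
print(joval_oracle_connstr(user, password, "ORCL"))
```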

If you want to replicate the proof-of-concept:

  1. Download a trial version of Joval Professional.
  2. Run a scan of your local laptop.
  3. Run a remote scan of the Linux server running your Oracle RDBMS.
  4. Edit the sample benchmark file (here) for your database.
  5. Upload the edited sample benchmark into Joval.
  6. Run the sample benchmark scan.
What Next?

Having proven I can use OVAL to write Oracle and ERP audit checks, I will spend a bit more time expanding the POC. I am also interested in automation options for exporting Joval and OpenSCAP results to a NoSQL database such as MongoDB using the Asset Reporting Format (ARF) (https://scap.nist.gov/specifications/arf/). Both Joval and OpenSCAP have standard functionality to export results using ARF.
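As a rough sketch of the ARF-to-NoSQL idea: scan results can be flattened into dictionaries ready for a document store. The XML below is a simplified stand-in of my own, not real ARF (real ARF wraps XCCDF rule-results in asset-report-collection namespaces), and the MongoDB insert is left as a comment since it needs a live server.

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for an ARF result set (illustrative only).
ARF_SAMPLE = """
<report>
  <rule-result idref="xccdf_example_rule_audit_trail"><result>pass</result></rule-result>
  <rule-result idref="xccdf_example_rule_remote_login"><result>fail</result></rule-result>
</report>
"""

def arf_to_docs(xml_text):
    """Flatten rule results into dicts suitable for a document store."""
    root = ET.fromstring(xml_text)
    return [
        {"rule": rr.get("idref"), "result": rr.findtext("result")}
        for rr in root.iter("rule-result")
    ]

docs = arf_to_docs(ARF_SAMPLE)
print(docs)
# With a live MongoDB: pymongo.MongoClient().scap.results.insert_many(docs)
```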

If you have any questions, please contact us at info@integrigy.com

-Michael Miller, CISSP-ISSMP, CCSP, CCSK

References

Sample Oracle OVAL benchmark definition: SCAP OVAL Example Check for Oracle

SCAP

NIST SCAP site: https://scap.nist.gov/

SCAP content: https://nvd.nist.gov/ncp/repository?scap

Oracle Linux Security Guide – Using OpenSCAP: https://docs.oracle.com/cd/E37670_01/E36387/html/ol-scap-sec.html

Great summary of SCAP: https://energy.gov/sites/prod/files/cioprod/documents/Technical_Introduction_to_SCAP_-_Charles_Schmidt.pdf

OVAL

Writing OVAL content https://oval.mitre.org/documents/docs-07/Writing_an_OVAL_Definition.pdf

OVAL tutorial https://nvd.nist.gov/scap/docs/conference%20presentations/workshops/OVAL%20Tutorial%202%20-%20%20Definitions.pdf
