Feed aggregator

Generate Dates Between Two Given Dates Through Cursor

Tom Kyte - 14 hours 13 min ago
I have a table as below: Create Session_Detail(ID, Date, Day, Status). I need a button behind which I write a cursor that reads start_date and end_date and generates the dates in Session_Detail for the given start_date and end_date.
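A minimal sketch of one way to do this in PL/SQL (the table layout and dates are assumptions - DATE is a reserved word, so the column is named session_date here; in a form, the two dates would come from the screen items):

declare
  l_start date := date '2017-11-01';   -- would come from start_date
  l_end   date := date '2017-11-07';   -- would come from end_date
begin
  for i in 0 .. trunc(l_end) - trunc(l_start) loop
    insert into session_detail (id, session_date, day, status)
    values (i + 1, l_start + i, to_char(l_start + i, 'Day'), 'NEW');
  end loop;
  commit;
end;
/

A single INSERT ... SELECT using CONNECT BY LEVEL would achieve the same without row-by-row processing.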
Categories: DBA Blogs

ADF Performance on Docker - Lightning Fast

Andrejus Baranovski - 16 hours 7 min ago
ADF performance depends on server processing power. Sometimes ADF is blamed for poor performance, but in most cases the real issue is poor server hardware, bad programming style, or slow response from the DB. The goal of this post is to show how fast an ADF request can execute and to give a couple of suggestions on how to minimize ADF request time. This applies to an ADF application running on any environment, not only Docker. I'm using an ADF Alta UI based list application with an edit fragment.

Rule number one - enable response compression. This transfers less data, so the response completes faster - shorter content download time. See the screenshot below - a JS file is compressed to 87 KB from the original 411 KB. Initial page load in ADF generates around 3 MB of content (if this is the very first access and static content is not yet cached on the client side). With compression, the initial load of 3 MB shrinks to around 300 - 400 KB. That's a big difference. In this example the ADF page opens in 1.2 seconds (on par with client-side JS applications, once static content is downloaded on first access):


You can enable content response compression in the WebLogic console (it will be applied to all deployed Web apps). Go to the domain configuration, Web Applications section:


Select the checkbox to enable GZIP compression and provide a list of content types to be compressed:


That's it - content compression is set.

When I navigate to the edit fragment, the request executes in 305 ms, thanks to the fast Docker engine (running on Digital Ocean - Oracle ADF on Docker Container) and content response compression: 3.44 KB transferred for 14.49 KB of original content:


Let's try the Save operation. I changed the Hire Date attribute and then pressed the Save button. This triggers a Commit operation in ADF, pushes data to ADF BC, and then executes a DML statement with a commit in the DB. All these steps complete in 113 ms.


Don't believe anyone who says ADF is slow. As you can see, an ADF request is fundamentally very fast - though of course it can become slow if you add a lot of data fetch and processing logic on top (blame yourself). A client-side JS application would not run faster if it called a backend REST service to save data. The only advantage of a client-side JS application in this case is that it executes the backend REST call asynchronously, while ADF executes requests synchronously. However, it all depends - sometimes asynchronous calls are not suitable for the business logic either.

How come the ADF BC call to the DB completes so fast? For that we need to check the Data Source Connection Delay Time on WLS. In the Docker (Digital Ocean) environment it is ridiculously short (that's very good): 66 ms. Check the same on your server (go to Data Source monitoring in the WLS console); a longer delay time means slower response from the DB and slower ADF performance:


Navigation back to the list runs in 356 ms, with 197.96 KB of content compressed to 10.47 KB. This is very fast; a 350 ms response time is something the user would not notice (almost equal to processing on the client side):


To optimize ADF performance, make sure you are using ChangeEventPolicy = NONE for iterators in Page Definitions.

Migrate EBS (R12) to Cloud, SSL & GoldenGate Install

Online Apps DBA - 16 hours 22 min ago

[K21Academy Weekly Newsletter] 171116 Subject: Migrate EBS (R12) To Cloud, SSL & GoldenGate Install. In this week's issue, you will find: 1. Migrating Oracle EBS (R12) to Cloud - 10 Things You Must Consider Before Migration (Lift & Shift) 2. SSL in Oracle Fusion Middleware (WebLogic, OHS, SOA, OAM, OID, OVD etc.) 3. Oracle GoldenGate: Installation […]

The post Migrate EBS (R12) to Cloud, SSL & GoldenGate Install appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Partner Webcast – Lifting & Shifting Oracle Applications to Oracle Cloud

Oracle Applications Unlimited is Oracle’s commitment to continuously innovate in current applications while also delivering the next generation of cloud applications. Many enterprises have...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Great Britain and Northern Ireland February 2018 Dates: “Oracle Indexing Internals and Best Practices” Seminar (Battle For Britain)

Richard Foote - 18 hours 14 min ago
Attention Oracle Professionals in the United Kingdom!! I have now finalised all the dates and venues for a series of my popular and critically acclaimed “Oracle Indexing Internals and Best Practices” seminars I’ll be running in the UK in February 2018. I’m extremely excited as this will be the first time I’ve delivered this […]
Categories: DBA Blogs

CBO, FIRST_ROWS and VIEW misestimate

Yann Neuhaus - Thu, 2017-11-16 23:36

There are several bugs in the optimizer in FIRST_ROWS mode. Here is one I encountered during a 10.2.0.4 to 12.2.0.1 migration, where a view had an ‘order by’ in its definition.

Here is the test case that reproduces the problem.

A big table:

SQL> create table DEMO1 (n constraint DEMO1_N primary key,x,y) as select 1/rownum,'x','y' from xmltable('1 to 1000000');
Table DEMO1 created.

with a view on it, and that view has an order by:

SQL> create view DEMOV as select * from DEMO1 order by n desc;
View DEMOV created.

and another table to join to:

SQL> create table DEMO2 (x constraint DEMO2_X primary key) as select dummy from dual;
Table DEMO2 created.

My query reads the view in a subquery, adds a call to a PL/SQL function, and joins the result with the other table:


SQL> explain plan for
select /*+ first_rows(10) */ *
from
( select v.*,dbms_random.value from DEMOV v)
where x in (select x from DEMO2)
order by n desc;
 
Explained.

You can see that I run it with FIRST_ROWS(10) because I actually want to fetch only the top 10 rows when ordered by N. As N is a number, I have an index on it, and there are no nulls (it is the primary key), I expect the optimizer to read the first 10 entries from the index, call the function for each of them, and then nested-loop to the other table.

In the situation where I encountered this, that is what 10g did, but after the migration to 12c the query took very long because it called the PL/SQL function for millions of rows. Here is the plan in my example:


SQL> select * from dbms_xplan.display(format=>'+projection');
 
PLAN_TABLE_OUTPUT
-----------------
Plan hash value: 2046425878
 
--------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 21 | | 7 (0)| 00:00:01 |
| 1 | NESTED LOOPS SEMI | | 1 | 21 | | 7 (0)| 00:00:01 |
| 2 | VIEW | DEMOV | 902 | 17138 | | 7 (0)| 00:00:01 |
| 3 | SORT ORDER BY | | 968K| 17M| 29M| 6863 (1)| 00:00:01 |
| 4 | TABLE ACCESS FULL | DEMO1 | 968K| 17M| | 1170 (1)| 00:00:01 |
| 5 | VIEW PUSHED PREDICATE | VW_NSO_1 | 1 | 2 | | 0 (0)| 00:00:01 |
|* 6 | INDEX UNIQUE SCAN | DEMO2_X | 1 | 2 | | 0 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
6 - access("X"="V"."X")
 
Column Projection Information (identified by operation id):
-----------------------------------------------------------
 
1 - (#keys=0) "V"."N"[NUMBER,22], "V"."X"[CHARACTER,1], "V"."Y"[CHARACTER,1]
2 - "V"."N"[NUMBER,22], "V"."X"[CHARACTER,1], "V"."Y"[CHARACTER,1]
3 - (#keys=1) INTERNAL_FUNCTION("N")[22], "X"[CHARACTER,1], "Y"[CHARACTER,1]
4 - "N"[NUMBER,22], "X"[CHARACTER,1], "Y"[CHARACTER,1]

A full table scan of the big table, with a call to the PL/SQL function for each row, and a sort operation over all rows. Only then are the top 10 rows filtered, and the nested loop operates on them. You can see the problem here: the cost of the full table scan and of the order by has been evaluated correctly, but the cost above the VIEW operation is minimized.

My interpretation (just a quick guess) is that the rowset is marked as ‘sorted’, and the optimizer then considers that the cost to get the first rows is minimal (as if they were coming from an index). However, this simply ignores the initial cost of producing that rowset.

With a hint, I can force the plan that I want - an index full scan, which avoids the sort and gets the top 10 rows quickly:

SQL> explain plan for
select /*+ first_rows(10) INDEX_DESC(@"SEL$3" "DEMO1"@"SEL$3" ("DEMO1"."N")) */ *
from
( select v.*,dbms_random.value from DEMOV v)
where x in (select x from DEMO2)
order by n desc;
 
Explained.

This plan is estimated with a higher cost than the previous one, which is why it was not chosen:

SQL> select * from dbms_xplan.display(format=>'+projection');
PLAN_TABLE_OUTPUT
Plan hash value: 2921908728
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 21 | 9 (0)| 00:00:01 |
| 1 | NESTED LOOPS SEMI | | 1 | 21 | 9 (0)| 00:00:01 |
| 2 | VIEW | DEMOV | 902 | 17138 | 9 (0)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DEMO1 | 968K| 17M| 8779 (1)| 00:00:01 |
| 4 | INDEX FULL SCAN DESCENDING| DEMO1_N | 968K| | 4481 (1)| 00:00:01 |
| 5 | VIEW PUSHED PREDICATE | VW_NSO_1 | 1 | 2 | 0 (0)| 00:00:01 |
|* 6 | INDEX UNIQUE SCAN | DEMO2_X | 1 | 2 | 0 (0)| 00:00:01 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
6 - access("X"="V"."X")
 
Column Projection Information (identified by operation id):
-----------------------------------------------------------
 
1 - (#keys=0) "V"."N"[NUMBER,22], "V"."X"[CHARACTER,1], "V"."Y"[CHARACTER,1]
2 - "V"."N"[NUMBER,22], "V"."X"[CHARACTER,1], "V"."Y"[CHARACTER,1]
3 - "N"[NUMBER,22], "X"[CHARACTER,1], "Y"[CHARACTER,1]
4 - "DEMO1".ROWID[ROWID,10], "N"[NUMBER,22]

This cost estimation is fine. The cost of getting all rows by index access is higher than with a full table scan, but the optimizer knows that the actual cost is proportional to the number of rows fetched, and it adjusts the cost accordingly (presumably a linear scaling: roughly 8779 x 902/968K = 8, which lines up with the cost of 9 reported at the VIEW step). This is fine here because the VIEW contains only non-blocking operations. The problem in the first plan, without the hint, was that the same arithmetic was applied without realizing that the SORT ORDER BY is a blocking operation, not a permanent sorted structure, and must complete before the first row can be returned.

In this example, as in the real case I encountered, the difference in cost is very small (7 versus 9 here), which means the plan can be fine one day and switch to the bad one (full scan, call the function for all rows, sort them all) after a small change in statistics. Note that I mentioned the plan was ok in 10g, but that may simply be related to PGA settings and a different estimation of the sort cost.

 

The article CBO, FIRST_ROWS and VIEW misestimate appeared first on Blog dbi services.

Docker-CE on Ubuntu 17.10 (Artful Aardvark)

Dietrich Schroff - Thu, 2017-11-16 15:07
Today docker is only included in the repositories up to Ubuntu version 17.04.

If you want to run docker on 17.10, you have to add Docker's own apt repository (pointing it at the zesty/17.04 channel, since there are no artful packages yet) and install docker-ce from there. After that:
# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9a0669468bf7: Pull complete
Digest: sha256:cf2f6d004a59f7c18ec89df311cf0f6a1c714ec924eebcbfdd759a669b90e711
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

Exporting multiple rows from a table to single-row text using UTL_FILE

Tom Kyte - Thu, 2017-11-16 10:06
In my project, I need to export multiple rows from a single column to a single row of text using UTL_FILE. Example below. My table: Employees

Employee_name
-----------------------------
Smith
John
Tom
Adam

And my output text file should export like...
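A minimal sketch of one approach, assuming a directory object named DATA_DIR already exists and the goal is one comma-separated line (LISTAGG needs 11.2 or later):

declare
  l_file utl_file.file_type;
  l_line varchar2(32767);
begin
  select listagg(employee_name, ',') within group (order by employee_name)
    into l_line
    from employees;
  l_file := utl_file.fopen('DATA_DIR', 'employees.txt', 'w');
  utl_file.put_line(l_file, l_line);
  utl_file.fclose(l_file);
end;
/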
Categories: DBA Blogs

INSTR Function to find exact match of a value

Tom Kyte - Thu, 2017-11-16 10:06
Hi, struggling for some time now. I have a list read from a table column - (1,2,3,10,11). I need to check if my number is present in the above list. So for 1, INSTR(1, List) returns 2, which is fine. Now the issue is if the list contains (2,3,10,11)...
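The usual trick is to wrap both the list and the searched value in delimiters, so that 1 can no longer match inside 10 or 11. A sketch, assuming the list is a plain comma-separated string:

select case
         when instr(',' || '2,3,10,11' || ',', ',' || '1' || ',') > 0
         then 'found'
         else 'not found'
       end as result
from dual;
-- returns 'not found'; with the list '1,2,3,10,11' it returns 'found'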
Categories: DBA Blogs

New Oracle Cloud Infrastructure Innovations Deliver Unmatched Performance and Value for the Most Demanding Enterprise, AI and HPC Applications

Oracle Press Releases - Thu, 2017-11-16 07:00
Press Release
New Oracle Cloud Infrastructure Innovations Deliver Unmatched Performance and Value for the Most Demanding Enterprise, AI and HPC Applications Tops AWS with 1,214% better storage performance at 88% lower cost per IO

Redwood Shores, Calif.—Nov 16, 2017

Oracle today announced the general availability of a range of new Oracle Cloud Infrastructure compute options, providing customers with unparalleled compute performance based on Oracle’s recently announced X7 hardware. Newly enhanced virtual machine (VM) and bare metal compute, and new bare metal graphics processing unit (GPU) instances, enable customers to run even the most infrastructure-heavy workloads such as high-performance computing (HPC), big data, and artificial intelligence (AI) faster and more cost-effectively.

Unlike competitive offerings, Oracle Cloud Infrastructure is built to meet the unique requirements of enterprises, offering predictable performance for enterprise applications while bringing cost efficiency to HPC use cases. Oracle delivers 1,214 percent better storage performance at 88 percent lower cost per input/output operation (IO) [1].

New Innovations Drive Unrivaled Performance at Scale

All of Oracle Cloud Infrastructure’s new compute instances leverage Intel’s latest Xeon processors based on the Skylake architecture. Oracle’s accelerated bare metal shapes are also powered by NVIDIA Tesla P100 GPUs, based on the Pascal architecture. Providing 28 cores, dual 25Gb network interfaces for high-bandwidth requirements and over 18 TFLOPS of single-precision performance per instance, these GPU instances accelerate computation-heavy use cases such as reservoir modeling, AI, and Deep Learning.

Oracle also plans to soon release NVIDIA Volta architecture-powered instances with 8 NVIDIA Tesla V100 GPUs interconnected via NVIDIA NVLINK to generate over 125 TFLOPS of single-precision performance. Unlike the competition, Oracle will offer these GPUs as both virtual machines and bare metal instances.  Oracle will also provide pre-configured images for fast deployment of use cases such as AI. Customers can also leverage TensorFlow or Caffe toolkits to accelerate HPC and Deep Learning use cases.

“Only Oracle Cloud Infrastructure provides the compute, storage, networking, and edge services necessary to deliver the end-to-end performance required of today’s modern enterprise,” said Kash Iftikhar, vice president of product management, Oracle. “With these latest enhancements, customers can avoid additional hardware investments on-premises and gain the agility of the cloud. Oracle Cloud Infrastructure offers them tremendous horsepower on-demand to drive competitive advantage.”

In addition, Oracle’s new VM standard shape is now available in 1, 2, 4, 8, 16, and 24 cores, while the bare metal standard shape offers 52 cores, the highest Intel Skylake-based CPU count per instance of any cloud vendor. Combined with its high-scale storage capacity, supporting up to 512 terabytes (TB) of non-volatile memory express (NVMe) solid state drive (SSD) remote block volumes, these instances are ideal for traditional enterprise applications that require predictable storage performance.

The Dense I/O shapes are also available in both VM and bare metal instances and are optimal for HPC, database applications, and big data workloads. The bare metal Dense I/O shape is capable of over 3.9 million input/output operations per second (IOPS) for write operations. It also includes 51 TB of local NVMe SSD storage, offering 237 percent more capacity than competing solutions [1].

Furthermore, Oracle Cloud Infrastructure has simplified management of virtual machines by offering a Terraform provider for single-click deployment of single or multiple compute instances for clustering. In addition, a Terraform-based Kubernetes installer is available for deployment of highly available, containerized applications.

By delivering compute solutions that leverage NVIDIA’s latest technologies, Oracle can dramatically accelerate its customers’ HPC, analytics and AI workloads. “HPC, AI and advanced analytic workloads are defined by an almost insatiable hunger for compute,” said Ian Buck, general manager and vice president of Accelerated Computing at NVIDIA. “To run these compute-intensive workloads, customers require enterprise-class accelerated computing, a need Oracle is addressing by putting NVIDIA Tesla V100 GPU accelerators in the Oracle Cloud Infrastructure.”

“The integration of TidalScale's inverse hypervisor technology with Oracle Cloud Infrastructure enables organizations, for the first time, to run their largest workloads across dozens of Oracle Cloud bare metal systems as a single Software-Defined Server in a public cloud environment,” said Gary Smerdon, chief executive officer, TidalScale, Inc. “Oracle Cloud customers now have the flexibility to configure, deploy and right-size servers to fit their compute needs while paying only for what they use.”

“Cutting-edge hardware can make all the difference for companies we work with like Airbus, ARUP and Rolls Royce,” said Jamil Appa, co-founder and director of Zenotech. “We’ve seen significant improvements in performance with the X7 architecture. Oracle Cloud Infrastructure is a no-brainer for compute-intensive HPC workloads.”

[1] Based on comparison to AWS i3.16XL using the industry-standard CloudHarmony benchmark, a measure of storage performance across a range of workloads. For more information, see: https://blogs.oracle.com/cloud-infrastructure/high-performance-x7-compute-service-review-analysis

Contact Info
Greg Lunsford
Oracle
+1.650.506.6523
greg.lunsford@oracle.com
Kristin Reeves
Blanc & Otus
+1.415.856.5146
kreeves@blancandotus.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


A response to: What makes a community?

Yann Neuhaus - Thu, 2017-11-16 03:39

A recent tweet of mine resulted in Martin Widlake writing a really great blog post about what makes a community. Please read it before you continue reading this. There was another response, from Stefan Koehler, which is worth mentioning as well.

Both Martin and Stefan speak about Oracle communities, because that is where they are involved. At the beginning of Martin’s post he writes: “Daniel was not specific about if this was a work/user group community or a wider consideration of society, …” and this was intentional. I don’t think it matters much whether we speak about a community around a product, a community that just comes together to drink beer and discuss the latest football results, or even a community as a family. At least in the German translation, “Gemeinschaft” applies to a family as well. This can be very few people (mother, father, kids) or more if we include brothers, sisters, grandmas and so on. But still the same rules that Martin outlines in his blog post apply: you’ll always have people driving the community, such as organizing dinners (when we speak about families), organizing conferences (when we speak about technical communities) or organizing parties (when we talk about fun communities) - or organizing whatever for whatever people make up the specific community. Then you’ll always have the people willing to help (the people Martin describes as the people who share and/or talk), and you’ll always have the people who consume/attend, which is good as well, because without them you’d have nobody to share with and nothing to organize.

We at dbi services are a community as well. As we work with various products, the community is not focused on a specific product (well, it is in the area of a specific product, of course) but rather on building an environment we like to work in. The community here is tied to technology but detached from any single product. We share the same methodologies, the same passion, and have fun attending great parties that are organized mostly by the non-technical people in our company. In this case you could say: the non-technical people are the drivers of the community in the company, even if the company is very technical by nature. And here we have the same situation again: some organize, some attend/consume and some share, but all are required (as Martin outlined in his post as well).

Of course I have to say something about the PostgreSQL community: because PostgreSQL is a real community project, the community around it is much more important than in other technical communities. I am not saying that you do not need a community for vendor-controlled products - when a vendor fails to build a community around its product, the product will fail as well. What I am saying is that the PostgreSQL community goes deeper, as the complete product is driven by the community. Of course there are companies that hire people to work for the community, but they are not able to influence the direction if there is no agreement about that direction in the community. Sometimes this can make it very hard to progress, and a lot of discussion is required, but in the end I believe it is better to have something the majority agrees on. In the PostgreSQL community I think there are several drivers: for sure all the developers are drivers, and the people who take care of all the infrastructure (mailing lists, commitfests, …) are drivers as well. Basically, everybody you see on the mailing lists answering questions is a driver, because they keep the community active. Then we have all the people you see in other communities as well: those who share and those who consume/attend. I think you get the point: an open source community is by its nature far more active than what you usually see in non-open-source communities, for one reason: it already starts with the developers and not with a community around a final product. You can be part of such a community from the very beginning, which is writing new features and patches.

Coming back to the original question: What makes a community? Beside what Martin outlined there are several other key points:

  • The direction of the community (no matter if technical or not) must be one that people want to be part of
  • When we speak about a community around a product: you must identify yourself with the product. When the product goes in a direction you cannot support, for whatever reason, you’ll leave sooner or later. The more people leave, the weaker the community becomes
  • It must be easy to participate and to get help
  • A lot of people are willing to spend (free-) time to do stuff for the community
  • There must be a culture which respects you and everybody else
  • Maybe most important: A common goal and people that are able and willing to work together, even if this sometimes requires a lot of discussions

When you have all of these, the drivers, the people who share, and those that attend will come anyway, I believe.

 

The article A response to: What makes a community? appeared first on Blog dbi services.

The Oracle self-service Premier Support Investment

Chris Warticki - Wed, 2017-11-15 23:00

Nobody wants to open Service Requests.  I get it.  I don't either.  However, everybody wants to learn how to get a Service Request closed faster, how to log a Severity 1 Service Request, and how to escalate a Service Request.

The ultimate value of the Oracle Premier Support investment is in utilizing all of the assets in the inventory of resources to prevent Service Requests in the first place.

Nobody is being paid to create and manage Service Requests.  Who would want to do that?

Review and take advantage of everything that's already available.

  1. Ongoing Training and Education from MOS Experts and Cloud Customer Connect - to prevent Service Requests
  2. Proactive Email Alerts and Notifications  - to prevent Service Requests
  3. Customized Dashboards and Product Powerviews in MOS  - to prevent Service Requests
  4. A robust, constantly growing Knowledge Base - to prevent Service Requests
  5. Structured Information Centers of the Best of the Best Support Practices - to prevent Service Requests
  6. Dozens and Hundreds of Tools, Scripts and Diagnostics - to prevent Service Requests
  7. MOS Communities and Cloud Customer Connect Forums - to prevent Service Requests
  8. Subject Matter Expert Networks in our Blogs, Twitter, Newsletters and Events - to prevent Service Requests

.....and of course, there's the ability to create and manage Service Requests.

"I'm FOR the customer"
-Chris Warticki
Global Customer Success Management

Strangler Pattern For Linked Oracle DB - how could it be done?

Tom Kyte - Wed, 2017-11-15 15:46
Hi, I'm not a big expert with Oracle databases, so please forgive my fumbling description. We're working with a system where there are two databases connected using a database link: A -> queries and RPCs -> B. There is a need to intr...
Categories: DBA Blogs

Storing a calculated value in a variable at run time without using triggers.

Tom Kyte - Wed, 2017-11-15 15:46
Is there a way to store calculated data without the use of triggers? With some input data, calculations will be done and a variable will store the output data. When the input data is modified, the variable will hold another calculated value. Is ther...
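One trigger-free option, assuming 11g or later and a calculation that can be expressed in SQL, is a virtual column - it is evaluated when queried, so it always reflects the current input data (table and expression are illustrative):

create table order_lines (
  qty        number,
  unit_price number,
  line_total number generated always as (qty * unit_price) virtual
);

insert into order_lines (qty, unit_price) values (3, 10);
select line_total from order_lines;   -- 30
update order_lines set qty = 5;
select line_total from order_lines;   -- 50, recalculated automatically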
Categories: DBA Blogs

Is using %ROWTYPE better than fetching 75% of column values into separate variables

Tom Kyte - Wed, 2017-11-15 15:46
Hi Tom, I am using multiple variables to get column values from a table (obviously I am using an INTO clause and getting one row with filter criteria). I can do this by using %ROWTYPE as well. That way it will fetch all of the columns. Now, pro...
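For comparison, a minimal sketch of the %ROWTYPE variant (table name and filter are assumptions):

declare
  l_emp employees%rowtype;
begin
  select *
    into l_emp
    from employees
   where employee_id = 100;
  dbms_output.put_line(l_emp.employee_name);
end;
/

Fetching the whole row keeps the code stable when columns are added, at the price of transferring columns the program may never use.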
Categories: DBA Blogs

Virtual Private Database

Tom Kyte - Wed, 2017-11-15 15:46
Hello Tom, I know you can use VPD to restrict data horizontally - that is, restrict rows based on where clauses. However, is there a way to automatically hide columns from users using VPD? Let's say there is a table called MBR(MBR_ID NUMBER, LNAM...
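VPD does support this through column-sensitive policies. A sketch using DBMS_RLS.ADD_POLICY with SEC_RELEVANT_COLS (schema, table, and policy function names are illustrative; the policy function returns the predicate that decides which rows are affected):

begin
  dbms_rls.add_policy(
    object_schema         => 'APP',
    object_name           => 'MBR',
    policy_name           => 'MBR_HIDE_LNAME',
    function_schema       => 'APP',
    policy_function       => 'mbr_vpd_fn',
    sec_relevant_cols     => 'LNAME',
    sec_relevant_cols_opt => dbms_rls.all_rows);  -- keep all rows, show LNAME as NULL where the predicate fails
end;
/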
Categories: DBA Blogs

merge the cursor output of multiple stored procedures into one result

Tom Kyte - Wed, 2017-11-15 15:46
Hello Tom, can you please tell me how I can merge the output of 6 stored procedures into one result? The output of each stored procedure is a cursor which holds n records (the records' data structure is the same). I have to merge the data and ...
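One common approach is to stage the rows: fetch each procedure's ref cursor in a loop, insert the rows into a global temporary table, then open a single cursor over the combined rows. A sketch - the record structure and the get_cursor wrapper are assumptions:

create global temporary table merged_rows (
  id   number,
  name varchar2(100)
) on commit delete rows;

declare
  l_cur  sys_refcursor;
  l_id   number;
  l_name varchar2(100);
begin
  for i in 1 .. 6 loop
    get_cursor(i, l_cur);   -- hypothetical wrapper that opens the i-th procedure's cursor
    loop
      fetch l_cur into l_id, l_name;
      exit when l_cur%notfound;
      insert into merged_rows values (l_id, l_name);
    end loop;
    close l_cur;
  end loop;
  -- now open one ref cursor over merged_rows and return it to the caller
end;
/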
Categories: DBA Blogs

Need a single update query for the question below

Tom Kyte - Wed, 2017-11-15 15:46
I've a table locations and it contains the data below:

loc_id   location_name
101      newyork
102      losangels
103      chicago
104      boston
105      dallas

Now, is it possible to write a single query to update all records like below? loc_id location_name 101...
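The expected output in the question is cut off, but if the goal is to change several names in one statement, a single UPDATE with a CASE expression does it (the new values here are made up for illustration):

update locations
   set location_name = case loc_id
                         when 101 then 'new york'
                         when 102 then 'los angeles'
                         else location_name
                       end
 where loc_id in (101, 102);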
Categories: DBA Blogs

How Long Can I Get Support for a Specific Java Update?

Steven Chan - Wed, 2017-11-15 15:11

Support timelines for Oracle products can be tricky to understand.  The time that an overall product release gets updates is governed by the dates in the Oracle Lifetime Support Policies.

EBS users have 12 months to upgrade to the latest Fusion Middleware component patchsets, and 24 months to upgrade to the latest database components. These are called grace periods.

For the grace-period dates of specific Database or Fusion Middleware patchsets, see the patching policies published on My Oracle Support.

What are the support dates for different Java releases?

Extended Support for Java SE 6 ends on December 31, 2018. E-Business Suite customers must upgrade their servers to Java SE 7 before that date.

Premier Support for Java SE 7 runs to July 31, 2019. Extended Support for Java SE 7 runs to July 31, 2022. 

Do Java updates have grace periods?

No. Support for Java updates works differently from that of other Oracle products.  New bug fixes and security updates are always delivered on top of the latest Java update available at the time.

This policy applies to Java running on EBS servers, as well as JRE and Java Web Start running on end-user client desktops.

For example:

As of the date that this article was published, the latest Java SE 7 available is Update 1.7.0_161. 

If you report an issue today with an earlier Java SE 7 update such as Java 7 Update 1.7.0_10, you will be asked to apply 1.7.0_161 and attempt to reproduce the issue.

If the issue does not reproduce, then the solution will be to apply 1.7.0_161 to all of your end-user desktops.

If the issue does reproduce, then Oracle Java Support will log a bug and fix the issue on a Java release later than 1.7.0_161.


Categories: APPS Blogs

Conditional Navigation based on Queries in Oracle Visual Builder Cloud Service

Shay Shmeltzer - Wed, 2017-11-15 13:21

A couple of threads on the Oracle Visual Builder Cloud Service forum asked about writing code in buttons in VBCS that compares values entered on a page with data in business objects and performs conditional navigation based on the values. In a past blog I showed the code needed for querying VBCS objects from the UI, but another sample never hurts, so here is another demo...

For this demo I'm going to show how to do it in a login flow - assuming you have a business object that keeps usernames and passwords, and you want to develop a page where a user types a user/pass combination and you need to verify that this is indeed a valid combination that exist in the business object.

(In reality, if you want to do user authentication in VBCS you should use the built-in security frameworks and not code it this way. I'm just using this as an example.)

Here is a quick video of the working app - with pointers to the components detailed below.

The first thing you'll do is create the business object that holds the user/pass combination - note that in the video, since "user" is a reserved word, the ID for the field is actually "user_" - which is what we'll use in our code later on.

 

Next you'll want to create a new page where people can enter a user/pass combination. To do that, create a new page of type "Create" - this page requires you to associate it with a business object, so create a new business object. We won't actually keep data in this new business object. In the video and the code, this business object is called "query".

Now design your page and add the user and pass fields, creating parallel fields in the query business object (quser and qpass in the video). You can then remove the "Save" button that won't be used, and instead add a "validate" button.

For this new button we'll define a new custom action that will contain custom JavaScript code. Custom code should return either a success state - using resolve(); - or failure - using reject();

Based on the success or failure you can define the next action in the flow - in our case we are showing either a success or error message:

(screenshot: the success and failure action flows)

Now let's look at the custom JavaScript code:

require(['operation/js/api/Conditions', 'operation/js/api/Operator'], function (Conditions, Operator) {
    var eo = Abcs.Entities().findById('Users');
    var passid = eo.getProperty('pass');
    var userid = eo.getProperty('user_');
    var condition = Conditions.AND(
        Conditions.SIMPLE(passid, Operator.EQUALS, $QueryEntityDetailArchetypeRecord.getValue('qpass')),
        Conditions.SIMPLE(userid, Operator.EQUALS, $QueryEntityDetailArchetypeRecord.getValue('quser'))
    );
    var operation = Abcs.Operations().read(
        { entity : eo,
          condition : condition });
    operation.perform().then(
        function (operationResult) {
            if (operationResult.isSuccess()) {
                operationResult.getData().forEach(function (oneRecord) {
                    resolve("ok");
                });
            }
            reject("none");
        }).catch(function (operationResult) {
            if (operationResult.isFailure()) {
                // Insert code you want to perform if fetching of records failed
                alert('didnt worked');
                reject("error");
            }
        });
});

Explaining the code:

  • Lines 2-4 - getting the pointers to the business object and the fields in it using their field id.
  • Lines 5-8 - defining a condition with AND - referencing the values of the fields on the page
  • Lines 9-11 - defining the operation to read data with the condition from the business object
  • Line 12 - executing the read operation
  • Line 14-18 - checking if a record has been returned and if it has then we are ok to return success - there was a user/pass combination matching the condition.
  • Line 19 - otherwise we return with a failure.

One recommendation: while coding JavaScript, use a good code editor that highlights matching open/close brackets - it will save you a lot of time.

For more on the VBCS JavaScript API that you can use for accessing business components see the doc.

Categories: Development
