Feed aggregator

Delete a record after it is inserted

Tom Kyte - Fri, 2017-06-30 08:06
I want to delete an inserted record automatically, 10 minutes after insertion, by using a trigger. Is it possible? If it is not possible, what is another way to do it? Thank you.
Categories: DBA Blogs

Get DDL table without segment attributes but with tablespace

Tom Kyte - Fri, 2017-06-30 08:06
Hi all, I would like to get DDL for a table with just tablespace info (without the remaining "Segment attributes" info). For example, <code>CREATE TABLE "SQL_ZRRMMMSKTDVROYPXNSHFKJXCB"."MY_TABLE" ( "COLUMN1" VARCHAR2(1), "COLUMN2" NUMB...
Categories: DBA Blogs

Bulk insert with unique sequence number Country wise

Tom Kyte - Fri, 2017-06-30 08:06
Hi Connor/Chris, I have a stored procedure logic written which first insert data into table using BULK LIMIT, then for sequence number we are using ROWNUM to get the uniform sequence : n_card_rowcount := 0; LOOP ...
Categories: DBA Blogs

Activities after Weekly Data Inserts on Partition Table

Tom Kyte - Fri, 2017-06-30 08:06
Hi, We have Archival DB which holds historical application data. Weekly basis data from Primary DB gets copied to Archival DB Partition tables. Do I need to perform any activities post data movement in Partition table like Gather Table/Index S...
Categories: DBA Blogs

how to disable auto login wallet

Tom Kyte - Fri, 2017-06-30 08:06
Hi, In a RAC database, I have a open keystore (wallet) with AUTOLOGIN. I would like to disable the AUTO LOGIN, but I am not sure of the correct procedure for that. select * from gv$encryption_wallet; WRL_TYPE WRL_PARAMETER ...
Categories: DBA Blogs

ORD_IMAGE vs ORDImage

Tom Kyte - Fri, 2017-06-30 08:06
Hi Oracle Team, After reading through the docs, I still cannot figure out how the two are different: 1. Multimedia ORD_IMAGE PL/SQL Package (ORD_IMAGE with an underscore in the name) 2. Multimedia ORDImage Object Type (ORDImage) Functionally...
Categories: DBA Blogs

PeopleSoft and Adaptive Query Optimization in Oracle 12c

David Kurtz - Fri, 2017-06-30 07:44
Adaptive Query Optimization is a significant feature in Oracle 12c. Oracle has made lots of information available on the subject (see https://blogs.oracle.com/optimizer/oracle-database-12c-is-here).
Adaptive Query Optimization is a set of capabilities that enable the optimizer to make run-time adjustments to execution plans and discover additional information that can lead to better statistics… There are two distinct aspects in Adaptive Query Optimization: adaptive plans, which focuses on improving the execution of a query, and adaptive statistics, which uses additional information to improve query execution plans.

from Optimizer with Oracle Database 12c Release 2, White Paper June 2017 (see http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-optimizer-with-oracledb-12c-1963236.pdf)

Adaptive Statistics include:
  • SQL plan directives: created and used to adapt SQL execution plans.
  • Statistics feedback: cardinality from table joins is used to improve SQL execution plans.
  • Performance feedback: improves the degree of parallelism chosen when PARALLEL_DEGREE_POLICY is set to ADAPTIVE
  • Adaptive dynamic sampling for parallel execution: dynamic statistics are gathered for certain parallel queries to improve cardinality estimates. The sample size is determined automatically.
(Parallelism is not used by default within PeopleSoft because all objects are marked NOPARALLEL by Application Designer).
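As an aside, the SQL plan directives that adaptive statistics creates can be inspected in the data dictionary. The query below is only an illustrative sketch; the SYSADM filter assumes a PeopleSoft schema of that name:

```sql
-- List SQL plan directives recorded against objects in the PeopleSoft schema
SELECT d.directive_id, d.type, d.state, d.reason,
       o.owner, o.object_name
FROM   dba_sql_plan_directives  d
JOIN   dba_sql_plan_dir_objects o ON o.directive_id = d.directive_id
WHERE  o.owner = 'SYSADM'
ORDER  BY d.last_used;
```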
This additional information should help the optimizer make better decisions, but it comes at the price of making the database do more work during SQL parse. Unfortunately, PeopleSoft makes extensive use of dynamically generated SQL, often with literal values leading to large amounts of parse. Even a small additional overhead during SQL parse can result in a significant overhead for the entire system. Adaptive Plans include:
  • Nested loop join/Hash join selection: the optimizer chooses between nested loops or hash joins at query runtime.
  • Adaptive parallel distribution method: the parallel distribution method is determined at runtime.
  • Star transformation bitmap pruning: certain bitmap indexes may be removed from the SQL execution plan at runtime if selectivity is significantly poorer than the estimate.
When joining tables, making the best choice between nested loops and hash joins is often critical for good performance. This feature could be of significant benefit to many applications, not just PeopleSoft. Let's take, as an example, this SQL generated by nVision on a 12c release 1 database. Due to the literal values it is very unlikely that the statement will be executed a second time.
SELECT SUM(A.POSTED_TOTAL_AMT) FROM sysadm.PS_LEDGER A WHERE A.LEDGER ='XXXXXXXXXX' AND A.FISCAL_YEAR=2015 AND (A.ACCOUNTING_PERIOD 
BETWEEN 1 AND 12 OR A.ACCOUNTING_PERIOD BETWEEN 998 AND 999) AND A.ACCOUNT IN ('40000001','40000002','40000003','40000004') AND
(A.DEPTID ='001A' OR A.DEPTID ='002A' OR A.DEPTID ='003A' OR A.DEPTID ='004A' OR A.DEPTID ='005B' OR A.DEPTID ='006A' OR A.DEPTID
='007A') AND A.CURRENCY_CD ='USD' AND A.STATISTICS_CODE=' '
This recursive SQL was generated by dynamic statistics processing during SQL parse. Such SQL is easily identifiable by the /* DS_SVC */ comment.
SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring
optimizer_features_enable(default) no_parallel result_cache(snapshot=3600) */
SUM(C1)
FROM (
SELECT /*+ qb_name("innerQuery") NO_INDEX_FFS( "A") */ 1 AS C1
FROM "SYSADM"."PS_LEDGER" SAMPLE BLOCK(82.1847, 8) SEED(12) "A"
WHERE ("A"."FISCAL_YEAR"=2015) AND ("A"."CURRENCY_CD"='USD') AND
("A"."ACCOUNT"='40000001' OR "A"."ACCOUNT"='40000002' OR
"A"."ACCOUNT"='40000003' OR "A"."ACCOUNT"='40000004') AND
("A"."ACCOUNTING_PERIOD"<=170 AND "A"."ACCOUNTING_PERIOD">=1 OR
"A"."ACCOUNTING_PERIOD">=901 AND "A"."ACCOUNTING_PERIOD"<=905) AND
("A"."DEPTID"='001A' OR "A"."DEPTID"='002A' OR "A"."DEPTID"='003A' OR
"A"."DEPTID"='004A' OR "A"."DEPTID"='005B' OR "A"."DEPTID"='006A' OR
"A"."DEPTID"='007A') AND ("A"."LEDGER"='XXXXXXXXXX') AND
("A"."STATISTICS_CODE"=' ')) innerQuery
It is easy to see that you wouldn't need too many additional queries like this to have a significant impact on system performance. In the early days of Oracle 12c many PeopleSoft customers reported just such an experience. It is no surprise, therefore, that Oracle recommends disabling adaptive optimization in 12c.
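One rough way to gauge how much of this dynamic-sampling work an instance is doing is to look for the /* DS_SVC */ statements in the shared pool. This is only a sketch, not an official diagnostic; v$sql reports elapsed time in microseconds, converted to seconds here:

```sql
-- Count DS_SVC statements currently cached and the time spent in them
SELECT COUNT(*)                         AS ds_svc_statements,
       ROUND(SUM(elapsed_time)/1e6, 1)  AS elapsed_seconds
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /* DS_SVC */%';
```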
optimizer_adaptive_features = FALSE - After upgrading to Oracle Database 12c, many PeopleSoft customers have noticed overall performance degradation which is related to Oracle 12c Optimizer Adaptive Feature (OAF). It is recommended that this value be disabled in all Oracle Databases running PeopleSoft Applications.

from E-ORA Advice for the PeopleSoft Oracle DBA (Doc ID 1445965.1) (see https://support.oracle.com/epmos/faces/DocumentDisplay?id=1445965.1)

In Oracle 12c release 1, the single parameter OPTIMIZER_ADAPTIVE_FEATURES controls all the adaptive optimization features. However, in Oracle 12c release 2 this is replaced by 2 new parameters: OPTIMIZER_ADAPTIVE_PLANS (which defaults to TRUE) and OPTIMIZER_ADAPTIVE_STATISTICS (which defaults to FALSE).
Significantly, this enhancement is also available in 12c release 1. There are two patches available:
  • patch for bug# 22652097 introduces the two parameters OPTIMIZER_ADAPTIVE_PLANS and OPTIMIZER_ADAPTIVE_STATISTICS and removes the parameter OPTIMIZER_ADAPTIVE_FEATURES.
  • patch for bug# 21171382 disables the automatic creation of extended statistics unless the optimizer preference AUTO_STATS_EXTENSIONS is set to ON.
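With the two new parameters in place (natively on 12.2, or via patch 22652097 on 12.1), the configuration discussed in this post might look like the sketch below. The values shown are the 12.2 defaults rather than a universal recommendation:

```sql
-- Retain adaptive plans (e.g. runtime nested loop/hash join selection),
-- but leave adaptive statistics disabled to avoid the extra parse-time
-- overhead on PeopleSoft's largely non-sharable SQL.
ALTER SYSTEM SET optimizer_adaptive_plans      = TRUE  SCOPE=BOTH;
ALTER SYSTEM SET optimizer_adaptive_statistics = FALSE SCOPE=BOTH;

-- Check the resulting settings
SELECT name, value, isdefault
FROM   v$parameter
WHERE  name LIKE 'optimizer_adaptive%';
```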
Here is an example of another query from nVision, this time on a 12.1 system where the two patches have been applied. The optimizer has switched from a nested loop to a hash join in an adaptive plan. In this particular case, this was a good decision and the hash join outperformed the nested loop.
SELECT A.AFFILIATE,SUM(A.POSTED_TOTAL_AMT) FROM PS_XX_XXXXXXX_VW A,
PSTREESELECT10 L1 WHERE A.LEDGER='XXXXXXX' AND A.FISCAL_YEAR=2016 AND
A.ACCOUNTING_PERIOD BETWEEN 0 AND 12 AND L1.SELECTOR_NUM=63382 AND
A.ACCOUNT=L1.RANGE_FROM_10 AND A.BUSINESS_UNIT='XXXXXX' AND
A.CURRENCY_CD='USD' AND A.STATISTICS_CODE=' ' GROUP BY A.AFFILIATE

Plan hash value: 4041461164

--------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 86 (100)| | | |
| 1 | HASH GROUP BY | | 1 | 67 | 86 (3)| 00:00:01 | | |
| 2 | HASH JOIN | | 9 | 603 | 85 (2)| 00:00:01 | | |
|- 3 | NESTED LOOPS | | 9 | 603 | 85 (2)| 00:00:01 | | |
|- 4 | STATISTICS COLLECTOR | | | | | | | |
| 5 | PARTITION RANGE ITERATOR | | 9 | 459 | 84 (2)| 00:00:01 | 16 | 28 |
| 6 | INLIST ITERATOR | | | | | | | |
| 7 | PARTITION LIST ITERATOR | | 9 | 459 | 84 (2)| 00:00:01 |KEY(I) |KEY(I) |
| 8 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| PS_LEDGER | 9 | 459 | 84 (2)| 00:00:01 | KEY | KEY |
| 9 | INDEX RANGE SCAN | PSCLEDGER | 117 | | 52 (2)| 00:00:01 | KEY | KEY |
|- 10 | PARTITION RANGE SINGLE | | 1 | 16 | 1 (0)| 00:00:01 | 63382 | 63382 |
|- 11 | INDEX RANGE SCAN | PSAPSTREESELECT10 | 1 | 16 | 1 (0)| 00:00:01 | 63382 | 63382 |
| 12 | PARTITION RANGE SINGLE | | 23 | 368 | 1 (0)| 00:00:01 | 63382 | 63382 |
| 13 | INDEX RANGE SCAN | PSAPSTREESELECT10 | 23 | 368 | 1 (0)| 00:00:01 | 63382 | 63382 |
--------------------------------------------------------------------------------------------------------------------------------------

Note
-----
- this is an adaptive plan (rows marked '-' are inactive)
Conclusion

Adaptive Optimization is a great feature that does deliver real benefits but, like many Oracle features, it works best on an application that has been developed in the way Oracle recommends applications be developed. Since the default value for OPTIMIZER_ADAPTIVE_STATISTICS is FALSE, PeopleSoft is clearly not alone in not being such an application.
Given PeopleSoft's propensity to generate non-sharable SQL, adaptive statistics are not a viable option and should not be enabled, and in 12c release 1 that has meant sacrificing the benefits of adaptive plans as well. However, patch 22652097 means that we don't have to wait for release 2: we can now choose which feature to use.
Every system is different; even every PeopleSoft system is different. However, in my experience with PeopleSoft so far, I have not seen Oracle's runtime switch of join method cause a problem. It certainly isn't a panacea either: when the join method has changed, it hasn't always made a big difference, but it has been a net benefit. I would certainly recommend applying these patches.

Video: How Do You Get from a Tandy 1000 to the Moon?

OTN TechBlog - Fri, 2017-06-30 07:00

During my first interview with Sean Phillips, recorded at Oracle Code in Washington DC, he went into technical detail about the role JavaFX played in his work for NASA on an application that plots space flight trajectories. Cool gig, right? So when I had another chance to talk to Sean, this time at last week's  Oracle Code event in Atlanta, I wanted to focus on his career, tracking his own trajectory from his first experiments with a Tandy 1000 computer to his current work helping spacecraft to get to the farthest reaches of our solar system. That's what you'll get in this video. After a brief recap of his Oracle Code Atlanta keynote session, Sean describes when he first caught the software development bug, and brings us up to the present and the NASA projects in which he is involved. Watch the video!


after 27+ years in Oracle land I am forced to patch sqlplus

Amis Blog - Fri, 2017-06-30 06:25

When starting to use sqlplus 12.2 I noticed that my SQL prompt was not changing to what login.sql told it to be. This did not happen in sqlplus 12.1 or lower versions.
Maybe this is a bug, maybe a new feature, I thought. The behaviour of sqlplus in 12.2 has indeed changed according to the documentation: sqlplus 12.2 no longer looks in the local directory (i.e. where you start sqlplus) for a login.sql file to run, but only looks for .sql files in directories indicated by environment variables (SQLPATH on Windows and ORACLE_PATH on Linux). However, even when setting these environment variables to the proper values, sqlplus still did not run my login.sql automatically. OK, then I'll create an SR with Oracle Support. They confirmed that this odd behaviour is indeed a bug and that a patch is available for sqlplus: PATCH:25804573. So now finally I have a reason to patch sqlplus!
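For anyone hitting the same behaviour, a minimal setup along these lines reproduces the scenario. The prompt format and directory are just examples; _USER and _CONNECT_IDENTIFIER are predefined SQL*Plus substitution variables:

```sql
-- login.sql, placed in a directory on ORACLE_PATH (Linux) or SQLPATH (Windows),
-- e.g. after: export ORACLE_PATH=$HOME/sql
SET SQLPROMPT "_USER'@'_CONNECT_IDENTIFIER> "
SET LINESIZE 200
SET PAGESIZE 100
```

Before the patch, 12.2 sqlplus ignored this file at startup even with the environment variable set; with PATCH:25804573 applied it is picked up again.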

The post after 27+ years in Oracle land I am forced to patch sqlplus appeared first on AMIS Oracle and Java Blog.

Analytics on the edge?

DBMS2 - Fri, 2017-06-30 03:27

There’s a theory going around to the effect that:

  • Compute power is and will be everywhere, for example in cars, robots, medical devices or microwave ovens. Let’s refer to these platforms collectively as “real-world appliances”.
  • Much more data will be created on these platforms than can reasonably be sent back to centralized/cloudy servers.
  • Therefore, cloud-centric architectures will soon be obsolete, perhaps before they’re ever dominant in the first place.

There’s enough truth to all that to make it worth discussing. But the strong forms of the claims seem overblown.

1. This story doesn’t even make sense except for certain new classes of application. Traditional business applications run all over the world, in dedicated or SaaSy modes as the case may be. E-commerce is huge. So is content delivery. Architectures for all those things will continue to evolve, but what we have now basically works.

2. When it comes to real-world appliances, this story is partially accurate. An automobile is a rolling network of custom Linux systems, each running hand-crafted real-time apps, a few of which also have minor requirements for remote connectivity. That’s OK as far as it goes, but there could be better support for real-time operational analytics. If something as flexible as Spark were capable of unattended operation, I think many engineers of real-world appliances would find great ways to use it.

3. There’s a case to be made for something better yet. I think the argument is premature, but it’s worth at least a little consideration. 

There are any number of situations in which decisions are made on or about remote systems, based on models or rules that should be improved over time. For example, such decisions might be made in:

  • Machine vision or other “recognition”-oriented areas of AI.
  • Detection or prediction of malfunctions.
  • Choices as to what data is significant enough to ship back upstream.

In the canonical case, we might envision a system in which:

  • Huge amounts of data are collected and are used to make real-time decisions.
  • The models are trained centrally, and updated remotely over time as they are improved.
  • The remote systems can only ship back selected or aggregated data to help train the models.

This all seems like an awkward fit for any common computing architecture I can think of.

But it’s hard to pin down important examples of that “canonical” case. The story implicitly assumes:

  • A model is widely deployed.
  • The model does a decent job but not a perfect one.
  • Based on its successes and failures, the model gets improved.

And now we’re begging a huge question: What exactly is there that keeps score as to when the model succeeds and fails? Mathematically speaking, I can’t imagine what a general answer would be like.

4. So when it comes to predictive models executed on real-world appliances I think that analytic workflows will:

  • Differ for different categories of applications.
  • Rely in most cases on simple patterns of data movement, such as:
    • Stream everything to central servers and sort it out there, or if that’s not workable …
    • … instrument a limited number of test nodes to store everything, and recover the data in batch for analysis.
    • Update models only in the timeframes in which you're doing a full app update/refresh.

And with that much of the apparent need for fancy distributed analytic architectures evaporates.

5. Finally, and notwithstanding the previous point: Across many use cases, there's some kind of remote log data being shipped back to a central location. It may be the complete log. It may be periodic aggregates. It may include only what the edge nodes regard as significant events. But something is getting shipped home.

The architectures for shipping, receiving and analyzing such data are in many cases immature. That’s obvious if there’s any kind of streaming involved, or if analysis is done in Spark. Ditto if there’s anything we might call “non-tabular business intelligence”. As this stuff matures, it will in many cases fit very well with today’s cloud thinking. But in any case — it needs to mature.

Truth be told, even the relational case is immature, in that it can easily rely on what I called:

data warehouses (perhaps really data marts) that are updated in human real-time

That quote is from a recent post about Kudu, which:

  • Is designed for exactly that use case.
  • Went GA early this year.

As always, technology is in flux.


Categories: Other

Learning New Things : Networking and Community

Tim Hall - Fri, 2017-06-30 03:04

I’ve mentioned networking and community a number of times over the years. Most recently in a post called Community and the Hive Mind, where I talk about the impact of community on learning.

I think it’s important that you become as self sufficient as possible, but that doesn’t mean you should ignore what’s going on around you. Why would I bang my head against a brick wall for days if I can reach out on Twitter and get a response in minutes from someone who is better than me? As I said in that linked post

“This is not about expecting help. This is not about trying to offload my responsibilities onto others. It’s mostly about getting a second opinion and the odd bit of advice. More importantly, the other people in the community need to know that you’ve got their back too. This has to be a two-way street!”

Networking and getting involved in the community is a great way to increase your access to information, which makes learning much easier.

From a selfish perspective, the more you contribute, the more opportunities seem to come your way. It kind-of feels like I’m living a charmed life at times, but all these opportunities come from being actively involved and putting yourself out there. To quote Samuel Goldwyn,

“The harder I work, the luckier I get.”

If you want to start getting involved in the community you might want to consider reading my posts on writing tips and public speaking.

Check out the rest of this series here.

Cheers

Tim…

Learning New Things : Networking and Community was first posted on June 30, 2017 at 9:04 am.

Usability, Product Management, and LinkedIn - a rant

Abhinav Agarwal - Fri, 2017-06-30 02:09
LinkedIn began as a professional networking site, has evolved into a social media behemoth, and yet has managed to maintain and sharpen its focus on the professional space. That may, in part, explain why, in 2016, Microsoft chose to put down more than $26 billion Washingtons to buy LinkedIn.
While both LinkedIn's web site and mobile app have undergone substantial changes over the years, and are a far cry from the spartan look both sported just a few years ago, I wanted to call out one peculiarity - call it an eccentricity - that the site has. I would call it a glaring UX and product management miss, if you will.
Let me elaborate.
[Image: email from LinkedIn in June 2014, announcing the launch of the publish feature.]
Sometime in April 2014, LinkedIn introduced a feature that allowed users - by invitation at first, and for everyone later - to publish their articles on LinkedIn. This feature is now a great source of user-generated content for LinkedIn, helping drive more traffic to its website. I have written a few over the last couple of years, and it's a great way to put my thoughts on relevant topics in front of a relevant audience.
But Where Are My Articles?

From the LinkedIn home page, try finding a way to navigate to your articles - published or in draft mode. Go ahead, I will wait while you wander on the home page.
You can't.
Let me show you. See the screenshot below. That is the home page I see when I go to LinkedIn.
  1. The menu at the top contains no links to go to my articles.
  2. I can click the 'Write an article' button and it will take me to the LinkedIn Publishing page, and I can start penning pristine prose there.
  3. I can click the headline and view analytics on my articles or shares.

But I still cannot view a list of my articles. I can't.

  • If I go to the Publishing page, and if I click the 'More' dropdown, then voila, I can see that I have finally found what I was looking for. So will you too.
Why? Why make it so darn tough to find your own articles?
  • By design? Unlikely.
  • Oversight? Likely. A miss, from both product management and UX. Why is an important feature such as this so difficult to find? It is not even available from the home page. Why is no one talking about discoverability? What about the scent of information? Nielsen, Cooper, Pirolli, anyone?
Solution? Fix it. Fast.



[this post first appeared in LinkedIn on June 29th, 2017]

© 2017, Abhinav Agarwal. All rights reserved.

Read Oracle Database 12.2 New Features Manual

Bobby Durrett's DBA Blog - Thu, 2017-06-29 16:53

I just finished reading the Oracle database 12.2 new features manual. I have postponed looking at 12.2 until now because for a long time 12.2 was not available for download even though it was available in Oracle’s cloud. Once the download became available I installed 12.2 in a test virtual machine but did not get any further than that. But, the first quarterly update of 12.2 is supposed to come out soon so I thought that I would at least read about the new features to start learning about it.

I wrote earlier about reading the Snowflake documentation. Reading the Oracle new features guide made me wonder about cases where both products are working on the same goals. Oracle 12.2 seems to have some new features for data analysis or analytics that relate to approximate results. Maybe these features are similar to Snowflake’s cardinality approximation routines? There are also a lot of new features related to JSON, which is a key Snowflake reality.

There are In-Memory database enhancements which is no surprise since this is a newer feature. Similarly, there are a ton of CDB and PDB database feature enhancements since multitenant was a major new feature in 12.1. Of course multitenant connects to Oracle’s cloud strategy so it is no surprise that Oracle included cloud enabling features in 12.2.

There are a ton of changes to Oracle features that I don’t use like Data Guard. I’ve seen some GGS but never used Data Guard. I recently saw a presentation at a user group meeting, I think it was at AZORA, where a presenter discussed the new 12c features of Data Guard. That user meeting plus the 12.2 new features manual together make me think that Oracle Data Guard has a massive amount of features that I am not aware of and not using.

I love the way Oracle keeps expanding their online operations. I think that 12.2 has a lot of features that would really help data warehouses. I’m also really excited about 12.2’s improvements in cross-platform migration. I have a couple of large databases that I need to migrate to Linux from HP-UX. 12.2 may have features that will help with these migrations. I’m really excited about the Data Pump parallel import of Metadata. It looks like our cross-platform database moves would need to load tens of thousands of objects. It would be awesome to do that metadata load in parallel. There are also a lot of enhancements related to Exadata. It would be nice to run a data warehouse on Exadata on 12.2.

Oracle’s sharding features make me think of Oracle trying to compete with other databases including cloud and open source systems. Oracle has everything but the kitchen sink all under one database. But, maybe less expensive databases with a subset of the features that Oracle has will work well in particular situations. It’s hard to know. I’ve been working with Oracle for 20 plus years so I think it helps to use a tool that I’m familiar with and there is no doubt that Oracle has many great features. But I wonder if Oracle is so expensive for some use cases that it makes sense to use less expensive and less capable software that fits specific needs.

I’m sorry if this post is kind of stream of consciousness writing. I reviewed the parts of the manual where I underlined something or made notes. I was pretty negative about Oracle 12.2 when Oracle announced that it would only be available in the cloud. I had a free cloud account that I could have used to mess with 12.2 but I knew that we were not going to use it until I could download it. The fact that 12.2 is available for download and that Oracle is starting to release quarterly patch updates motivated me to start looking at it. I have only just begun. I only read the new features manual. But, I am a lot more positive about 12.2 than before. It would be great if the cross-platform features were all that I hope that they are. That would help meet a real need. Anyway, I enjoyed reading about the new features in 12.2. It makes me want to get going with it to see if it can really help us.

Bobby

Categories: DBA Blogs

Parameterizing Jmeter for testing APEX

Kris Rice - Thu, 2017-06-29 14:56
A while ago we needed to stress a system by using the APEX Brookstrut demo application. The obvious choice for this was JMeter. How to set up JMeter to record web traffic by becoming a web proxy is a well-known and well-documented process. Anyone that hasn't seen it, check this PDF and you can see how easy it is. There were a couple of issues to get around. First, importing the application again and

Error while creating Materialized view : ORA-14071 : invalid option for an index used to enforce a constraint

Tom Kyte - Thu, 2017-06-29 13:46
Hi Tom, Please help me with the following. I'm trying to create a materialized view with following syntax : <code>CREATE MATERIALIZED VIEW TEST_ON_LOCAL_MV TABLESPACE "USERS" PCTFREE 10 PCTUSED 40 INITRANS 2 STORAGE ( INITIAL...
Categories: DBA Blogs

How can i grant execute to a procedure inside a package?

Tom Kyte - Thu, 2017-06-29 13:46
Hello everyone. I'm trying to do this. I have a package where are declared 2 procedure <code>CREATE OR REPLACE PACKAGE PACK_CT AS PROCEDURE A; PROCEDURE B; END PACK_CT; /</code> And I have two users, let's call it User1 and User2....
Categories: DBA Blogs

ORA-00020: maximum number of processes exceeded

Tom Kyte - Thu, 2017-06-29 13:46
Hi We have an APEX 5.0.3.00.03 app running in Oracle 12c (12.1.0.2.0 64bit) Recently the system crashed with ORA-00020: maximum number of processes (300) exceeded. DBAs restarted the database setting the value to 600 On restart I logged onto th...
Categories: DBA Blogs

Background process of View?

Tom Kyte - Thu, 2017-06-29 13:46
When we retrieve data from either a view or a table, what happens in the background? Please explain.
Categories: DBA Blogs

How to assign one ref cursor to another ref cursor and return the same to calling procedure

Tom Kyte - Thu, 2017-06-29 13:46
Hello Team, Firstly Thank You all for your efforts and quick responses. ---------------------------------------------------------------------------- I have one procedure "abc" which return sys_refcursor fetching data from another database us...
Categories: DBA Blogs

variable pivot XML to readable text in a select

Tom Kyte - Thu, 2017-06-29 13:46
I have the following query SELECT * FROM (SELECT 1 Cnt, TO_NUMBER (TO_CHAR (A.Hiredate, 'YYYY')) Yr FROM Emp A) PIVOT xml (SUM (Cnt) FOR Yr IN (any)) However I want to convert the XML which can have a variable ...
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator