Feed aggregator

OpenJDK 9: JShell - an interactive java interpreter shell | builtin commands

Dietrich Schroff - Wed, 2017-07-12 14:33
One of the new features of Java 9 is jshell (JEP 222).

On my Ubuntu system the installation was quite easy:
# apt-get install openjdk-9-jdk-headless
and you can find it with:
$ ls /usr/lib/jvm/java-9-openjdk-amd64/bin/
idlj       jcmd    jmap        jstatd       schemagen
jar        jdb     jmod        keytool      serialver
jarsigner  jdeps   jps         orbd         servertool
java       jhsdb   jrunscript  pack200      tnameserv
javac      jimage  jsadebugd   policytool   unpack200
javadoc    jinfo   jshell      rmic         wsgen
javah      jjs     jstack      rmid         wsimport
javap      jlink   jstat       rmiregistry  xjc
 (in the third column, sixth row: jshell)

After startup, jshell greets you with this prompt:

$ /usr/lib/jvm/java-9-openjdk-amd64/bin/jshell
|  Welcome to JShell -- Version 9-internal
|  For an introduction type: /help intro


->
The most important command is /exit to leave the jshell (Ctrl-C works as well, but I think /exit should be used).

There is no syntax highlighting, but this does not matter.

The following built-in commands are available:
-> /help
|  Type a Java language expression, statement, or declaration.
|  Or type one of the following commands:

|     /list [all|start|<name or id>]                  -- list the source you have typed
|     /edit <name or id>                              -- edit a source entry referenced by name or id
|     /drop <name or id>                              -- delete a source entry referenced by name or id
|     /save [all|history|start] <file>                -- save snippet source to a file
|     /open <file>                                    -- open a file as source input
|     /vars                                           -- list the declared variables and their values
|     /methods                                        -- list the declared methods and their signatures
|     /classes                                        -- list the declared classes
|     /imports                                        -- list the imported items
|     /exit                                           -- exit jshell
|     /reset                                          -- reset jshell
|     /reload [restore] [quiet]                       -- reset and replay relevant history -- current or previous (restore)
|     /classpath <path>                               -- add a path to the classpath
|     /history                                        -- history of what you have typed
|     /help [<command>|<subject>]                     -- get information about jshell
|     /set editor|start|feedback|newmode|prompt|format ... -- set jshell configuration information
|     /? [<command>|<subject>]                        -- get information about jshell
|     /!                                              -- re-run last snippet
|     /<id>                                           -- re-run snippet by id
|     /-<n>                                           -- re-run n-th previous snippet

|  For more information type '/help' followed by the name of command or a subject.
|  For example '/help /list' or '/help intro'.  Subjects:

|     intro     -- an introduction to the jshell tool
|     shortcuts -- a description of shortcuts
With /list, the source code you have entered is shown:
-> /list 5

   5 : class MyClass {
       private int a;
       public MyClass(){a=0;}
       int getA() {return a;};
       void setA(int var) {a=var; return;}
       }

Every time you create an object, you will see the following:
-> ZZ = new MyClass();
|  Variable ZZ has been assigned the value MyClass@28d25987

-> ZZ.getA();
|  Expression value is: 0
|    assigned to temporary variable $8 of type int

-> ZZ.setA(200);

-> ZZ.getA();
|  Expression value is: 200
|    assigned to temporary variable $10 of type int
With /vars the variables are shown:
-> /vars
|    MyClass ZZ = MyClass@28d25987
|    int $8 = 0
|    int $10 = 200
Listing the classes (ok it is getting boring):
-> /classes
|    class MyClass
And last but not least, /methods:
-> /methods
|    printf (String,Object...)void
|    getA ()int

SQL*Loader: load multiple files into 1 table

Tom Kyte - Wed, 2017-07-12 13:26
Hi Tom, I have multiple csv-files in a directory. This directory will be updated with new csv-files. I need to load all csv files with sqloader in 1 table. So all the files have the same columns only different data. This is how my control file...
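The answer itself is truncated in this feed, but a common approach (a hedged sketch; file, table, and column names below are assumed, not taken from the thread) is a single control file that names every CSV in its own INFILE clause and appends into the one target table:

LOAD DATA
INFILE 'data1.csv'
INFILE 'data2.csv'
INFILE 'data3.csv'
APPEND
INTO TABLE target_tab
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(col1, col2, col3)

Run it with, for example, sqlldr scott/tiger control=load_all.ctl. Since the directory keeps receiving new files, the INFILE list (or a generated control file) would typically be rebuilt by a small script before each run.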
Categories: DBA Blogs

Create Type by using %type columns

Tom Kyte - Wed, 2017-07-12 13:26
Hi TOM, I want to write an extract utility, which will get data from selected columns of multiple tables so planning to use pipeline function which will return a ORACLE TYPE. To create type, I would like give reference of column type from source...
Categories: DBA Blogs

Difference between stale object result from *_tab_statistics and gather_schema_stat with "LIST STALE"

Tom Kyte - Wed, 2017-07-12 13:26
I am trying to find all stale objects. As I understand there are two ways and both should return same result. Before starting I first did a flush monitoring <code> begin dbms_stats.flush_database_monitoring_info; end; / </...
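The full answer is cut off here; as a hedged sketch of the second method the question refers to, GATHER_SCHEMA_STATS can be called with OPTIONS => 'LIST STALE' so that it only lists the stale objects instead of gathering statistics, and the result can then be compared with STALE_STATS in *_TAB_STATISTICS:

set serveroutput on
declare
  l_objects dbms_stats.objecttab;
begin
  dbms_stats.flush_database_monitoring_info;
  dbms_stats.gather_schema_stats(
    ownname => user,
    options => 'LIST STALE',
    objlist => l_objects);
  -- print the objects DBMS_STATS considers stale
  for i in 1 .. l_objects.count loop
    dbms_output.put_line(l_objects(i).objname);
  end loop;
end;
/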
Categories: DBA Blogs

Pivot with total

Tom Kyte - Wed, 2017-07-12 13:26
<code>create table ticket1 (ticketid number, tcktname varchar2(10), status varchar2(10) ); INSERT INTO ticket1 VALUES (101,'bug','open'); INSERT INTO ticket1 VALUES (102,'bug','close'); INSERT ...
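The rest of the question is truncated, but a hedged sketch of one way to pivot the ticket1 statuses and add a row total, using conditional aggregation (the column aliases are mine):

select tcktname,
       count(case when status = 'open'  then 1 end) as open_cnt,
       count(case when status = 'close' then 1 end) as close_cnt,
       count(*)                                     as total
from   ticket1
group  by tcktname
order  by tcktname;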
Categories: DBA Blogs

Summarizing data over time - by time interval

Tom Kyte - Wed, 2017-07-12 13:26
Hello I have an application that gathers and stores data over time. Because of the applications reliance on the network and other functions the data is gathered at irregular intervals. example table TimeStamp Object Value --------- ...
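The example table is cut off above; as a hedged sketch (table and column names are assumed), irregular samples can be summarized per fixed interval by truncating the timestamp to the interval start and grouping on it, here per hour:

select trunc(sample_time, 'HH24') as interval_start,
       obj,
       avg(val)                   as avg_value,
       count(*)                   as samples
from   readings
group  by trunc(sample_time, 'HH24'), obj
order  by interval_start, obj;

For intervals other than whole hours or days, the same idea works with small date arithmetic instead of TRUNC.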
Categories: DBA Blogs

SQL over PL/SQL

Tom Kyte - Wed, 2017-07-12 13:26
Hi Team, Could you please have a look at below scenario: I have 3 tables: select * from tab_login_details; select * from tab_request; select * from tab_access; Basically i need output as below: FK_TB_LOGIN_MASTER FK_TB_COMPANY_DETAILS FL...
Categories: DBA Blogs

1000 Column Limit populating a collection (ORA-00939)

Tom Kyte - Wed, 2017-07-12 13:26
Hi, I have a need to work with a collection of composite data type with more than 1000 columns in it. Here is the sample code below for collection with composite data type of 2 columns. <code> CREATE OR REPLACE TYPE obj_typ1 AS OBJECT (col...
Categories: DBA Blogs

KPI Reporting and Dashboards for Project Management

Nilesh Jethwa - Wed, 2017-07-12 12:41

KPIs enable managers to easily get an insight on how well a project has been handled. Although results are usually taken into account, the use of performance indicators considers all aspects of project management, which often makes KPI reporting more comprehensive as opposed to simple data analysis.

At the end of the day, KPIs provide a way for managers to improve how they handle projects and supervise their own team members. Its contribution to any organization makes it an important business strategy. Here’s a look at what KPI is, what a project management dashboard is, and how the automation of the two can improve project management efforts.

What is a KPI?

A KPI, or key performance indicator, is a numerical representation of an organization's success as a business. It is also used to measure how well a business activity in which an organization has participated has performed.

To allow businesses to evaluate their performance and success, indicators are defined for each area, division, branch, section, or department of a company. These indicators are then used to create a report that would enumerate in detail the quality of work and results that each section or department has produced.

Choosing the right indicators largely determines the level of improvement an organization can achieve. This means that whatever a company needs to improve on should be given priority when selecting the indicators for a KPI report.

 

Read more at http://www.infocaptor.com/dashboard/kpi-reporting-and-dashboards-for-project-management

Checking Out Chatbots

Floyd Teter - Wed, 2017-07-12 12:38
I recently spent a day with the Oracle Applications UX team in a Conversational UI for Enterprise SaaS workshop.  Let’s be clear…in the context of this workshop, “Conversational UI” is a spiffy term meaning chatbots for enterprise applications.  Amazing workshop put on by the UX team.  Made even better by including a mix of attendees ranging from neophytes like me up to experienced experts sharing their tips and tricks.

I learned quite a bit about what Oracle is currently doing with chatbots, how to design a chatbot and how to build a chatbot.  I was inspired to the point of staying up all night to build a chatbot from scratch.  Hey, I’m on the road with little to do at night other than stare at hotel room walls, so what are ya gonna do if you don’t geek out?  Some takeaways from the workshop and my own research on chatbots:
  • Messaging apps are growing at insane rates.  For example, consider Facebook Messenger.  It’s used by over 1 billion people every month and is outpacing the growth of Facebook itself.
  • Getting things done with a bot is much faster than working through a website or mobile app.  While websites and mobile apps have to be loaded and navigated, bots load instantly…and people will consistently choose the path that loads the quickest.
  • Bots win on the ease of use front as well.  No navigation needed with a bot…just start the conversation. And…this is key…language is the interface people understand best, and it’s the interface used by a bot.
  • We’re at the very beginning of developing and applying bots.  But think of the potential:  would you rather navigate Amazon’s website looking for red posthole plugs, or simply ask for red posthole plugs and have them shown to you?  There are millions of simple tasks better performed by bots than through a website or mobile app…which could mean that bots will greatly reduce our current reliance on websites and mobile apps.  You can get more of an idea here 
  • Learning to code a chatbot is easy.  Lots of choices for coding languages, along with drag-n-drop IDEs.  I think it took me about 15 minutes to pick it up.
  • While the coding is easy, the underlying logic is not.  Lots of variables in regards to different terms that mean the same thing, providing easy exits to users who find the bot frustrating or unusable for their particular purpose.  Even very simple tasks require some substantial brain power for laying out the logic involved in completing a task through a conversational interface.  We’re talking about some complex decision trees.
  • To keep your underlying logic simple and to keep your users on track, build chatbots for very basic and focused tasks:  open a service request, buy a pair of shoes, add a dependent to an employee’s benefits plan…that kind of thing.
I came away from the workshop with a pretty substantial list of use cases for chatbots in the enterprise.  I’ll just share one example here.  

In the Oracle HCM Cloud Center of Excellence team, we have a group of experts who help strategic customers with technical issues.  We take in requests for their help through the COE Request Applications, which is a service request database created in Oracle APEX.  I’m thinking that a chatbot for creating requests would be a better user experience than the current data entry form that requestors must complete for each request.  So I spent part of yesterday’s workshop creating a low fidelity wireframe for such a chatbot, which I’m sharing here (note the phrase “low fidelity” - no taking potshots at the appearance of a wireframe I put together in about 20 minutes):




The upshot is that I walked away from the workshop pretty excited about chatbots.  And I’m hoping that my takeaways might whet your appetite.  If you want to know more, you can look here.

So I'm admittedly a neophyte beginning my exploration of chatbots.  What about you?  Any experiences?  Feedback?  Thoughts?  Show some comment love and share with the rest of us.

Can I do it with PostgreSQL? – 15 – invisible indexes

Yann Neuhaus - Wed, 2017-07-12 11:26

It has been quite a while since the last post in this series. Today we’ll look at what you know from Oracle as: Invisible indexes. In case you wonder what they might be useful for: Imagine you want to test if an index would benefit one or more queries without affecting the production workload. In other words: Wouldn’t it be cool to create an index but somehow tell the optimizer not to use it for the ongoing queries? This is what invisible indexes are about: Create an index which you believe should improve performance for one or more queries but at the same time make sure that it is not taken into account when the query plan is generated and then executed. The bad news is: This is not possible in PostgreSQL core. The good news is: There is an extension which does exactly this.

The extension is called hypopg and is available via GitHub. The readme states that it works on all PostgreSQL versions starting with 9.2, so let's try it with PostgreSQL 10 Beta1.

postgres@pgbox:/home/postgres/ [PG10B] psql -X postgres
psql (10beta1 dbi services build)
Type "help" for help.

postgres=# select version();
                                                            version                                                            
-------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 10beta1 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit
(1 row)

postgres=# 

Getting the extension downloaded, compiled and installed is straight forward:

postgres@pgbox:/home/postgres/ [PG10B] wget https://github.com/dalibo/hypopg/archive/master.zip
postgres@pgbox:/home/postgres/ [PG10B] unzip master.zip 
postgres@pgbox:/home/postgres/ [PG10B] cd hypopg-master/
postgres@pgbox:/home/postgres/hypopg-master/ [PG10B] make install
postgres@pgbox:/home/postgres/hypopg-master/ [PG10B] psql -X -c "create extension hypopg" postgres
CREATE EXTENSION
postgres@pgbox:/home/postgres/hypopg-master/ [PG10B] psql -X -c "\dx" postgres
                     List of installed extensions
  Name   | Version  |   Schema   |             Description             
---------+----------+------------+-------------------------------------
 hypopg  | 1.1.0dev | public     | Hypothetical indexes for PostgreSQL
 plpgsql | 1.0      | pg_catalog | PL/pgSQL procedural language
(2 rows)

Here we go, all fine until now and we should be ready to use it. Obviously we need a table and some data to test with:

postgres@pgbox:/home/postgres/ [PG10B] psql -X postgres
psql (10beta1 dbi services build)
Type "help" for help.

postgres=# \! cat a.sql
drop table if exists t1;
create table t1 ( a int );
with generator as 
 ( select a.*
     from generate_series ( 1, 5000000 ) a
    order by random()
 )
insert into t1 ( a ) 
     select a
       from generator;
postgres=# \i a.sql
DROP TABLE
CREATE TABLE
INSERT 0 5000000
postgres=# analyze t1;
ANALYZE
postgres=# select * from pg_size_pretty ( pg_total_relation_size ('t1') );
 pg_size_pretty 
----------------
 173 MB
(1 row)

So now we have a table containing some data. The only choice PostgreSQL has to fetch one or more rows is to use a sequential scan (which is a full table scan in Oracle):

postgres=# explain select * from t1 where a = 5;
                             QUERY PLAN                              
---------------------------------------------------------------------
 Gather  (cost=1000.00..49165.77 rows=1 width=4)
   Workers Planned: 2
   ->  Parallel Seq Scan on t1  (cost=0.00..48165.67 rows=1 width=4)
         Filter: (a = 5)
(4 rows)

Although PostgreSQL already knows that only one row needs to be returned (rows=1), it still needs to read the whole table. Let's look at what that looks like when we really execute the query by using “explain (analyze)”:

postgres=# explain (analyze) select * from t1 where a = 5;
                                                    QUERY PLAN                                                     
-------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1000.00..49165.77 rows=1 width=4) (actual time=133.292..133.839 rows=1 loops=1)
   Workers Planned: 2
   Workers Launched: 2
   ->  Parallel Seq Scan on t1  (cost=0.00..48165.67 rows=1 width=4) (actual time=110.446..124.888 rows=0 loops=3)
         Filter: (a = 5)
         Rows Removed by Filter: 1666666
 Planning time: 0.055 ms
 Execution time: 135.465 ms
(8 rows)

What kicked in here is parallel query which is available since PostgreSQL 9.6 but this is not really important for the scope of this post. Coming back to the invisible or hypothetical indexes: Having the extension installed we can now do something like this:

postgres=# SELECT * FROM hypopg_create_index('CREATE INDEX ON t1 (a)');
 indexrelid |     indexname     
------------+-------------------
      16399 | btree_t1_a
(1 row)

postgres=# select * from pg_size_pretty ( pg_total_relation_size ('t1') );
 pg_size_pretty 
----------------
 173 MB
(1 row)

What this did is create a hypothetical index without consuming any space (pg_total_relation_size counts the indexes as well), so it is pretty fast. What happens to our query now?

postgres=# explain select * from t1 where a = 5;
                                   QUERY PLAN                                    
---------------------------------------------------------------------------------
 Index Only Scan using btree_t1_a on t1  (cost=0.06..8.07 rows=1 width=4)
   Index Cond: (a = 5)
(2 rows)

Quite cool: the index is really getting used and we did not consume any resources for the index itself. It could be a good index to implement. What you need to know is that this does not work for “explain (analyze)”, as that really executes the query (and we do not really have an index on disk):

postgres=# explain (analyze) select * from t1 where a = 5;
                                                    QUERY PLAN                                                     
-------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1000.00..49165.77 rows=1 width=4) (actual time=76.247..130.235 rows=1 loops=1)
   Workers Planned: 2
   Workers Launched: 2
   ->  Parallel Seq Scan on t1  (cost=0.00..48165.67 rows=1 width=4) (actual time=106.861..124.252 rows=0 loops=3)
         Filter: (a = 5)
         Rows Removed by Filter: 1666666
 Planning time: 0.043 ms
 Execution time: 131.866 ms
(8 rows)

If you want to list all the hypothetical indexes you can do this as well:

postgres=# SELECT * FROM hypopg_list_indexes();
 indexrelid |     indexname     | nspname | relname | amname 
------------+-------------------+---------+---------+--------
      16399 | btree_t1_a        | public  | t1      | btree
(1 row)

Of course you can drop them when they are no longer required:

postgres=# select * from  hypopg_drop_index(16399);
 hypopg_drop_index 
-------------------
 t
(1 row)

postgres=# SELECT * FROM hypopg_list_indexes();
 indexrelid | indexname | nspname | relname | amname 
------------+-----------+---------+---------+--------
(0 rows)
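A side note, as an assumption about the extension rather than something shown in this post: current hypopg versions also ship hypopg_reset(), which drops all hypothetical indexes of the session in one call:

postgres=# SELECT * FROM hypopg_reset();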

Hope this helps …

 

The post Can I do it with PostgreSQL? – 15 – invisible indexes appeared first on Blog dbi services.

Log Buffer #516: A Carnival of the Vanities for DBAs

Pythian Group - Wed, 2017-07-12 11:02

This Log Buffer Edition covers Oracle, SQL Server and MySQL.

Oracle:

12.2 New Feature: the FLEX ASM disk group part 2

Oracle ASM in Azure corruption – follow up

Set-based processing

ADF 12c BC Proxy User DB Connection and Save Point Error

Enabling A Modern Analytics Platform

SQL Server:

Batch SSIS pkg execution from Business Intelligence Development Studio

Find Database Connection Leaks in Your Application

Troubleshooting CPU Performance on VMware

SQLskills Wait Types Library now shows SentryOne data

PowerShell Tool Time: The Tool Framework

MySQL:

Installing Zabbix into Azure using a MySQL PaaS

Streaming Global Cyber Attack Analytics with Tableau and Python

Thread Statistics and High Memory Usage

On slave_parallel_workers and the logical clock

RDS / Aurora OS monitoring with Monyog v8.1.0

Categories: DBA Blogs

Profiling a Java + JDBC Application

Kris Rice - Wed, 2017-07-12 09:57
NetBeans

First, there's NO Java coding needed nor Java source code needed to profile a Java program this way. NetBeans added this a while back but I just found it recently: the ability to attach to any Java program and profile the SQL going across JDBC. The dev team's blog on it is here: http://jj-blogger.blogspot.nl/2016/05/netbeans-sql-profiler-take-it-for-spin.html

SQLcl

SQLcl is our

Video: Microservices and Modern Software Development

OTN TechBlog - Wed, 2017-07-12 07:00

Microservices are pretty much a done deal, according to Mark Cavage, VP of Software Development for Oracle. "I think almost everybody out there admits that in some part of their organization they're going to build a microservices-based application. Across the board. That's a given." It's also a given that Docker is part of the plan. "Docker is the fundamental technology used to encapsulate an application and ship it from laptop through testing through production."

Mark and his colleague Chad Arimura, also an Oracle VP of Software Development, stopped by the DevLIVE set at Oracle Code Atlanta to recap their keynote session, "Microservices: Where are We, and How Did We Get Here," and chat about containers, Kubernetes, Wercker, serverless architectures, and a whole lot more. Watch the interview.

Related Content

 

Ever Evolving SQL*Plus 12.2.0.1 Adds New Performance Features

Christopher Jones - Wed, 2017-07-12 03:40

This is a guest post by Luan Nim, Senior Development Manager at Oracle.

SQL*Plus 12.2.0.1 has introduced a number of features to improve the performance and ease of use in general. These features can be enabled with SET commands, or via the command line.

New Oracle SQL*Plus 12.2.0.1 features include:

  • SET MARKUP CSV

    This option lets you generate output in CSV format. It also lets you choose the delimiter character to use and turn quotes around data ON or OFF. The benefit of using CSV format is that it is fast. This option improves performance when querying large amounts of data where formatted output is not needed.

    Syntax:

    SET MARKUP CSV ON [DELIMI[TER] character] [QUOTE {ON|OFF}]

    Example:

    SQL> set markup csv on
    SQL> select * from emp;

    "EMPNO","ENAME","JOB","MGR","HIREDATE","SAL","COMM","DEPTNO"
    7369,"SMITH","CLERK",7902,"17-DEC-80",800,,20
    7499,"ALLEN","SALESMAN",7698,"20-FEB-81",1600,300,30
    7521,"WARD","SALESMAN",7698,"22-FEB-81",1250,500,30
    7566,"JONES","MANAGER",7839,"02-APR-81",2975,,20
    7654,"MARTIN","SALESMAN",7698,"28-SEP-81",1250,1400,30
    7698,"BLAKE","MANAGER",7839,"01-MAY-81",2850,,30
    7782,"CLARK","MANAGER",7839,"09-JUN-81",2450,,10
    7788,"SCOTT","ANALYST",7566,"19-APR-87",3000,,20
    7839,"KING","PRESIDENT",,"17-NOV-81",5000,,10
    7844,"TURNER","SALESMAN",7698,"08-SEP-81",1500,0,30
    7876,"ADAMS","CLERK",7788,"23-MAY-87",1100,,20
    7900,"JAMES","CLERK",7698,"03-DEC-81",950,,30
    7902,"FORD","ANALYST",7566,"03-DEC-81",3000,,20
    7934,"MILLER","CLERK",7782,"23-JAN-82",1300,,10

    14 rows selected.

    This option is also available from the command line with the "-m csv" argument.

    $ sqlplus -m "csv on" scott/tiger @emp.sql

    SQL*Plus: Release 12.2.0.2.0 Development on Wed Jul 5 23:12:14 2017

    Copyright (c) 1982, 2017, Oracle.  All rights reserved.

    Last Successful login time: Wed Jul 05 2017 23:11:46 -07:00

    Connected to:
    Oracle Database 12c Enterprise Edition Release 12.2.0.2.0 - 64bit Development

    "EMPNO","ENAME","JOB","MGR","HIREDATE","SAL","COMM","DEPTNO"
    7369,"SMITH","CLERK",7902,"17-DEC-80",800,,20
    7499,"ALLEN","SALESMAN",7698,"20-FEB-81",1600,300,30
    7521,"WARD","SALESMAN",7698,"22-FEB-81",1250,500,30
    7566,"JONES","MANAGER",7839,"02-APR-81",2975,,20
    7654,"MARTIN","SALESMAN",7698,"28-SEP-81",1250,1400,30
    7698,"BLAKE","MANAGER",7839,"01-MAY-81",2850,,30
    7782,"CLARK","MANAGER",7839,"09-JUN-81",2450,,10
    7788,"SCOTT","ANALYST",7566,"19-APR-87",3000,,20
    7839,"KING","PRESIDENT",,"17-NOV-81",5000,,10
    7844,"TURNER","SALESMAN",7698,"08-SEP-81",1500,0,30
    7876,"ADAMS","CLERK",7788,"23-MAY-87",1100,,20
    7900,"JAMES","CLERK",7698,"03-DEC-81",950,,30
    7902,"FORD","ANALYST",7566,"03-DEC-81",3000,,20
    7934,"MILLER","CLERK",7782,"23-JAN-82",1300,,10

    14 rows selected.
  • SET FEEDBACK ONLY

    The new ONLY option to SET FEEDBACK displays the number of rows selected without displaying the data. This is useful for measuring the time taken to fetch data from the database without actually displaying that data.

    Example:

    SQL> set feedback only
    SQL> select * from emp;

    14 rows selected.
  • SET STATEMENTCACHE

    This option caches executed statements in the current session. The benefit of this setting is that it avoids unnecessary parsing of the same query, which improves performance when a query is executed repeatedly in a session.

    Example:

    SQL> set statementcache 20
    SQL> select * from emp;
    SQL> select * from emp;
  • SET LOBPREFETCH

    This option improves access to smaller LOBs by prefetching and caching LOB data. The benefit of this setting is that it reduces the number of network round trips to the server, allowing LOB data to be fetched in one round trip when the LOB data fits within the defined LOBPREFETCH size.

    Example:

    SQL> set lobprefetch 2000
    SQL> select * from lob_tab;
  • SET ROWPREFETCH

    This option minimizes server round trips for a query. Rows of the result set are prefetched when the query is executed, and the number of rows to prefetch is set with SET ROWPREFETCH.

    This option can reduce round trips by allowing Oracle to transfer query results on return from its internal OCI execute call, removing the need for the subsequent internal OCI fetch call to make another round trip to the DB.

    Example:

    SQL> set rowprefetch 20
    SQL> select * from emp;

    If, for example, you expect only a single row returned, set ROWPREFETCH to 2, which allows Oracle to get the row efficiently and to confirm no other rows need fetching.

  • Command line -FAST option.

    This command line option improves performance in general. When this option is used, it changes the following SET options to new values:

    • ARRAYSIZE 100

    • LOBPREFETCH 16384

    • PAGESIZE 50000

    • ROWPREFETCH 2

    • STATEMENTCACHE 20

    Once logged in, these settings can also be changed manually (see the sketch after this list).

    Syntax:

    $ sqlplus -f @emp.sql
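As a hedged illustration (the idea of a login.sql profile script is an assumption; the values are simply the -FAST defaults listed above), the same settings can be applied explicitly so every session starts with them:

    -- login.sql: apply the same defaults that "sqlplus -f" would set
    set arraysize 100
    set lobprefetch 16384
    set pagesize 50000
    set rowprefetch 2
    set statementcache 20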

I hope the new features described above are helpful to you. For more information, please refer to the SQL*Plus Users Guide and Reference.

If you have questions about SQL, PL/SQL or SQL*Plus, post them in the appropriate OTN space.

CPADMIN Utility for Managing Concurrent Processing Available for EBS 12.1

Steven Chan - Wed, 2017-07-12 02:00

I recently profiled a new CPADMIN command line utility for E-Business Suite 12.2.  That same tool is also available for EBS 12.1.3.  This tool consolidates various utilities for concurrent processing into a single menu-based utility. This ADADMIN-style utility can be used for multiple tasks, including:

  • View Concurrent Manager status
  • Clean CP tables
  • Set Concurrent Manager diagnostics
  • Start, stop, or verify an individual Concurrent Manager
  • Rebuild Concurrent Manager views
  • Move request files
  • Analyze requests
  • Configure request log/out file directory locations

Details for running the CPADMIN utility are published here:

This rollup patchset (RUP) also includes fixes for the following Concurrent Processing issues:

  • Bug 12821441 : JAVA CONCURRENT PROGRAMS DOES NOT ALWAYS WRITE OUTPUT VIA FND_FILE
  • Bug 14828523 : PREREQ PATCH FOR AFCMGR.ODF
  • Bug 14841198 : IPP PRINTER OPTIONS SET INCORRECTLY FOR DELIVERY
  • Bug 15981176 : ISSUES AFTER APPLYING FAILOVER PATCH 14828518:R12.FND.B
  • Bug 16602978 : STANDARD MANAGER ACTUAL AND TARGET PROCESSES ARE DIFFERENT.
  • Bug 17287546 : UNABLE TO SELECT AM/PM WHEN TRYING TO SCHEDULE CONCURRENT REQUESTS
  • Bug 17075600 : CONCURRENT REQUEST NOTIFY PART DOESN'T WORK SOMETIMES
  • Bug 18455555 : FNDCRM DOWN WHEN FND_CONFLICT_DOMAIN CONTAINS 65K+ ROWS
  • Bug 19479047 : R12.2.3 UPGR : INTERNAL CONCURRENT MANAGER CORE DUMPS AFTER ADOP CUTOVER
  • Bug 19887645 : ALTERING A SCHEDULE DATE GETS TODAY'S DATE RATHER THAN DATE ENTERED
  • Bug 20013011 : AFTER STEP 6.2.7 IS RUN ( NOTE 1070033.1 ) , WORKSHIFTS OF INTERNAL MONITOR IS W
  • Bug 20691679 : CANNOT INPUT REQUEST NAME WHEN SCHEDULING REQUEST 
  • Bug 21101859 : WHEN USING MLS FUNCTION XML REP DOES NOT PICK TRANSLATION
  • Bug 21251552 : CONCURRENT REQUEST VIEW PAGE FNDCPREQUESTVIEWPAGE SHOWS ALL REQUESTS OF OTHER US
  • Bug 21256003 : COMPETION NOTIFICATION SHOWING JUNK CHARACTERS WHEN
  • Bug 21385444 : AFCMGR.ODF FAILS WITH UNABLE TO COMPARE OR CORRECT TABLES OR INDEXES OR KEYS
  • Bug 21386094 : HIDDEN PARAMETERS NOT PASSED VIA SCHEDULE REQUEST PAGE - CPPROGRAMPG
  • Bug 21782311 : WHEN CANCEL A RUNNING REQUEST ON OAF STATUS GOT CANCELLED NOT TERMINATED

Related Articles

 

Categories: APPS Blogs

Is there a possibility to use db_link dynamically without using cursor and execute immediate?

Tom Kyte - Tue, 2017-07-11 19:06
Hi, I would like to know if I am able to implement db_link dynamically without using cursor or execute immediate? I have 2 tables stored in different location which are accessible via db_link. These 2 tables are identical in structure and the data c...
Categories: DBA Blogs

How to use db_link dynamically

Tom Kyte - Tue, 2017-07-11 19:06
Hi Tom, Hope everything is O.K. for you ...... You know, I am extracting segment_name information for several databases and I am inserting information in a repository table. I am using next cursor to look for every database: FOR x in (SEL...
Categories: DBA Blogs

SP2-0308: cannot close spool file

Tom Kyte - Tue, 2017-07-11 19:06
After run a script for check my db tablespaces, the error log is SP2-0308: cannot close spool file, how to solve the problem? + + date +%Y%m%d today=20170710 + . /export/home/oracle/.profile + 1> /dev/null 2>& 1 + sqlplus -s /as sysdba + 0<< ...
Categories: DBA Blogs

Get previous non-null value of a column

Tom Kyte - Tue, 2017-07-11 19:06
Hi Team, i have requirement like this , can you help me on this <code>col1 col2 wk6 1 wk5 null wk4 3 wk3 null wk2 null wk1 5</code> i need o/p like below . whenever null value will come it should take it ...
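The thread is truncated here, but a hedged sketch of the usual technique is LAST_VALUE ... IGNORE NULLS over a window; the ordering column wk_order below is hypothetical, since the fragment does not show how the weeks are ordered:

select col1,
       col2,
       last_value(col2 ignore nulls) over
         (order by wk_order
          rows between unbounded preceding and current row) as filled_col2
from   t;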
Categories: DBA Blogs
