Feed aggregator

update rows from multiple tables (correlated update)

Learn DB Concepts with me... - Fri, 2017-07-14 10:01

A cross table update (also known as a correlated update, or multiple table update) in Oracle uses a non-ANSI SQL syntax to update rows in one table with values from another. The syntax differs quite dramatically from other database systems such as MS SQL Server or MySQL.
In this article, we are going to look at two scenarios for an Oracle cross table update.

Let's take two tables, TABA and TABB:

Records in TABA (data shown as a screenshot in the original post):

Records in TABB (data shown as a screenshot in the original post):

1. Update the LNAME column in table TABA with values from the common LNAME column in table TABB.

The update query below updates the LNAME column in TABA by looking up the row with the same ID value in TABB.

update TABA A
set (A.LNAME) = (select B.LNAME from TABB B where A.ID = B.ID);
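
One caveat worth noting (my addition, not from the original post): as written, this correlated update sets LNAME to NULL for every row in TABA that has no matching ID in TABB. A common way to restrict the update to matching rows is a WHERE EXISTS clause, sketched below with the same table and column names:

update TABA A
set (A.LNAME) = (select B.LNAME from TABB B where A.ID = B.ID)
where exists (select 1 from TABB B where A.ID = B.ID);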

2. Update two columns in table TABA based on the common ID column in table TABB.

If you need to update multiple columns simultaneously, use a comma to separate the columns in the SET clause.

update TABA A
set (A.LNAME, A.SAL) = (select B.LNAME, B.SAL from TABB B where A.ID = B.ID);
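
As an alternative (again my addition, not part of the original post), the same multi-column update can be written as a MERGE statement, which only touches rows that actually have a match in TABB; a minimal sketch using the same tables:

merge into TABA A
using TABB B
on (A.ID = B.ID)
when matched then
  update set A.LNAME = B.LNAME, A.SAL = B.SAL;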



Categories: DBA Blogs

Can I do it with PostgreSQL? – 16 – DDL triggers

Yann Neuhaus - Fri, 2017-07-14 09:52

A question I received recently from a customer: Oracle gives you the possibility to create DDL triggers. DDL triggers fire (as the name implies) when DDL events occur. Can we do that in PostgreSQL? Yes, in PostgreSQL this feature is called event triggers. The documentation contains a nice overview of which DDL commands an event trigger can fire for: the Event Trigger Firing Matrix. Let's go …

As usual, let's create a dummy table we can work with:

postgres=# create table t1 ( a int );
CREATE TABLE
postgres=# \d t1
      Table "public.t1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a      | integer | 

What we want to do is prevent any modification of the table structure. How do we do this? Obviously we need a way to catch the ALTER statement against our table and then raise an exception displaying some text. The first step is to create a function which returns the pseudo type “event_trigger”:

CREATE OR REPLACE FUNCTION no_ddl() RETURNS event_trigger AS $$
DECLARE r RECORD;
BEGIN
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
        RAISE EXCEPTION 'You are not allowed to change %', r.object_identity;
    END LOOP;
END;
$$
LANGUAGE plpgsql;

This function iterates over the result set of pg_event_trigger_ddl_commands() and raises an exception for each command it finds. In addition we need the event trigger that calls the function:

CREATE EVENT TRIGGER no_ddl_allowed
  ON ddl_command_end WHEN TAG IN ('ALTER TABLE')
  EXECUTE PROCEDURE no_ddl();
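
As a quick check (my addition, not in the original post), you can verify that the event trigger exists either with the psql shortcut \dy or by querying the pg_event_trigger catalog:

postgres=# \dy
postgres=# select evtname, evtevent, evttags, evtenabled from pg_event_trigger;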

Let's see if it works:

postgres=# alter table t1 add column g text;
ERROR:  You are not allowed to change public.t1
CONTEXT:  PL/pgSQL function no_ddl() line 5 at RAISE

Cool, but there is an issue with the current implementation:

postgres=# create table t2 ( a int );
CREATE TABLE
postgres=# alter table t2 add column b text;
ERROR:  You are not allowed to change public.t2
CONTEXT:  PL/pgSQL function no_ddl() line 5 at RAISE

What we effectively did was deny ALTER statements for all objects in the database. This is probably not what you want. A better approach is this:

CREATE OR REPLACE FUNCTION no_ddl() RETURNS event_trigger AS $$
DECLARE r RECORD;
BEGIN
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
      IF ( r.objid::regclass::text = 't1' )
      THEN
            RAISE EXCEPTION 'You are not allowed to change %', r.object_identity;
      END IF;
    END LOOP;
END;
$$
LANGUAGE plpgsql;

This way we only raise the exception when the table “t1” is involved and do nothing for all other tables:

postgres=# alter table t2 add column b text;
ALTER TABLE
postgres=# alter table t1 add column b text;
ERROR:  You are not allowed to change public.t1
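
When you do legitimately need to change t1 later, a simple option (my suggestion, not covered in the original post) is to temporarily disable the event trigger, or drop it altogether:

ALTER EVENT TRIGGER no_ddl_allowed DISABLE;
-- perform the structural change, then:
ALTER EVENT TRIGGER no_ddl_allowed ENABLE;
-- or remove it completely:
DROP EVENT TRIGGER no_ddl_allowed;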

Hope this helps …

 

The article Can I do it with PostgreSQL? – 16 – DDL triggers appeared first on Blog dbi services.

Video: Discover Graal: Open Source Polyglot Runtime Environment

OTN TechBlog - Fri, 2017-07-14 08:27

Have you discovered Graal? It's a new open source project created by Oracle Labs. "Graal allows you to deploy virtually any language in a single environment and actually deploy multiple languages in that same environment," explains Scott Lynn in this interview. "So you can have a JavaScript front end that talks to an R back-end that's talking to a database, for example. And there isn't the normal requirement of creating, say, a JSON file to actually send a command over to the R engine and then have the R engine execute and send a JSON file back with the data in it. You can literally just transfer the objects back and forth, because they're in the same language environment."

Scott, director of product strategy for Oracle Linux, delivered the session "Polyglot Development and Deployment Through Language Environment Virtualization" at the Oracle Code event in Atlanta, GA on June 22, 2017. He took a break from prepping for his session to talk with me about Graal and invite developers to get involved in this open source project, available on GitHub. Watch the video!

BTW: Scott will present a session on Graal at the Oracle Code event in Sydney, Australia on Tuesday July 18, 2017. If you'll be in the neighborhood, there's still time to register.

 


What are typed tables in PostgreSQL?

Yann Neuhaus - Thu, 2017-07-13 15:29

While reading the PostgreSQL documentation for “CREATE TABLE” I came across this:

“OF type_name: Creates a typed table, which takes its structure from the specified composite type (name optionally schema-qualified). A typed table is tied to its type; for example the table will be dropped if the type is dropped (with DROP TYPE … CASCADE).”

Sounds interesting, let's have a look.

Obviously we’ll need a composite type to make use of the feature described above:

postgres=# create type ctyp1 as ( a int, b varchar(10), c date );
CREATE TYPE
postgres=# \d ctyp1
       Composite type "public.ctyp1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 
 c      | date                  | 

Using the “CREATE TABLE” statement we can now create a table which is based on that type:

postgres=# create table t1 of ctyp1;
CREATE TABLE
postgres=# \d t1
             Table "public.t1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 
 c      | date                  | 
Typed table of type: ctyp1

But is that useful? First I thought no, but there is at least one use case for which this is a great help. Imagine you need another table with the same structure, maybe in another schema. All you need to do is:

postgres=# create table t2 of ctyp1;
CREATE TABLE
postgres=# \d t2
             Table "public.t2"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 
 c      | date                  | 
Typed table of type: ctyp1

Not much of a help, you might think, as you could just create another table the usual way by specifying all the columns once more. But wait: how would you manage changes to the table structure then? Having both tables based on the same type gives you this:

postgres=# alter type ctyp1 add attribute d numeric cascade;
ALTER TYPE
postgres=# \d ctyp1
       Composite type "public.ctyp1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 
 c      | date                  | 
 d      | numeric               | 

The magic is in the keyword “cascade”. What happened is that both our tables now look like this:

postgres=# \d t1
             Table "public.t1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 
 c      | date                  | 
 d      | numeric               | 
Typed table of type: ctyp1

postgres=# \d t2
             Table "public.t2"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 
 c      | date                  | 
 d      | numeric               | 
Typed table of type: ctyp1

With one command we changed the structure of all the tables which are based on the composite type. Btw: When you skip the “cascade” keyword you’ll get an error:

postgres=# alter type ctyp1 add attribute e numeric;
ERROR:  cannot alter type "ctyp1" because it is the type of a typed table
HINT:  Use ALTER ... CASCADE to alter the typed tables too.
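
Dropping a column works the same way (my addition, not in the original post): removing an attribute from the type with CASCADE removes the column from all typed tables based on it, e.g.:

postgres=# alter type ctyp1 drop attribute d cascade;
ALTER TYPE

After that, \d t1 and \d t2 no longer show column d.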

Can be quite useful… Good to know that this is possible.

 

The article What are typed tables in PostgreSQL? appeared first on Blog dbi services.

Possible solution for TLS 1.2 issues between Windows 10 and Oracle EPM Weblogic

Tim Tow - Thu, 2017-07-13 15:17

We have recently seen users of both Dodeca and Hyperion products whose Windows 10 machines have issues connecting to the Weblogic servers shipped with Oracle EPM due to the absence of the TLS 1.2 protocol.  The underlying issue is that Windows 10 is an evolution of technology whereas Oracle EPM Weblogic, and more specifically the Java version tested and shipped with it, are stuck in the stone age.  Java 1.6 started its journey to "end of life" in late 2013 and, though it continues to be covered under Extended Support, the EPM team has not delivered an update for their server.  Ironically, there is even a Java 1.6 version, Update 121, that now supports TLS 1.2; EPM is on Update 35.

So, what do you do?  I would be very hesitant to upgrade the Java version delivered with the EPM System.  After all, Oracle spent a lot of time working to certify on that version of Java.  One of our Senior Support Engineers, Jay Zuercher, did find something that appears to work - it hasn't yet been widely tested but may be worth a try.  Here are the steps he followed:

  1. Login to the Weblogic console.
  2. Navigate to Environment->Servers->AnalyticProviderServices0 (or to the server in which you are attempting to connect).
  3. Click on the SSL tab and expand the Advanced section at the bottom.
  4. Enable the “Use JSSE SSL” checkbox.
  5. Save changes.
  6. Navigate to the Server Start tab.
  7. Add the following string to the “Arguments” box: -Dweblogic.security.SSL.protocolVersion=TLS1
  8. Save changes.
  9. Activate all changes.
  10. Restart the applicable service. 
These steps are furnished with no guarantees, but hopefully you will find them helpful.
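
One way to verify the result from a client machine (my suggestion, not part of Jay's steps) is to test the handshake with openssl; the host name and port below are placeholders for your AnalyticProviderServices/Weblogic endpoint:

openssl s_client -connect yourserver.example.com:4443 -tls1_2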


Categories: BI & Warehousing

Dashboards for Credit Unions

Nilesh Jethwa - Thu, 2017-07-13 11:25

Business intelligence tools thrive wherever there is data that is reliable and accessible. BI tools not only record and report gains, losses, or delinquencies but also help their users find potential opportunities in their industry.

Performance dashboards and other BI tools are increasing in popularity especially for credit unions. This is because these tools help organizations to assess risks aligned with loan portfolios. These also aid them in making informed strategic decisions.

A credit union dashboard does more than help the organization manage gains, losses, or delinquencies. It provides more data than your standard operational reports and also helps measure potential investment opportunities, a rather popular banking KPI.

But for the banking KPI to be of use to the credit union, data should be available to all personnel from the bottom-line employees to the CEOs. And when it comes to data collection, although credit unions have working protocols in place, there are still several challenges that come with the territory.

If the BI tools that they are utilizing don’t have the ability to store long data histories, it is not easy for those tools to show historic trends. As such, they’ll require the services of third-party agencies, which may provide the reports that they need, but not in formats that they’ll find usable.

There is also more than one banking KPI that credit unions need to focus on. With the help of a reliable credit union dashboard, they can manage all of these with ease. From updated credit scores to collateral values, credit unions will be able to leverage these pieces of information.

When working with a credit union dashboard, it is important for the organization to consider which particular key performance indicators should be included in their dashboards. This ensures that they will be able to assess relevant data whenever doing so is necessary.

In order to determine the right metrics to focus on, they should discuss their plans, strategies, and goals that they wish to achieve. If credit unions are fully aware of what they want to gain from their investments, they can customize their dashboards to help them make informed decisions from the get go.

Built with ❤️ using Oracle Application Express (APEX)

Joel Kallman - Thu, 2017-07-13 10:59
I couldn't get to my keyboard fast enough to write this blog post.  Shakeeb Rahman showed me something last night that I wanted to share with the awesome APEX community as soon as possible.

APEX is used in literally thousands of applications within Oracle.  And in some of the latest apps that we're writing ourselves, in the footer we're including a short phrase which lets people know it was "built with Oracle APEX."  We're proud of the app, proud of the UI, and we want people in the company to know that it's an APEX app.  But Shakeeb showed me something last night that took this to a whole new level!



The instructions to add this to your APEX 5.1 application are extraordinarily simple:

  1. Create a new region on the Global Page (page 0) and set the Region Position to Footer. Set the Region Template to Blank with Attributes (no grid).

  2. Use this HTML as the Region Source:
    <span class="footer-apex">Built with 
    <span class="fa fa-heart">
    <span class="u-VisuallyHidden">love</span>
    </span>
    using <a href="https://apex.oracle.com/" target="_blank" title="Oracle Application Express">Oracle APEX</a>
    </span>
  3. Add this CSS to your Theme Style by opening Theme Roller and pasting into the Custom CSS section:
    .footer-apex {font-size: 11px; line-height: 16px; display: inline-block; vertical-align: top;}
    .footer-apex .fa.fa-heart { vertical-align: top; font-size: 10px;
    line-height: 16px; width: 16px; text-align: center;
    transition: color 1s ease; }
    .footer-apex:hover .fa.fa-heart { color: #FF0000; animation: pulse 1s infinite; }

    @keyframes pulse {
    0% { transform: scale(0.9); }
    70% { transform: scale(1.25); }
    100% { transform: scale(0.9); }
    }

That's it!  Simple.  It might work in APEX 5.0, but it will definitely work in APEX 5.1 and later.

I encourage everyone in the APEX community to add this to the footer of their applications.  Many end users don't even know they're using an APEX application, or even what APEX is.  This is an easy way to show it, and show it with style!

BOOM!


Did you ever wonder what PostgreSQL is executing in the background when you use the psql shortcuts?

Yann Neuhaus - Thu, 2017-07-13 08:19

When you work with PostgreSQL you are probably using psql, and when you use psql you are probably using one or more of the shortcuts it provides. These shortcuts are a quick and convenient way to get metadata out of the PostgreSQL catalog, which saves you a lot of typing and typos. The only issue is that they hide the statements which are executed to get the metadata, so you don’t know where the information is actually coming from. Of course you can check either the information_schema or the PostgreSQL system catalog and then write your own queries for what you are looking for. But, hey, there is a much easier way.

Let's start by creating a dummy table and an index:

postgres=# create table dummy ( a int primary key, b varchar(50), c timestamp with time zone );
CREATE TABLE
postgres=# create index i_dummy on dummy ( c );
CREATE INDEX
postgres=# 

The fastest way to get the definition of the table is:

postgres=# \d dummy
             Table "public.dummy"
 Column |           Type           | Modifiers 
--------+--------------------------+-----------
 a      | integer                  | not null
 b      | character varying(50)    | 
 c      | timestamp with time zone | 
Indexes:
    "dummy_pkey" PRIMARY KEY, btree (a)
    "i_dummy" btree (c)

As you can see, you not only get the definition of the table itself but also information about the primary key and the index. But where does this information come from? Since the index and primary key information is displayed as well, it must be coming from more than one catalog table, but which ones? That is quite easy to find out when you check the options of psql:

postgres@pgbox:/home/postgres/ [PG962] psql --help | grep "hidden"
  -E, --echo-hidden        display queries that internal commands generate

When you fire up psql with this option, all the internal statements will be displayed when you use a shortcut:

postgres@pgbox:/home/postgres/ [PG962] psql -E postgres
postgres=# \d dummy
********* QUERY **********
SELECT c.oid,
  n.nspname,
  c.relname
FROM pg_catalog.pg_class c
     LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname ~ '^(dummy)$'
  AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 2, 3;
**************************

********* QUERY **********
SELECT c.relchecks, c.relkind, c.relhasindex, c.relhasrules, c.relhastriggers, c.relrowsecurity, c.relforcerowsecurity, c.relhasoids, '', c.reltablespace, CASE WHEN c.reloftype = 0 THEN '' ELSE c.reloftype::pg_catalog.regtype::pg_catalog.text END, c.relpersistence, c.relreplident
FROM pg_catalog.pg_class c
 LEFT JOIN pg_catalog.pg_class tc ON (c.reltoastrelid = tc.oid)
WHERE c.oid = '16679';
**************************

********* QUERY **********
SELECT a.attname,
  pg_catalog.format_type(a.atttypid, a.atttypmod),
  (SELECT substring(pg_catalog.pg_get_expr(d.adbin, d.adrelid) for 128)
   FROM pg_catalog.pg_attrdef d
   WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef),
  a.attnotnull, a.attnum,
  (SELECT c.collname FROM pg_catalog.pg_collation c, pg_catalog.pg_type t
   WHERE c.oid = a.attcollation AND t.oid = a.atttypid AND a.attcollation <> t.typcollation) AS attcollation,
  NULL AS indexdef,
  NULL AS attfdwoptions
FROM pg_catalog.pg_attribute a
WHERE a.attrelid = '16679' AND a.attnum > 0 AND NOT a.attisdropped
ORDER BY a.attnum;
**************************

********* QUERY **********
SELECT c2.relname, i.indisprimary, i.indisunique, i.indisclustered, i.indisvalid, pg_catalog.pg_get_indexdef(i.indexrelid, 0, true),
  pg_catalog.pg_get_constraintdef(con.oid, true), contype, condeferrable, condeferred, i.indisreplident, c2.reltablespace
FROM pg_catalog.pg_class c, pg_catalog.pg_class c2, pg_catalog.pg_index i
  LEFT JOIN pg_catalog.pg_constraint con ON (conrelid = i.indrelid AND conindid = i.indexrelid AND contype IN ('p','u','x'))
WHERE c.oid = '16679' AND c.oid = i.indrelid AND i.indexrelid = c2.oid
ORDER BY i.indisprimary DESC, i.indisunique DESC, c2.relname;
**************************

********* QUERY **********
SELECT pol.polname,
CASE WHEN pol.polroles = '{0}' THEN NULL ELSE array_to_string(array(select rolname from pg_roles where oid = any (pol.polroles) order by 1),',') END,
pg_catalog.pg_get_expr(pol.polqual, pol.polrelid),
pg_catalog.pg_get_expr(pol.polwithcheck, pol.polrelid),
CASE pol.polcmd 
WHEN 'r' THEN 'SELECT'
WHEN 'a' THEN 'INSERT'
WHEN 'w' THEN 'UPDATE'
WHEN 'd' THEN 'DELETE'
WHEN '*' THEN 'ALL'
END AS cmd
FROM pg_catalog.pg_policy pol
WHERE pol.polrelid = '16679' ORDER BY 1;
**************************

********* QUERY **********
SELECT c.oid::pg_catalog.regclass FROM pg_catalog.pg_class c, pg_catalog.pg_inherits i WHERE c.oid=i.inhparent AND i.inhrelid = '16679' ORDER BY inhseqno;
**************************

********* QUERY **********
SELECT c.oid::pg_catalog.regclass FROM pg_catalog.pg_class c, pg_catalog.pg_inherits i WHERE c.oid=i.inhrelid AND i.inhparent = '16679' ORDER BY c.oid::pg_catalog.regclass::pg_catalog.text;
**************************

             Table "public.dummy"
 Column |           Type           | Modifiers 
--------+--------------------------+-----------
 a      | integer                  | not null
 b      | character varying(50)    | 
 c      | timestamp with time zone | 
Indexes:
    "dummy_pkey" PRIMARY KEY, btree (a)
    "i_dummy" btree (c)

Here you go. Quite a lot of stuff is happening in the background and you can see exactly what it is. This is a great way to get to know the catalog. When you are already inside psql and want to turn on the display of the hidden statements, you can do that as well:

postgres=# \set ECHO_HIDDEN on
postgres=# \d dummy
********* QUERY **********
SELECT c.oid,
  n.nspname,
  c.relname
FROM pg_catalog.pg_class c
     LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname ~ '^(dummy)$'
  AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 2, 3;
**************************

...

When you want to make this permanent, add it to your .psqlrc (see the “Files” section of the psql documentation). Have fun …
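
For reference, a minimal sketch of what that looks like (assuming the default .psqlrc location in your home directory):

$ cat ~/.psqlrc
\set ECHO_HIDDEN on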

 

The article Did you ever wonder what PostgreSQL is executing in the background when you use the psql shortcuts? appeared first on Blog dbi services.

SQL Developer version confusion 4.2 vs 17.2

Tom Kyte - Thu, 2017-07-13 07:46
I am somewhat confused about sql developer versions - the latest I have installed from OTN is 4.2 and then I come across - ThatJeffSmith - SQL Developer v17.2 is now Available - Please clarify. Thanks.
Categories: DBA Blogs

SQL Time Remaining

Tom Kyte - Thu, 2017-07-13 07:46
Hi Tom, I use v$session_longops for estimating the time remaining on a long running sql. I have noticed recently a sql that was running for more than 2 hours was not showing up in v$session_longops and the only wait event i see was DB File Sequent...
Categories: DBA Blogs

OSGi Support in Oracle Weblogic Server

Starting with 12.1.2, Weblogic supports OSGi. I wanted to blog about this since I got this question several times at various events. OSGi is a modularity system in Java that is maintained by OSGi...

We share our skills to maximize your revenue!
Categories: DBA Blogs

One-day workshop with Slava Oks for SQL Server on Linux

Yann Neuhaus - Thu, 2017-07-13 05:26

A few weeks ago, David Barbarin and I went to Tugar IT in Lisbon to follow the workshop from Slava Oks: Bringing SQL Server to Linux: Experience


It was a deep dive into the code of SQL Server and Slava shows us how they achieve a universal platform regardless the operating system.

To my mind, the challenge was how to deal with the Windows operating system dependencies, which include all the Win32/NT kernel calls, and still make SQL Server OS agnostic.

Indeed, SQL Server has more than 800 calls to Win32 and more than 400 calls to the NT kernel through SQL OS. Slava explained to us how challenging it would be to rewrite all the SQL Server code to support the Linux operating system.

To achieve this, the development team implemented a containerized approach called Drawbridge.

Drawbridge combines 2 technologies:

  • Picoprocess: Process-based isolation container with a minimal kernel API Surface
  • Library OS: Version of Windows enlightened to run efficiently

Here is a link for more information about Drawbridge: https://www.microsoft.com/en-us/research/project/drawbridge/#

This technology reduces the interface to the host OS to 45 calls, through a proper Platform Abstraction Layer (PAL): SQLPAL.

(Image source: Microsoft)

SQLPAL manages all resources in the process. The goal is to merge SOS and the Library OS as the core of SQLPAL.

Then we got deeper into the SQLPAL code and Slava began a complex topic of debugging… I must admit it was a headache moment! :-?

But finally, I can conclude that it was a very interesting workshop with a lot of explanations about how the Microsoft SQL Server team addressed the challenge of making SQL Server compatible with Linux. It helped me understand the architecture of the next version of SQL Server. 8-)

And it was a nice day in Lisbon in the Microsoft building!


 

The article One-day workshop with Slava Oks for SQL Server on Linux appeared first on Blog dbi services.

Azure Compare Total Cost of Ownership (TCO) with comparable Azure deployment

Yann Neuhaus - Thu, 2017-07-13 05:18

Microsoft Azure allows you to quickly deploy infrastructures and services to meet all of your business needs. You can run Windows and Linux based applications in 36 Azure datacenter regions, delivered with enterprise grade SLAs.

Microsoft Azure lets you quickly compare the total cost of ownership (TCO) of your on-premises infrastructure with a comparable Azure deployment using the TCO Calculator, and estimate the savings you can realize by moving to Azure.

In this article I will show you a little example of how to calculate the TCO for a small architecture.

To access the TCO calculator, click on this link: TCO Calculator, then click on the TCO calculator image (in the red rectangle) and follow the instructions.


My use case:

This example is based on a small architecture with two Virtual Machines, just to give you an insight into the use of this TCO calculator.

The first step is to enter the input parameters:

(Screenshot of the input parameters in the original post.)

After clicking on the Calculate button, the result appears:

(Result screenshots in the original post.)

Details are available by clicking on Export to Word at the top of the result frame.

Conclusion:

To be honest, this calculator may not accurately predict the TCO, but it offers a good estimate of the kind of savings one can expect. It is a very good tool for companies looking to move their IT infrastructure to the Azure cloud.


 

 

 

The article Azure Compare Total Cost of Ownership (TCO) with comparable Azure deployment appeared first on Blog dbi services.

OpenJDK 9: JShell - an interactive java interpreter shell | builtin commands

Dietrich Schroff - Wed, 2017-07-12 14:33
One of the new features of Java 9 is jshell (JEP 222).

On my Ubuntu system the installation was quite easy:

# apt-get install openjdk-9-jdk-headless

and you can find
$ ls /usr/lib/jvm/java-9-openjdk-amd64/bin/
idlj       jcmd    jmap        jstatd       schemagen
jar        jdb     jmod        keytool      serialver
jarsigner  jdeps   jps         orbd         servertool
java       jhsdb   jrunscript  pack200      tnameserv
javac      jimage  jsadebugd   policytool   unpack200
javadoc    jinfo   jshell      rmic         wsgen
javah      jjs     jstack      rmid         wsimport
javap      jlink   jstat       rmiregistry  xjc
 (in the third column, sixth row: jshell)

After startup, jshell comes up with this prompt:

$ /usr/lib/jvm/java-9-openjdk-amd64/bin/jshell
|  Welcome to JShell -- Version 9-internal
|  For an introduction type: /help intro


->
The most important command is /exit, to leave the jshell (Ctrl-C works as well, but I think /exit should be used).

There is no syntax highlighting but this does not matter.

The following builtin commands are allowed:
-> /help
|  Type a Java language expression, statement, or declaration.
|  Or type one of the following commands:

|     /list [all|start|<name or id>]                       -- list the source you have typed
|     /edit <name or id>                                   -- edit a source entry referenced by name or id
|     /drop <name or id>                                   -- delete a source entry referenced by name or id
|     /save [all|history|start] <file>                     -- Save snippet source to a file.
|     /open <file>                                         -- open a file as source input
|     /vars                                                -- list the declared variables and their values
|     /methods                                             -- list the declared methods and their signatures
|     /classes                                             -- list the declared classes
|     /imports                                             -- list the imported items
|     /exit                                                -- exit jshell
|     /reset                                               -- reset jshell
|     /reload [restore] [quiet]                            -- reset and replay relevant history -- current or previous (restore)
|     /classpath <path>                                    -- add a path to the classpath
|     /history                                             -- history of what you have typed
|     /help [<command>|<subject>]                          -- get information about jshell
|     /set editor|start|feedback|newmode|prompt|format ... -- set jshell configuration information
|     /? [<command>|<subject>]                             -- get information about jshell
|     /!                                                   -- re-run last snippet
|     /<id>                                                -- re-run snippet by id
|     /-<n>                                                -- re-run n-th previous snippet

|  For more information type '/help' followed by the name of command or a subject.
|  For example '/help /list' or '/help intro'.  Subjects:

|     intro     -- an introduction to the jshell tool
|     shortcuts -- a description of shortcuts
With /list the source code, which you provided, is shown:
-> /list 5

   5 : class MyClass {
       private int a;
       public MyClass(){a=0;}
       int getA() {return a;};
       void setA(int var) {a=var; return;}
       }

Every time you create an object, you will see the following:
-> ZZ = new MyClass();
|  Variable ZZ has been assigned the value MyClass@28d25987

-> ZZ.getA();
|  Expression value is: 0
|    assigned to temporary variable $8 of type int

-> ZZ.setA(200);

-> ZZ.getA();
|  Expression value is: 200
|    assigned to temporary variable $10 of type int

With /vars the variables are shown:
-> /vars
|    MyClass ZZ = MyClass@28d25987
|    int $8 = 0
|    int $10 = 200
Listing the classes (ok it is getting boring):
-> /classes
|    class MyClass
 and last but not least /methods:
-> /methods
|    printf (String,Object...)void
|    getA ()int
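
One handy follow-up (my addition, not from the original post): the /save and /open commands from the help listing above let you persist a session and replay it later. The file name here is just an example:

-> /save mysession.jsh
-> /exit

$ /usr/lib/jvm/java-9-openjdk-amd64/bin/jshell
-> /open mysession.jsh
-> /vars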

sqloader load multiple file into 1 table

Tom Kyte - Wed, 2017-07-12 13:26
Hi Tom, I have multiple csv-files in a directory. This directory will be updated with new csv-files. I need to load all csv files with sqloader in 1 table. So all the files have the same columns only different data. This is how my control file...
Categories: DBA Blogs

Create Type by using %type columns

Tom Kyte - Wed, 2017-07-12 13:26
Hi TOM, I want to write an extract utility, which will get data from selected columns of multiple tables so planning to use pipeline function which will return a ORACLE TYPE. To create type, I would like give reference of column type from source...
Categories: DBA Blogs

Difference between stale object result from *_tab_statistics and gather_schema_stat with "LIST STALE"

Tom Kyte - Wed, 2017-07-12 13:26
I am trying to find all stale objects. As I understand there are two ways and both should return same result. Before starting I first did a flush monitoring: begin dbms_stats.flush_database_monitoring_info; end; / ...
Categories: DBA Blogs

Pivot with total

Tom Kyte - Wed, 2017-07-12 13:26
create table ticket1 (ticketid number, tcktname varchar2(10), status varchar2(10) ); INSERT INTO ticket1 VALUES (101,'bug','open'); INSERT INTO ticket1 VALUES (102,'bug','close'); INSERT ...
Categories: DBA Blogs

Summarizing data over time - by time interval

Tom Kyte - Wed, 2017-07-12 13:26
Hello I have an application that gathers and stores data over time. Because of the applications reliance on the network and other functions the data is gathered at irregular intervals. example table TimeStamp Object Value --------- ...
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator