Feed aggregator

Bug in Export Wizard?

Tom Kyte - Wed, 2018-04-18 10:26
When using the export wizard and browsing to an existing file that I want to overwrite with my new data results, I find when I select that specific file from the Export File Chooser window and click SAVE, the file that is actually selected is some ot...
Categories: DBA Blogs

dbms_stats and optimize techniques

Tom Kyte - Wed, 2018-04-18 10:26
I am setting the degree parameter of dbms_stats.gather_table_stats. Could anyone tell me how to calculate the value of this parameter, and how it is linked with optimizer hints?
Categories: DBA Blogs

Quickly spinning up Docker Containers with baseline Oracle Database Setup – for performing automated tests

Amis Blog - Wed, 2018-04-18 07:00

Here is a procedure for running an Oracle Database, preparing a baseline of objects (tables, stored procedures) and data, creating an image of that baseline, and subsequently running containers based on that baseline image. Each container starts with a fresh setup. For running automated tests that require test data to be available in a known state, this is a nice way of working.

The initial Docker container was created using an Oracle Database 11gR2 XE image: https://github.com/wnameless/docker-oracle-xe-11g.

Execute this statement on the Docker host:

docker run -d -p 49160:22 -p 49161:1521 -e ORACLE_ALLOW_REMOTE=true --name oracle-xe  wnameless/oracle-xe-11g

This will spin up a container called oracle-xe. After 5-20 seconds, the database is created and started and can be accessed from an external database client.

From the database client, prepare the database baseline, for example:

create user newuser identified by newuser;

create table my_data (data varchar2(200));

insert into my_data values ('Some new data '||to_char(sysdate,'DD-MM HH24:MI:SS'));



These actions represent the complete database installation of your application – which may consist of hundreds or thousands of objects and MBs of data. The steps and the principles remain exactly the same.

At this point, create an image of the baseline – that consists of the vanilla database with the current application release’s DDL and DML applied to it:

docker commit --pause=true oracle-xe

This command returns an id, the identifier of the Docker image that has now been created for the current state of the container – our baseline. The original container can now be stopped and even removed.

docker stop oracle-xe


Spinning up a container from the baseline image is now done with:

docker run -d -p 49160:22 -p 49161:1521 -e ORACLE_ALLOW_REMOTE=true  --name oracle-xe-testbed  <image identifier>

After a few seconds, the database has started up and remote database clients can start interacting with it. They will find the database objects and data that were part of the baseline image. To perform a test, no additional setup or teardown is required.

Run whatever tests are required. The teardown after the test consists of killing and removing the testbed container:

docker kill oracle-xe-testbed && docker rm oracle-xe-testbed

Now return to the step “Spinning up a container”.

Spinning up the container takes a few seconds – 5 to 10. The time is mainly taken up by the database processes that have to be started from scratch.
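The run-kill-rm cycle above can be scripted. The sketch below only assembles and prints the command lines from this post (a dry run; the image identifier is a placeholder for the value returned by docker commit):

```shell
#!/bin/sh
# Dry-run sketch of one test iteration. BASELINE_IMAGE is a placeholder for the
# image id that `docker commit` returned; the echoes print the commands instead
# of executing them.
BASELINE_IMAGE="<image identifier>"
NAME="oracle-xe-testbed"

spin_up="docker run -d -p 49160:22 -p 49161:1521 -e ORACLE_ALLOW_REMOTE=true --name $NAME $BASELINE_IMAGE"
tear_down="docker kill $NAME && docker rm $NAME"

echo "$spin_up"      # spin up a fresh testbed from the baseline image
# ... run the automated tests against localhost:49161 here ...
echo "$tear_down"    # teardown: kill and remove the testbed container
```

Drop the echoes (and substitute the real image id) to execute the cycle for real.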

It should be possible to create a snapshot of a running container (using Docker checkpoints) and restore the testbed container from that snapshot. This create-start-from-checkpoint-kill-rm cycle should be even faster than the run-kill-rm cycle that we have got going now. A challenge is the fact that opening the database does not just start processes and manipulate memory, but also handles files. That means that we need to commit the running container and associate the restored checkpoint with that image. I have been working on this at length, but I have not been successful yet, running into various issues (ORA-21561 OID generation failed, ORA-27101 shared memory realm does not exist, redo log file not found, ...). I continue to look into this.

Use Oracle Database 12c Image

Note: instead of the Oracle Database XE image used before, we can go through the same steps based on, for example, the image sath89/oracle-12c (see https://hub.docker.com/r/sath89/oracle-12c/).

The commands and steps are now:

docker pull sath89/oracle-12c

docker run -d -p 8080:8080 -p 1521:1521 --name oracle-db-12c sath89/oracle-12c

Connect from a client and create the baseline.

When the baseline database and its contents have been set up, create the container image of that state:

docker commit --pause=true oracle-db-12c

This returns an image identifier.

docker stop oracle-db-12c

Now to run a test iteration, run a container from the base line image:

docker run -d -p 1521:1521  --name oracle-db-12c-testbed  <image identifier>

Connect to the database at port 1521 or have the web application or API that is being tested make the connection.



The Docker Create Command: https://docs.docker.com/engine/reference/commandline/create/#parent-command

Nifty Docker commands in Everyday hacks for Docker:  https://codefresh.io/docker-tutorial/everyday-hacks-docker/

Circle CI Blog – Checkpoint and restore Docker container with CRIU – https://circleci.com/blog/checkpoint-and-restore-docker-container-with-criu/

The post Quickly spinning up Docker Containers with baseline Oracle Database Setup – for performing automated tests appeared first on AMIS Oracle and Java Blog.

Beyond Chatbots: An AI Odyssey

OTN TechBlog - Wed, 2018-04-18 06:00

This month the Oracle Developer Community Podcast looks beyond chatbots to explore artificial intelligence -- its current capabilities, staggering potential, and the challenges along the way.

One of the most surprising comments to emerge from this discussion reveals how a character from a 50-year-old feature film factors into one of the most pressing AI challenges.

According to podcast panelist Phil Gordon, CEO and founder of Chatbox.com, the HAL 9000 computer at the center of Stanley Kubrick’s 1968 science fiction classic “2001: A Space Odyssey” is very much on the minds of those now rushing to deploy AI-based solutions. “They have unrealistic expectations of how well AI is going to work and how much it’s going to solve out of the box.” (And apparently they're willing to overlook HAL's abysmal safety record.)

It's easy to see how an AI capable of carrying on a conversation while managing and maintaining all the systems on a complex interplanetary spaceship would be an attractive idea for those who would like to apply similar technology to keeping a modern business on course. But the reality of today’s AI is a bit more modest (if less likely to refuse to open the pod bay doors).

In the podcast, Lyudmil Pelov, a cloud solutions architect with Oracle’s A-Team, explains that unrealistic expectations about AI have been fed by recent articles that portray AI as far more human-like than is currently possible.

“Most people don't understand what's behind the scenes,” says Lyudmil. “They cannot understand that the reality of the technology is very different. We have these algorithms that can beat humans at Go, but that doesn't necessarily mean we can find the cure for the next disease.” Those leaps forward are possible. “From a practical perspective, however, someone has to apply those algorithms,” Lyudmil says.

For podcast panelist Brendan Tierney, an Oracle ACE Director and principal consultant with Oralytics, accessing relevant information from within the organization poses another AI challenge.  “When it comes to customer expectations, there's an idea that it's a magic solution, that it will automatically find and discover and save lots of money automatically. That's not necessarily true.”  But behind that magic is a lot of science.

“The general term associated with this is, ‘data science,’” Brendan explains. “The science to it is that there is a certain amount of experimental work that needs to be done. We need to find out what works best with your data. If you're using a particular technique or algorithm or whatever, it might work for one company, but it might not work best for you. You've got to get your head around the idea that we are in a process of discovery and learning and we need to work out what's best for your data in your organization and processes.”

For panelist Joris Schellekens, software engineer at iText, a key issue is that of retractability. “If the AI predicts something or if your system makes some kind of decision, where does that come from? Why does it decide to do that? This is important to be able to explain expectations correctly, but also in case of failure—why does it fail and why does it decide to do this instead of the correct thing?”

Of course, these issues are only a sampling of what is discussed by the experienced developers in this podcast. So plug in and gain insight that just might help you navigate your own AI odyssey.

The Panelists

Phil Gordon
CEO/founder of Chatbox.com


Lyudmil Pelov
Oracle A-Team Cloud Architect, Mobile, Cloud and Bot Technologies, Oracle


Joris Schellekens
Software Engineer, iText


Brendan Tierney
Consultant, Architect, Author, Oralytics


Additional Resources

Coming Soon
  • The Making of a Meet-Up

Never miss an episode! The Oracle Developer Community Podcast is available via:

Oracle DBAs and GDPR

Pakistan's First Oracle Blog - Wed, 2018-04-18 01:32
The General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679) is a regulation by which the European Parliament, the Council of the European Union and the European Commission intend to strengthen and unify data protection for all individuals within the European Union (EU).

To align an Oracle database with the GDPR directive, we have to encrypt all the databases and files on disk, aka encryption at rest (when data is stored). We also have to encrypt the database network traffic.

The Transparent Data Encryption (TDE) feature allows sensitive data to be encrypted within the datafiles to prevent access to it from the operating system. 

You cannot encrypt an existing tablespace. So if you wish to encrypt existing data, you need to move it from unencrypted tablespaces to encrypted ones. For this you can use any of the following methods:

i) The Oracle Data Pump utility.
ii) Commands like CREATE TABLE ... AS SELECT ...
iii) Commands like ALTER TABLE ... MOVE, or rebuilding indexes.
iv) Oracle online table redefinition (DBMS_REDEFINITION).
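As a sketch of option iii (standard TDE syntax; the tablespace, datafile, table, and index names below are made up for illustration, and an open TDE keystore is assumed):

```sql
-- Create an encrypted tablespace (requires the TDE wallet/keystore to be open).
CREATE TABLESPACE secure_ts
  DATAFILE '/u01/app/oracle/oradata/ORCL/secure_ts01.dbf' SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);

-- Move an existing table from its unencrypted tablespace into the encrypted one.
ALTER TABLE app_owner.customers MOVE TABLESPACE secure_ts;

-- The move invalidates the table's indexes, so rebuild them afterwards.
ALTER INDEX app_owner.customers_pk REBUILD;
```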

In order to encrypt network traffic between client and server, we have two options from Oracle:

i) Native Network Encryption for Database Connections
ii) Configuration of TCP/IP with SSL and TLS for Database Connections

Native Network Encryption only requires settings in the sqlnet.ora file and doesn't have the overhead of the second option, where you have to configure various network files on server and client, and also have to obtain certificates and create a wallet. With the first option encryption is not guaranteed, whereas with the second it is.
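The first option amounts to a sqlnet.ora fragment like the sketch below (these are the standard parameter names; AES256 is just an example algorithm). Note that with the default value ACCEPTED on both sides, encryption is negotiated but not guaranteed; setting REQUIRED forces it:

```
# Server-side sqlnet.ora: refuse unencrypted connections
SQLNET.ENCRYPTION_SERVER = REQUIRED
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)

# Client-side sqlnet.ora
SQLNET.ENCRYPTION_CLIENT = REQUIRED
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256)
```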
Categories: DBA Blogs

AWS Pricing Made Easy By Simple Monthly Calculator

Pakistan's First Oracle Blog - Wed, 2018-04-18 01:26
With ever-changing pricing models and services, it's hard to keep track of AWS costs.

If you want to check how much it would cost to run a certain AWS service, tailored to your requirements, then use the following Simple Monthly Calculator from AWS.

AWS Price Calculator.
Categories: DBA Blogs

Critical Patch Update for April 2018 Now Available

Steven Chan - Tue, 2018-04-17 22:22

The Critical Patch Update (CPU) for April 2018 was released on April 17, 2018. Oracle strongly recommends applying the patches as soon as possible.

The Critical Patch Update Advisory is the starting point for relevant information. It includes a list of products affected, pointers to obtain the patches, a summary of the security vulnerabilities, and links to other important documents. 

Supported products that are not listed in the "Supported Products and Components Affected" Section of the advisory do not require new patches to be applied.

The Critical Patch Update Advisory is available at the following location:

It is essential to review the Critical Patch Update supporting documentation referenced in the Advisory before applying patches.

The next four Critical Patch Update release dates are:

  • July 17, 2018
  • October 16, 2018
  • January 15, 2019
  • April 16, 2019
Categories: APPS Blogs

Skip Goldengate Replicat Transaction

Michael Dinh - Tue, 2018-04-17 19:22
Oracle GoldenGate Command Interpreter for Oracle
Version 17640173 OGGCORE_11.
Linux, x64, 64bit (optimized), Oracle 11g on Nov 19 2013 03:18:45

Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.
ORA-02292: integrity constraint (OWNER.MARY_JOE_FK) violated - child record found (status = 2292). DELETE FROM "OWNER"."T_JOE"  WHERE "JOENUMMER" = :b0.


[gguser]$ grep -i discard rep1.prm
DISCARDFILE ./discard/rep1.discard append, MEGABYTES 1024

[gguser]$ grep -c "Skipping delete from OWNER.T_JOE" rep1.discard

[gguser]$ grep -A2 "Skipping delete from OWNER.T_JOE" ./discard/rep1.discard|head
Skipping delete from OWNER.T_JOE at seqno 4475 rba 87850906
Skipping delete from OWNER.T_JOE at seqno 4475 rba 87851339
Skipping delete from OWNER.T_JOE at seqno 4475 rba 87851735
[gguser@viz-cp-dc1-p11 oracle]$ grep -A2 "Skipping delete from OWNER.T_JOE" ./discard/rep1.discard|tail
JOENUMMER = 50093291
Skipping delete from OWNER.T_JOE at seqno 4475 rba 94033367
JOENUMMER = 50094681
Skipping delete from OWNER.T_JOE at seqno 4475 rba 94033767
JOENUMMER = 50094741

[gguser]$ grep rba rep1.discard|head -1
Aborting transaction on ./dirdat/nd beginning at seqno 4475 rba 87850906

[gguser]$ grep rba rep1.discard|tail -1
Skipping delete from OWNER.T_JOE at seqno 4475 rba 94033767

Logdump 23 >scanforendtrans
End of Transaction found at RBA 94033767 


1 select count(*) from
2 (
3 (select JOENUMMER from OWNER.T_JOE minus select JOENUMMER from OWNER.T_JOE@dblink)
4 union all
5 (select JOENUMMER from OWNER.T_JOE@dblink minus select JOENUMMER from OWNER.T_JOE)
6 )




GGATE@SQL> create table T_JOE_DEL as select JOENUMMER from OWNER.T_JOE minus select JOENUMMER from OWNER.T_JOE@dblink;


1 select * from (
2 select JOENUMMER from T_JOE_DEL order by 1 asc
3* ) where rownum <11


10 rows selected.


1 select * from (
2 select JOENUMMER from T_JOE_DEL order by 1 desc
3* ) where rownum <11


10 rows selected.



GGATE@SQL> delete from OWNER.T_JOE where JOENUMMER in (select JOENUMMER from T_JOE_DEL);

15273 rows deleted.

GGATE@SQL> commit;

Commit complete.


GGATE@SQL> select count(*) from OWNER.T_JOE;


GGATE@SQL> select count(*) from OWNER.T_JOE@dblink;




[gguser]$ grep SKIPTRANSACTION REP1*.rpt
rep1.rpt:2018-04-17 12:15:15  INFO    OGG-01370  User requested START SKIPTRANSACTION. The current transaction will be skipped. Transaction ID 22.30.1923599, position Seqno 4475, RBA 87850906.

[gguser]$ grep -i skip ggserr.log
2018-04-17 12:15:14  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (gguser): start replicat rep1 SKIPTRANSACTION.
2018-04-17 12:15:14  INFO    OGG-00963  Oracle GoldenGate Manager for Oracle, mgr.prm:  Command received from GGSCI on host (START REPLICAT rep1 SKIPTRANSACTION).
2018-04-17 12:15:15  INFO    OGG-01370  Oracle GoldenGate Delivery for Oracle, rep1.prm:  User requested START SKIPTRANSACTION. The current transaction will be skipped. Transaction ID 22.30.1923599, position Seqno 4475, RBA 87850906.


Logdump 15 >open ./dirdat/nd004475
Current LogTrail is ./dirdat/nd004475 
Logdump 16 >detail on
Logdump 17 >fileheader detail
Logdump 18 >ghdr on
Logdump 19 >detail data
Logdump 20 >ggstoken detail
Logdump 21 >pos 87850906
Reading forward from RBA 87850906 
Logdump 22 >n
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)  
UndoFlag   :     .  (x00)     BeforeAfter:     B  (x42)  
RecLength  :   310  (x0136)   IO Time    : 2018/04/17 10:47:16.475.512   
IOType     :     3  (x03)     OrigNode   :   255  (xff) 
TransInd   :     .  (x00)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :     167409       AuditPos   : 779280 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2018/04/17 10:47:16.475.512 Delete               Len   310 RBA 87850906 
Before Image:                                             Partition 4   G  b   

GGS tokens: 
TokenID x52 'R' ORAROWID         Info x00  Length   20 
 4141 4148 6b55 4141 5441 4141 6264 7141 4159 0001 | AAAHkUAATAAAbdqAAY..  
TokenID x4c 'L' LOGCSN           Info x00  Length   10 
 3732 3833 3730 3834 3135                          | 7283708415  
TokenID x36 '6' TRANID           Info x00  Length   13 
 3232 2e33 302e 3139 3233 3539 39                  | 22.30.1923599  
Logdump 23 >scanforendtrans
End of Transaction found at RBA 94033767 
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)  
UndoFlag   :     .  (x00)     BeforeAfter:     B  (x42)  
RecLength  :   331  (x014b)   IO Time    : 2018/04/17 10:47:16.429.234   
IOType     :     3  (x03)     OrigNode   :   255  (xff) 
TransInd   :     .  (x02)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :     167409       AuditPos   : 13903264 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2018/04/17 10:47:16.429.234 Delete               Len   331 RBA 94033767 
Before Image:                                             Partition 4   G  e   
GGS tokens: 
TokenID x52 'R' ORAROWID         Info x00  Length   20 
 4141 4148 6b55 4141 5741 4141 4e6c 6c41 4177 0001 | AAAHkUAAWAAANllAAw..  
Logdump 24 >n
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)  
UndoFlag   :     .  (x00)     BeforeAfter:     B  (x42)  
RecLength  :   174  (x00ae)   IO Time    : 2018/04/17 10:47:24.429.491   
IOType     :    15  (x0f)     OrigNode   :   255  (xff) 
TransInd   :     .  (x00)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :     167409       AuditPos   : 13947088 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2018/04/17 10:47:24.429.491 FieldComp            Len   174 RBA 94034190 
Before Image:                                             Partition 4   G  b   
GGS tokens: 
TokenID x52 'R' ORAROWID         Info x00  Length   20 
 4141 4148 6a59 4141 5441 4142 794b 4541 412f 0001 | AAAHjYAATAAByKEAA/..  
TokenID x4c 'L' LOGCSN           Info x00  Length   10 
 3732 3833 3730 3834 3538                          | 7283708458  
TokenID x36 '6' TRANID           Info x00  Length   13 
 3132 2e31 362e 3330 3031 3139 37                  | 12.16.3001197  
Logdump 25 >open ./dirdat/nd004475
Current LogTrail is ./dirdat/nd004475 
Logdump 26 >count
LogTrail ./dirdat/nd004475 has 92822 records 
Total Data Bytes          92730182 
  Avg Bytes/Record             999 
Delete                       20937 
Insert                        5405 
FieldComp                      724 
LargeObject                  65755 
Others                           1 
Before Images                21163 
After Images                 71658 

Average of 1589 Transactions 
    Bytes/Trans .....      61161 
    Records/Trans ...         58 
    Files/Trans .....          5 
Logdump 27 >detail on
Logdump 28 >filter inc filename OWNER.T_JOE
Logdump 29 >count
Scanned     10000 records, RBA   12734577, 2018/04/17 07:25:42.524.558 
Scanned     20000 records, RBA   25670230, 2018/04/17 08:00:11.480.213 
Scanned     30000 records, RBA   38698934, 2018/04/17 08:30:24.488.669 
Scanned     40000 records, RBA   51436567, 2018/04/17 08:59:11.452.549 
Scanned     50000 records, RBA   63868041, 2018/04/17 09:43:10.477.605 
Scanned     60000 records, RBA   76010927, 2018/04/17 10:14:59.472.122 
Scanned     70000 records, RBA   94264594, 2018/04/17 10:47:31.447.436 
LogTrail ./dirdat/nd004475 has 15296 records 
Total Data Bytes           4757365 
  Avg Bytes/Record             311 
Delete                       15296 
Before Images                15296 
Filtering matched        15296 records 
          suppressed     77526 records 

Average of 2 Transactions 
    Bytes/Trans .....    2745786 
    Records/Trans ...       7648 
    Files/Trans .....        110 

OWNER.T_JOE                                      Partition 4 
Total Data Bytes           4757365 
  Avg Bytes/Record             311 
Delete                       15296 
Before Images                15296 
Logdump 30 >


Best way to index uuid

Tom Kyte - Tue, 2018-04-17 16:06
Hello, What is the best way to index a uuid if I only do equality comparisons on it? I guess that a hash index is better but I'm not sure. Regards Stephane GINER
Categories: DBA Blogs

Order by at runtime

Tom Kyte - Tue, 2018-04-17 16:06
Hello, we have some huge tables to query, and with the order by clause (which must be used) it takes a very long time for a query to complete. As I know, we can do the order by at runtime using dynamic SQL, but my questions are: 1. do we have any o...
Categories: DBA Blogs

Automatic list partitioning

Tom Kyte - Tue, 2018-04-17 16:06
Hi Tom! I use Oracle 12c version. I have partitioned by list table. How can I change non automatic partitioning to automatic? Thank you!
Categories: DBA Blogs

how to generate .dsv files using SQL script?

Tom Kyte - Tue, 2018-04-17 16:06
we have around 100 tables out of 200 in which there is a date column. what we want is: first we want to change the <b>NLS_date_format to DD-MON-YYYY HH12:MI:SS AM</b> (using a script), then save the tables with the date in .DSV files. also n...
Categories: DBA Blogs

April 2018 Critical Patch Update Released

Oracle Security Team - Tue, 2018-04-17 14:57

Oracle today released the April 2018 Critical Patch Update.

This Critical Patch Update provided security updates for a wide range of product families, including: Oracle Database Server, Oracle Fusion Middleware, Oracle E-Business Suite, Oracle PeopleSoft, Oracle Industry Applications (Construction, Financial Services, Hospitality, Retail, Utilities), Oracle Java SE, and Oracle Systems Products Suite.

Approximately 35% of the security fixes provided by this Critical Patch Update are for non-Oracle Common Vulnerabilities and Exposures (CVEs): that is, security fixes for third-party products (e.g., open source components) that are included in traditional Oracle product distributions.  In many instances, the same CVE is listed multiple times in the Critical Patch Update Advisory, because a vulnerable common component (e.g., Apache) may be present in many different Oracle products.

Note that Oracle started releasing security updates in response to the Spectre (CVE-2017-5715 and CVE-2017-5753) and Meltdown (CVE-2017-5754) processor vulnerabilities with the January 2018 Critical Patch Update.  Customers should refer to this Advisory and the “Addendum to the January 2018 Critical Patch Update Advisory for Spectre and Meltdown” My Oracle Support note (Doc ID 2347948.1) for information about newly-released updates. At this point in time, Oracle has issued the corresponding security patches for Oracle Linux and Virtualization and Oracle Solaris on SPARC (SPARC 64-bit systems are not affected by Meltdown), and Oracle is working on producing the necessary updates for Solaris on x86 (noting the diversity of supported processors complicates the creation of the security patches related to these issues).

For more information about this Critical Patch Update, customers should refer to the Critical Patch Update Advisory and the executive summary published on My Oracle Support (Doc ID 2383583.1).   

Docker: How to build you own container with your own application

Dietrich Schroff - Tue, 2018-04-17 12:17
There are many tutorials out there on how to create a Docker container with an Apache webserver or an nginx inside.
But you can hardly find a manual on how to build your own Docker container without pulling everything from a foreign repository.
Why should you not pull everything from foreign repositories?

You should read this article or this:
But since each phase of the development pipeline is built at a different time, …
…you can’t be sure that the same version of each dependency in the development version also got into your production version.
That is a good point.

As considered in this article you can put some more constraints into your Dockerfile:

FROM ubuntu:14.04

or even pin the exact image digest:

FROM ubuntu@sha256:0bf3461984f2fb18d237995e81faa657aff260a52a795367e6725f0617f7a56c

And that is the point where I tell you: create a process to build your own Docker containers from scratch and distribute them with your own repository, or copy them to all your Docker nodes (s. here).

So here are the steps to create your own container from a local directory (here ncweb):

# ls -l ncweb/
total 12
-rw-r--r--    1 root     root            90 Nov 26 10:06 Dockerfile
-rw-r--r--    1 root     root           255 Nov 26 11:29 index.html
-rw-r--r--    1 root     root             0 Nov 26 11:29 logfile
-rwxr--r--    1 root     root           176 Nov 26 11:29 ncweb.sh 
The Dockerfile contains the following:

alpine:~# cat ncweb/Dockerfile
FROM alpine
RUN mkdir ncweb
ADD .  /tmp
ENTRYPOINT [ "/tmp/ncweb.sh" ]
Into this directory you have to put everything you need, e.g. a complete JDK or your binaries or ...
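The directory listing shows an ncweb.sh, but the post does not include its contents. A hypothetical minimal version, in the spirit of the netcat-based webserver the name suggests, could look like this (file locations and port are assumptions; the serve loop is left commented out so the sketch has no side effects):

```shell
#!/bin/sh
# Hypothetical ncweb.sh: a minimal netcat-based web server that answers every
# request with index.html. Paths and port are assumptions, not from the post.
http_response() {
  printf 'HTTP/1.1 200 OK\r\n\r\n'   # minimal response header
  cat "$1"                           # response body
}

serve() {
  while true; do
    http_response /tmp/index.html | nc -l -p 8080 >> /tmp/logfile
  done
}

# serve   # uncomment to start the loop inside the container
```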

And then change into this directory and build your container:

ncweb# docker build -t ncweb:0.2 .
The distribution to other docker nodes can be done like this:

# docker save ncweb:0.2 | ssh <other-docker-host> docker load
For more details read this posting.


Announcing GraalVM: Run Programs Faster Anywhere

OTN TechBlog - Tue, 2018-04-17 02:47

Current production virtual machines (VMs) provide high-performance execution of programs only for a specific language or a very small set of languages. Compilation, memory management, and tooling are maintained separately for different languages, violating the ‘don’t repeat yourself’ (DRY) principle. This leads not only to a larger burden for the VM implementers, but also for developers, due to inconsistent performance characteristics, tooling, and configuration. Furthermore, communication between programs written in different languages requires costly serialization and deserialization logic. Finally, high-performance VMs are heavyweight processes with a high memory footprint that are difficult to embed.

Several years ago, to address these shortcomings, Oracle Labs started a new research project for exploring a novel architecture for virtual machines. Our vision was to create a single VM that would provide high performance for all programming languages, therefore facilitating communication between programs. This architecture would support unified language-agnostic tooling for better maintainability and its embeddability would make the VM ubiquitous across the stack.

To meet this goal, we have invented a new approach for building such a VM. After years of extensive research and development, we are now ready to present the first production-ready release.

Introducing GraalVM

Today, we are pleased to announce the 1.0 release of GraalVM, a universal virtual machine designed for a polyglot world.

GraalVM provides high performance for individual languages and interoperability with zero performance overhead for creating polyglot applications. Instead of converting data structures at language boundaries, GraalVM allows objects and arrays to be used directly by foreign languages.

Example scenarios include accessing functionality of a Java library from Node.js code, calling a Python statistical routine from Java, or using R to create a complex SVG plot from data managed by another language. With GraalVM, programmers are free to use whatever language they think is most productive to solve the current task.

GraalVM 1.0 allows you to run:

- JVM-based languages like Java, Scala, Groovy, or Kotlin
- JavaScript (including Node.js)
- LLVM bitcode (created from programs written in e.g. C, C++, or Rust)
- Experimental versions of Ruby, R, and Python

GraalVM can either run standalone, embedded as part of platforms like OpenJDK or Node.js, or even embedded inside databases such as MySQL or the Oracle RDBMS. Applications can be deployed flexibly across the stack via the standardized GraalVM execution environments. In the case of data processing engines, GraalVM directly exposes the data stored in custom formats to the running program without any conversion overhead.

For JVM-based languages, GraalVM offers a mechanism to create precompiled native images with instant start up and low memory footprint. The image generation process runs a static analysis to find any code reachable from the main Java method and then performs a full ahead-of-time (AOT) compilation. The resulting native binary contains the whole program in machine code form for immediate execution. It can be linked with other native programs and can optionally include the GraalVM compiler for complementary just-in-time (JIT) compilation support to run any GraalVM-based language with high performance.
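As a sketch, the image-generation workflow described above boils down to three commands (HelloWorld is a placeholder class; native-image ships in the GraalVM distribution, and exact flags may differ). The variables below just hold the command lines rather than executing them, since they require a GraalVM installation:

```shell
#!/bin/sh
# Dry-run sketch of the GraalVM ahead-of-time workflow.
compile_cmd="javac HelloWorld.java"   # compile to bytecode as usual
aot_cmd="native-image HelloWorld"     # static analysis + full AOT compilation
run_cmd="./helloworld"                # the resulting native binary starts instantly

printf '%s\n' "$compile_cmd" "$aot_cmd" "$run_cmd"
```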

A major advantage of the GraalVM ecosystem is language-agnostic tooling that is applicable in all GraalVM deployments. The core GraalVM installation provides a language-agnostic debugger, profiler, and heap viewer. We invite third-party tool developers and language developers to enrich the GraalVM ecosystem using the instrumentation API or the language-implementation API. We envision GraalVM as a language-level virtualization layer that allows leveraging tools and embeddings across all languages.

GraalVM in Production

Twitter is one of the companies already deploying GraalVM in production today for executing their Scala-based microservices. The aggressive optimizations of the GraalVM compiler reduce object allocations and improve overall execution speed. This results in fewer garbage collection pauses and less computing power necessary for running the platform. See this presentation from a Twitter JVM Engineer describing their experiences in detail and how they are using the GraalVM compiler to save money. In the current 1.0 release, we recommend JVM-based languages and JavaScript (including Node.js) for production use, while R, Ruby, Python and LLVM-based languages are still experimental.

Getting Started

The binary of the GraalVM v1.0 (release candidate) Community Edition (CE) built from the GraalVM open source repository on GitHub is available here.

We are looking for feedback from the community for this release candidate. We welcome feedback in the form of GitHub issues or GitHub pull requests.

In addition to the GraalVM CE, we also provide the GraalVM v1.0 (release candidate) Enterprise Edition (EE) for better security, scalability and performance in production environments. GraalVM EE is available on Oracle Cloud Infrastructure and can be downloaded from the Oracle Technology Network for evaluation. For production use of GraalVM EE, please contact graalvm-enterprise_grp_ww@oracle.com.

Stay Connected

The latest up-to-date downloads and documentation can be found at www.graalvm.org. Follow our daily development, request enhancements, or report issues via our GitHub repository at www.github.com/oracle/graal. We encourage you to subscribe to these GraalVM mailing lists:

- graalvm-announce@oss.oracle.com
- graalvm-users@oss.oracle.com
- graalvm-dev@oss.oracle.com

We communicate via the @graalvm alias on Twitter and watch for any tweet or Stack Overflow question with the #GraalVM hashtag.


This first release is only the beginning. We are working on improving all aspects of GraalVM; in particular the support for Python, R and Ruby.

GraalVM is an open ecosystem and we encourage building your own languages or tools on top of it. We want to make GraalVM a collaborative project enabling standardized language execution and a rich set of language-agnostic tooling. Please find more at www.graalvm.org on how to:

- allow your own language to run on GraalVM
- build language-agnostic tools for GraalVM
- embed GraalVM in your own application

We look forward to building this next generation technology for a polyglot world together with you!

Julian Date Full Explanation

Tom Kyte - Mon, 2018-04-16 21:46
Hello, I'm fairly new, but I have been finding bits and pieces on Julian date conversion, but not a full explanation of the Julian date conversion? <b>I.E TO_NUMBER(TO_CHAR(SYSDATE, 'YYYYDDD'))-1900000</b> Firstly, the SYSDATE is using the T...
Categories: DBA Blogs

Help needed with match_recognize

Tom Kyte - Mon, 2018-04-16 21:46
Dear Mr. Tom, Thank you for all your help and time in supporting our requests. I have some issues with MATCH_RECOGNIZE Oracle Version - OS - REDHAT Linux <code>CREATE TABLE test_match_recognize(employment_id NUMBER (10, 0) NOT N...
Categories: DBA Blogs

How to remove multiple word occurance from an input string using oracle PL/SQL

Tom Kyte - Mon, 2018-04-16 21:46
Remove duplicate words from a address using oracle pl/sql: There are two types of addresses will be there, below is the example 1. '3 Mayers Court 3 Mayers Court' : where total no of words in address is even and either all words/combination of ...
Categories: DBA Blogs

merge and dbms_errlog behaviour with ORA-30926

Tom Kyte - Mon, 2018-04-16 21:46
Hi all, I have a merge statement that sometimes fails when the source table has duplicated merge keys. To save time I tried to use the dbms_errlog package and let it save the culprit rows, without failing the statement itself. The error I get befor...
Categories: DBA Blogs

2018.pgconf.de, recap

Yann Neuhaus - Mon, 2018-04-16 11:43

Finally I am home from pgconf.de in Berlin at the beautiful Müggelsee. Besides meeting core PostgreSQL people such as Devrim and Bruce, Andreas and joining Jan again for great discussions and some beers, joking with Anja, being at the dbi services booth, discussing with people, kidding with Hans: was it worth the effort? Yes, it was, and here is why.


We had very interesting discussions at our booth, ranging from migrations to PostgreSQL, PostgreSQL training corporations and interest in our OpenDB appliance.

The opening session “Umdenken! 11 Gebote zum IT-Management” raised a question we always ask ourselves as well: when you do HA, how much complexity does the HA layer add? Maybe it is the HA layer that caused the outage, and the outage would not have happened without it? Reducing complexity is key to robust and reliable IT operations.

Listening to Bruce Momjian is always a joy: This time it was about PostgreSQL sharding. Much is already in place, some will come with PostgreSQL 11 and other stuff is being worked on for PostgreSQL 12 next year. Just check the slides which should be available for download from the website soon.

Most important: The increasing interest in PostgreSQL. We can see that at our customers, at conferences and in the interest in our blog posts about that topic. Sadly, when you have a booth, you are not able to listen to all the talks you would like to. This is the downside :(

So, mark your calendar: next year's date and location are already fixed: May 10, 2019, in Leipzig. I am sure we will have some updates to:



The post 2018.pgconf.de, recap appeared first on the dbi services blog.


Subscribe to Oracle FAQ aggregator