Feed aggregator

ORA-12547 TNS lost contact on SLES12 and Oracle 11.2.0.x

Yann Neuhaus - Tue, 2018-02-27 08:16

On a client’s site, I had successfully migrated Oracle 12c databases from RedHat to SLES12 without any problem. Doing the same work with Oracle 11.2.0.3 and Oracle 11.2.0.4 on SLES 12, however, I ran into a problem.

Once I had moved the database data files, redo logs, control files and spfile to the new server and tried to start the database, I received this error message:

oracle@server:/u00/app/oracle/diag/ [db1] sq
SQL*Plus: Release 11.2.0.3.0 Production on Thu Feb 22 09:31:18 2018
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
SQL> startup
ERROR:
ORA-12547: TNS:lost contact

In the alert.log file, we find the following error:

Exception [type: SIGSEGV, SI_KERNEL(general_protection)] [ADDR:0x0] [PC:0x7F09646224A0, __lll_unlock_elision()+48] [flags: 0x0, count: 1]
ORA-12547: TNS:lost contact

I remembered that some months ago I had quite a similar problem while installing Oracle Clusterware on SLES12, and after a quick search I discovered Metalink note 2297117.1, explaining that glibc in SuSE 12 makes use of Hardware Lock Elision (HLE), available in newer Intel processors.

The solution is equivalent to the one proposed in Metalink note 2270947.1 for the Oracle Clusterware problem with SLES12: we have to add /lib64/noelision to /etc/ld.so.conf and create a symbolic link from $ORACLE_HOME/lib/libpthread.so.0 to the noelision version of libpthread.

So we edit /etc/ld.so.conf and add the necessary line:

/lib64/noelision
/usr/local/lib64
/usr/local/lib
include /etc/ld.so.conf.d/*.conf

And we create a symbolic link:

ln -s /lib64/noelision/libpthread-2.22.so $ORACLE_HOME/lib/libpthread.so.0
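After changing /etc/ld.so.conf, the dynamic linker cache must be rebuilt for the new search path to be taken into account (a step worth making explicit; run it as root):

ldconfig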

We can check:

oracle@server:/u00/app/oracle/product/11.2.0.3/dbhome_1/bin/ [db1] ldd sqlplus
        linux-vdso.so.1 (0x00007ffdf8513000)
        libsqlplus.so => /u00/app/oracle/product/11.2.0.3/dbhome_1/lib/libsqlplus.so (0x00007fb948303000)
        libclntsh.so.11.1 => /u00/app/oracle/product/11.2.0.3/dbhome_1/lib/libclntsh.so.11.1 (0x00007fb945944000)
        libnnz11.so => /u00/app/oracle/product/11.2.0.3/dbhome_1/lib/libnnz11.so (0x00007fb945577000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007fb945357000)
        libm.so.6 => /lib64/libm.so.6 (0x00007fb945059000)
        libpthread.so.0 => /u00/app/oracle/product/11.2.0.3/dbhome_1/lib/libpthread.so.0 (0x00007fb944e3c000)
        libnsl.so.1 => /lib64/libnsl.so.1 (0x00007fb944c24000)
        libc.so.6 => /lib64/libc.so.6 (0x00007fb944880000)
        libaio.so.1 => /lib64/libaio.so.1 (0x00007fb94467e000)
        /lib64/ld-linux-x86-64.so.2 (0x000055c70f7fc000)

Normally a simple reconnection should have solved the problem, but I had some blocked semaphores and memory segments.

I had to run sysresv and ipcrm to completely solve my problem :=)

The first connection did not solve the problem:

oracle@server:/u00/app/oracle/ [db1] sq
SQL*Plus: Release 11.2.0.3.0 Production on Thu Feb 22 09:45:41 2018
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
Connected.
SQL> startup
ERROR:
ORA-12547: TNS:lost contact

I had to use the sysresv utility, which reports the instance status and the IPC resources used for a specified ORACLE_SID. It helps to identify the resources to be deleted, especially when the instance is detected to be dead.

In my case, the instance had crashed and its memory segments and semaphores were still present.

So running sysresv:

oracle@server:/u00/app/oracle/  [db1] sysresv
IPC Resources for ORACLE_SID "db1" :
Shared Memory:
ID              KEY
1278378019      0x00000000
1278410788      0x00000000
1278443557      0x186a9d18
Semaphores:
ID              KEY
5931036         0x7e57b354
Unable to determine if Oracle instance alive for sid "db1"

sysresv lists the shared memory and semaphore IDs. To delete the memory segments we use ipcrm -m, and for the semaphores we use ipcrm -s:

In my case I removed the memory segments:

ipcrm -m 1278378019
ipcrm -m 1278410788
ipcrm -m 1278443557
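If there are many segments to remove, a small loop over the ipcs output does the same job. This is just a sketch assuming the default ipcs -m column layout (shmid in the second column, owner in the third); double-check that the IDs belong to the dead instance before removing anything:

# remove every shared memory segment owned by oracle (use with care on a shared host)
for id in $(ipcs -m | awk '$3 == "oracle" {print $2}'); do
  ipcrm -m "$id"
done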

Finally after performing the listed actions, I was able to start the database without any problem :=)


The article ORA-12547 TNS lost contact on SLES12 and Oracle 11.2.0.x appeared first on Blog dbi services.

Full Export by schema or tablespace

Tom Kyte - Tue, 2018-02-27 00:06
Hello, I am making a full export with this command : <code>expdp system/systembase DIRECTORY=DATA_PUMP_DIR DUMPFILE=full.dmp LOGFILE=full.log FULL=y</code> My problem is that while doing the export, my disk fills up, and no more space is a...
Categories: DBA Blogs

Can I create index on composite primary key?

Tom Kyte - Tue, 2018-02-27 00:06
Hello Sir, I have one table Manager which has 4 columns.Out of which 3 column are composite primary key. create table manager(ID integer, Name varchar(20), Designation varchar(20), Salary integer); alter table manager add constraint t_pk prima...
Categories: DBA Blogs

EXPDP - tables only and only certain Schemas

Tom Kyte - Tue, 2018-02-27 00:06
Hi I want to do a datapump export from one database, and then do a datapump import back into another database. I only want to export tables, and only those belonging to two users - say USER1, USER2 So I plan to do an EXPDP with the following arg...
Categories: DBA Blogs

Dynamic Views in cursor

Tom Kyte - Tue, 2018-02-27 00:06
Hi Tom, I have defined a cursor using two views in its query like below. Cursor pmt_cur(CV_txn_id in number) is SELECT sum(Amount),tgt_group_cd from payments_view,transactions_view where payments.tcd_id=pmt_tgt_id and txn_id=CV_txn_Id; I have ...
Categories: DBA Blogs

Detect records that are not in sequence

Tom Kyte - Tue, 2018-02-27 00:06
I need to find the employees that are not having event type(join(1)/retire (2)) in a location with date sequence. The combination of join and retire events in a location should occur first, before employee join another location. Please note that the ...
Categories: DBA Blogs

Oracle Database 18c Indexing Related New Features (New Angels of Promise)

Richard Foote - Mon, 2018-02-26 22:29
Although the recently released Oracle 18c Database is really just 12.2.0.2 under the covers, there are a few little features and enhancements that are of interest from an indexing perspective. These include: Memory Optimized Rowstore Scalable Sequences Oracle Text indexing enhancements, such as automatic background index maintenance and new optimize index options JSON Search Index […]
Categories: DBA Blogs

18c, Cloud First and Cloud Only features: think differently

Yann Neuhaus - Mon, 2018-02-26 14:48

Remember the times when the Oracle Software features were the same on all platforms? When Oracle databases could be ported to any relevant platform? When we were able to try any feature, freely, by downloading the latest release software? Now we need to think differently. Because:

  • The new software is released on the Cloud first
  • The major new features will never be available on-premises
  • The Cloud here means the Oracle Cloud – not AWS, not Azure, not Google, not your local IaaS providers
  • Some new features are extended to on-premises for Oracle hardware only (Exadata, ODA)
  • All trial environments are paid services (though you can get free credits); this may change with Oracle XE 18c

And you are concerned because if you start your new developments on the Oracle Cloud, or simply train yourself on new features, you may rely on features that you will never have on-premises. In my opinion, it makes sense for a startup, or a new project, to start development and early production on the Cloud. However, there will probably be a point where the cost optimization will involve on-premises servers, or IaaS, or different Cloud providers. Then the features you used may not be available.

Another concern is financial: when justifying the cost of the 22% Support and Software Updates to your CFO, you may list all the new features. But be careful: most of the new features come with additional options, or will not be available outside of the Oracle Cloud PaaS.

If you tried the Oracle 18c release on the Oracle Cloud, you may have seen some additional information in the alert.log:

Capability Type : Network
capabilities requested : 1 detected : 0 Simulated : 0
Capability Type : Runtime Environment
capabilities requested : 400000FF detected : 4 Simulated : 0
Capability Type : Engineered Systems
capabilities requested : 3 detected : 0 Simulated : 0

So, it seems that Oracle is checking the capabilities of the platform to decide whether to enable some features or not. When you are not on the right one, you may encounter this kind of error, which is new in 18c:

[oracle@DBaaS18c ~]$ oerr ORA 12754
12754, 00000, "Feature %s is disabled due to missing capability %s."
// *Document: NO
// *Cause: This feature requires specific capabilities that were not present in the current database instance.
// *Action: Contact Oracle Support Services to enable this feature.

or maybe:

12755, 00000, "Feature %s is disabled due to unsupported capability."
// *Document: NO
// *Cause: This feature requires specific capabilities that were not supported
// in the current database instance.
// *Action: Contact Oracle Support Services to enable this feature.

For the moment, the on-premises binaries are available for Exadata only. But the Licensing documentation already gives you an idea: the following new features will not be available on-premises.

All the multitenant new features are for Oracle Cloud or Oracle Engineered systems only:

  • CDB Fleet Management
  • PDB Snapshot Carousel
  • Refreshable PDB switchover
  • Keystore for Each Pluggable Database

If you are on your own servers, or on one of the major cloud providers, you do not benefit from the latest software updates, even if you pay 22% of your license cost each year, and even on platforms where the core factor is at its maximum. You have support, and patches, but only a limited set of new features.

If you do not have the Active Data Guard option, you cannot benefit from most of the new features of the last releases. And buying this option can be expensive if you are on a ULA (because you will buy it for all processors), or on a non-Oracle cloud (because of the core factor), and even there some features will not be available. The latest one, Oracle Data Guard Automatic Correction of Non-logged Blocks, is available on the Oracle Cloud only, or on Exadata/ODA. It is not a big problem, as you can include this recovery after your nologging load, but it is important to know it.
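For reference, the manual recovery that exists since 12.2 on all platforms is done with RMAN on the standby, roughly like this (a sketch; the managed recovery has to be stopped first):

SQL> alter database recover managed standby database cancel;
RMAN> recover database nonlogged block;
SQL> alter database recover managed standby database disconnect;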

Note that with this new release some features also disappear: not only deprecated, not only desupported, but removed. Oracle Change Data Capture was desupported in 12c, and if you look at 18c you will see that it has been removed. And this is the last version with Oracle Streams: as mentioned in the documentation, you need to buy Oracle GoldenGate.

This looks like bad news in 18c, but consider it as a new patchset on 12cR2. Remember that 12cR2 brought amazing features to all platforms and all editions, such as the online clone or move of pluggable databases. The important thing is to be informed, and to think differently than we did when Oracle databases were portable to all platforms. Be careful with features that will not be available on all platforms. Consider the costs correctly. And also look at all those features that are available to everybody and are probably not used enough. The best way is to design the application to use the database efficiently (processing data in the database, lowering the roundtrips and context switches) on a limited number of CPU cores. Then, all those options or cloud credits will not be as expensive as you may think. Nobody likes vendor lock-in, but it may be the most cost-efficient solution.

 

The article 18c, Cloud First and Cloud Only features: think differently appeared first on Blog dbi services.

Upgrading Oracle SOA Suite 11g to Oracle SOA Suite 12c in an Existing ISG configuration

Steven Chan - Mon, 2018-02-26 11:06

I am pleased to announce the certification of steps to upgrade Oracle SOA Suite 11g to Oracle SOA Suite 12c for an existing Oracle E-Business Suite Integrated SOA Gateway (ISG) configuration. In Oracle E-Business Suite Release 12.2, ISG’s SOAP service provider framework uses Oracle SOA Suite and its E-Business Suite Adapter.

If your Oracle E-Business Suite Integrated SOA Gateway R12.2 is already configured with Oracle SOA Suite 11g (11.1.1.9.0) for SOAP-based web services, and if you plan to upgrade that Oracle SOA Suite to Oracle SOA Suite 12c (12.2.1.2), refer to Section 4.2 of Installing Oracle E-Business Suite Integrated SOA Gateway Release 12.2 (MOS 1311068.1).


Major steps to upgrade Oracle SOA Suite for ISG include:

  1. Oracle Fusion Middleware Pre-upgrade tasks
  2. Upgrade tasks
  3. Post-upgrade tasks
  4. ISG Setup Verification

These steps apply to a single node installation of Oracle SOA Suite as well as to a multiple node installation as per the Oracle Fusion Middleware Enterprise Deployment Guide for Oracle SOA Suite.


Categories: APPS Blogs

Match_recognize

Jonathan Lewis - Mon, 2018-02-26 08:59

In the spirit of Cary Millsap’s comment: “The fastest way to do anything is to not do it at all”, here’s my take (possibly not an original one) on solving problems:

“The best time to solve a problem is before it has happened.”

I spend quite a lot of my “non-contact” time thinking about boundary cases, feature collisions, contention issues, and any other things that could go wrong when you start to implement real systems with (new) Oracle features. The benefit of doing this, of course, is that when I’m looking at a client’s system I can often solve problems because I recognise symptoms that I’ve previously created “in the lab”. The strange thing about this is that there have been times when I’ve pushed Oracle to a breaking point, documented it, and then dismissed the threat because “no one would do that in real life” only to find that someone has done it in real life.

All this is just a preamble to a demonstration of a threat that comes with a terrific feature that is just beginning to gain greater acceptance as a solution to some interesting problems – and the demonstration is going to exaggerate the problem to a level that (probably) won’t appear in production. The driving example appeared as a question on the OTN/ODC database forum:

“I need customers who have done a transaction in September but not in October.”

There are obviously many ways to address this type of requirement (my first thought was to use the MINUS operator), and a few questions you might ask before trying to address it, but the OP had supplied some data to play with, which consisted of just a few rows of a table with three columns and some data restricted to just one year, and one solution offered was a very simple query using match_recognize():


CREATE TABLE TEST_TABLE   
  ( T_ID NUMBER, -- trans-id  
    CUST_ID NUMBER,   
    TRANS_DT DATE  
  ) ;  
                  
Insert into TEST_TABLE (T_ID,CUST_ID,TRANS_DT) values (1,100,to_date('12-SEP-17','DD-MON-RR'));  
Insert into TEST_TABLE (T_ID,CUST_ID,TRANS_DT) values (2,100,to_date('12-OCT-17','DD-MON-RR'));  
Insert into TEST_TABLE (T_ID,CUST_ID,TRANS_DT) values (3,200,to_date('12-SEP-17','DD-MON-RR'));  
Insert into TEST_TABLE (T_ID,CUST_ID,TRANS_DT) values (4,300,to_date('12-OCT-17','DD-MON-RR'));  
Insert into TEST_TABLE (T_ID,CUST_ID,TRANS_DT) values (5,400,to_date('12-JAN-17','DD-MON-RR'));  
Insert into TEST_TABLE (T_ID,CUST_ID,TRANS_DT) values (6,500,to_date('12-OCT-17','DD-MON-RR'));  
Insert into TEST_TABLE (T_ID,CUST_ID,TRANS_DT) values (7,500,to_date('12-MAR-17','DD-MON-RR'));  
Insert into TEST_TABLE (T_ID,CUST_ID,TRANS_DT) values (8,600,to_date('12-SEP-17','DD-MON-RR'));  
Insert into TEST_TABLE (T_ID,CUST_ID,TRANS_DT) values (9,600,to_date('12-JUL-17','DD-MON-RR'));  

commit;

select * from test_table
match_recognize
(
  partition by cust_id
  order by trans_dt
  pattern( x+ y* $)
  define
    x as extract(month from trans_dt)  = 9,
    y as extract(month from trans_dt) != 10
);
 
   CUST_ID
----------
       200
       600
      

The obvious benefit of this solution over a solution involving a set-wise MINUS is that it need only scan the data set once (whereas the MINUS strategy will be scanning it twice with a select distinct in each scan) – but it’s a solution that is likely to be unfamiliar to many people and may need a little explanation.
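For comparison, the set-wise version could be written like this (a quick sketch against the same test_table, not a tuned alternative):

select cust_id from test_table where extract(month from trans_dt) = 9
minus
select cust_id from test_table where extract(month from trans_dt) = 10;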

The partition by cust_id order by trans_dt means we sort the data by those two columns, breaking on cust_id. Then for each cust_id we walk through the data looking for a pattern which is defined as: “one or more rows where the month is September, followed by zero or more rows where the month is NOT October, followed by the end of the set for the customer”. The SQL leaves many details to default, so the result set is just the cust_id column and only one row per occurrence of the pattern (which, given the data set, can occur at most once per customer).

For a cust_id that shows a matching pattern the work we will have done is:

  • Walk through rows for Jan to Aug until we reach the first September – which is the start of the pattern
  • Keep on walking through to the last of the Septembers – which is a partial match
  • One of:
      • walk through zero rows of November and December and reach the end of cust_id, or
      • walk through one or more rows of November and/or December and then reach the end of cust_id
  • Record the end of the pattern by reporting one row
  • Move on to the next cust_id

The excitement starts when we think about a cust_id that doesn’t have a matching pattern – and for that I’m going to generate a new, extreme, data set.


rem
rem     Script:         match_recognize_07.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Feb 2018
rem

create table t1
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        99                              cust_id,
        to_date('01-Sep-2017')          trans_dt,
        lpad(rownum,1000,'0')           padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 -- > comment to avoid WordPress format issue
;

update t1
set
        trans_dt = to_date('01-Oct-2017','dd-mon-yyyy')
where
        rownum = 1
;

begin
        dbms_stats.gather_table_stats(
                ownname     => user,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );
end;
/

select  *
from    (
        select 
                t1.*,
                extract(year from trans_dt) yr, 
                extract(month from trans_dt) mth
        from
                t1
        )
match_recognize
(
        partition by cust_id
        order by trans_dt
        measures
                padding as t1_padding
        pattern( x+  y*  $ )
        define
                x as mth = 9,
                y as mth != 10
);

I’ve moved the calculation of month number from the define clause into an in-line view purely to make the match_recognize() clause a little tidier.

I’ve created a table with just one customer with 100,000 transactions on 1st September 2017, then I’ve updated one row from September to October. Thanks to that one row Oracle is not going to be able to find the requested pattern. I’ve added a padding column of 1,000 characters to the table and included it in the measures that I want to select, so Oracle will have to sort roughly 100MB of data (100,000 rows at roughly 1KB per row) before it starts walking the data to find matches – and, though it’s not visible in the script, the workarea settings mean the session won’t be allowed to expand its PGA to accommodate the whole 100MB.
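For reference, the kind of workarea settings I mean would look like this (a sketch with illustrative values, not the exact figures used for the test):

alter session set workarea_size_policy = manual;
alter session set sort_area_size = 10485760;  -- cap the sort workarea at roughly 10MB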

Test 1 – comment out the update and see how long it takes to produce a result: 0.67 seconds, and the padding value reported was the last one from the pattern.
Test 2 – put the update back in place and try again:

After running for 46 seconds with no result and interrupting the query, these are some figures from a snapshot of the session stats:

Name                                                 Value
----                                                 -----
CPU used when call started                           3,662
DB time                                              3,711
user I/O wait time                                   1,538
consistent gets                                     14,300
physical reads direct                            1,298,939
physical read IO requests                          736,478
physical read bytes                         10,640,908,288      
physical writes                                     25,228
physical writes direct                              25,228
physical reads direct temporary tablespace       1,298,939
physical writes direct temporary tablespace         25,228
table scan rows gotten                             100,000
table scan blocks gotten                            14,286

  • I’ve scanned a table of 14,286 blocks to find 100,000 rows.
  • I’ve sorted and spilled to disc, using roughly 25,000 blocks of direct path writes and reads to do the sort.
  • Then I’ve spent the rest of the time burning up CPU and reading 1.27 million blocks from the temporary tablespace trying to find a match

The way that basic pattern matching works on a match failure is to go back to the row after the one where the current match attempt started, and begin all over again. So in this example, after dumping 100MB of Septembers to temp, Oracle started at row 1, read 99,999 rows, then found the October that failed the match; so it went to row 2, read 99,998 rows, then found the October that failed the match; so it went to row 3, and so on. Every time it went back to (nearly) the beginning it had to start re-reading those 100,000 rows from temp, because the session wasn’t allowed to keep the whole 100MB in memory.

You need to avoid defining a pattern that has to scan large volumes of data to identify a single occurrence of the pattern if the matching process is likely to fail. Even if you can keep the appropriate volume of data in memory for the entire time and avoid a catastrophic volume of reads from the temporary tablespace, you can still see a huge amount of CPU being used to process the data – when I reduced the table from 100,000 rows to 10,000 rows it still took 99 CPU seconds to run the query.

tl;dr

Match_recognize() is a terrific tool, but you must remember two important details about the default behaviour when you think about using it:

  • You will sort a volume of data that is the number of input rows multiplied by the total length of the measures/partition output.
  • If you have a long sequence of rows that ends up failing to match a pattern Oracle goes back to the row after the start of the previous match attempt.

With the usual proviso that “large”, “small” etc. are all relative: keep the data volume small, and try to define patterns that will match short runs of rows.

Do note, however, that I engineered this example to produce a catastrophe. There are many non-default actions you can choose to minimise the workload you’re likely to produce with match_recognize(), and if you just spare a little time to think about worst case events you probably won’t need to face a scenario like this in a real production environment.
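As an example of the kind of choice I mean (a sketch only, not benchmarked here): for this engineered data set any match would have to start at the first row of the partition, so anchoring the pattern with ^ tells Oracle not to retry a failed attempt from every subsequent row. Note that the anchor changes the meaning for customers whose data does not start with a September, so this fits the engineered example rather than the original requirement:

match_recognize
(
        partition by cust_id
        order by trans_dt
        measures
                padding as t1_padding
        pattern( ^ x+ y* $ )
        define
                x as mth = 9,
                y as mth != 10
)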

See also:

Part 6 (which includes a list of earlier installments) of an introductory series to match_recognize() by Keith Laker.

A pdf file of Keith Laker’s presentation on match_recognize(), including some technical implementation details.

 

Oracle Systems Partner Webcast-Series: Oracle Cloud at Customer - Deliver the Oracle Public ...

      Oracle Cloud at Customer: Deliver the Oracle Public Cloud On...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle JET Web Applications – Automating Build, Package and Deploy (to Application Container Cloud) using a Docker Container

Amis Blog - Mon, 2018-02-26 07:06

The essential message of this article is the automation, for Oracle JET applications, of the flow from source code commit to a running application on Oracle Application Container Cloud, as shown in this picture:

[image]

I will describe the inside of the “black box” (actually light blue in this picture) where the build, package and deploy steps are performed for an Oracle JET application.

The outline of the approach: a Docker container is started in response to the code commit. This container contains all the tooling required to perform the necessary actions, including the scripts that actually run those actions. When the application has been deployed (or the resulting package is stored in an artifact repository), the container can be stopped. This approach is very clean – intermediate products that are created during the build process simply vanish along with the container. A fresh container is started for the next iteration.

Note: the end-to-end build and deploy flow takes about 2 to 3 minutes in my environment. That would obviously be horrible for a simple developer round trip, but it is actually quite acceptable for this type of ‘formal’ release to the shared cloud environment. This approach and this article are heavily inspired by the article (Deploy your apps to Oracle Cloud using PaaS Service Manager CLI on Docker) on Medium by Abhishek Gupta (who writes many very valuable articles, primarily around microservices and Oracle PaaS services such as Application Container Cloud).

Note: this article focuses on the final deployment of the JET application to Application Container Cloud. It would however be quite simple to modify (in fact, to simplify) the build container to not deploy the final ZIP file to Application Container Cloud, but instead push the file to an artifact repository or deploy it to some other type of runtime platform. It would not be very hard to take the ZIP file and create a fresh Docker container with that file that can be deployed on a Kubernetes cluster or any Docker runtime such as Oracle Container Cloud.
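As an illustration of that last option, such a Dockerfile could look roughly like this (a sketch; the base image, exposed port and file name are my assumptions, not taken from the repository):

# package the release ZIP as a self-contained Node runtime image
FROM node:8-alpine
RUN apk add --no-cache unzip
WORKDIR /app
COPY webshop.zip .
# the ZIP already contains the application and its node_modules
RUN unzip webshop.zip && rm webshop.zip
EXPOSE 8080
CMD ["npm", "start"]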

The sources – including a sample JET Application – are in this GitHub repo: https://github.com/lucasjellema/webshop-portal-soaring-through-the-cloud-native-sequel .

The steps I describe in this article are:

  • preparation of the Docker Container that will do the build-package-deploy actions
  • preparation of the Oracle JET application – to be turned from a locally run, developer-only, client-side web application into a stand-alone runnable enterprise web app with a server-side platform (Node with Express)
  • creation of the build script that will run inside the container and orchestrate the actions by the available tools to take the source all the way to the cloud
  • putting it all together

 

1. Preparation of the Docker Container that will do the build-package-deploy actions

The first step is the composition of the Docker Container. For this step, I have made good use of Abhishek’s article and the dockerfile he proposes in that article. I complemented Abhishek’s Dockerfiles with the tooling required for building Oracle JET applications.

A visual presentation of what the Docker Container will contain – and the steps made to put it together – is shown below:

[image]

Note: it is fun to bake Docker container images completely through a Docker file – and it is certainly convenient to share the instructions for creating a Docker container image in the form of a Docker file. However, when the steps are too complex to automate through a Docker file, there is a simple alternative: build as much of the container as you can through a Docker file, then run the container and complete it through manual steps. Finally, when the container does what you need it to do, you can commit the state of the container as your reusable container image. And perhaps at that point you can try to extend the Docker file with some of the manual steps, if you feel that maintaining the image will be a frequently recurring task.

The Docker build file that I finally put together is included below. The key steps:

  • the container is based on the “python:3.6.2-alpine3.6” image; this is done mainly because the PSM (Oracle PaaS Service Manager) command line tool requires a Python runtime environment
  • the apk package manager for Alpine Linux is used several times to add required packages to the image; it adds curl, zip, nodejs, nodejs-npm, bash, git and openssh
  • download and install the Oracle PSM command line tool (a Python application)
  • set up PSM for the target identity domain and user
  • install the Oracle JET Command Line tool that will be used for building the JET web application
  • copy the script build-app.sh that will be executed to run the end-to-end build-package-deploy flow

 

# extended from https://medium.com/oracledevs/quick-start-docker-ized-paas-service-manager-cli-f54eaf4ebcc7
# added npm, ojet-cli and git

FROM python:3.6.2-alpine3.6

ARG USERNAME
ARG PASSWORD
ARG IDENTITY_DOMAIN
ARG PSM_USERNAME
ARG PSM_PASSWORD
ARG PSM_REGION
ARG PSM_OUTPUT


WORKDIR "/oracle-cloud-psm-cli/"

RUN apk add --update curl && \
    rm -rf /var/cache/apk/*

RUN curl -X GET -u $USERNAME:$PASSWORD -H X-ID-TENANT-NAME:$IDENTITY_DOMAIN https://psm.us.oraclecloud.com/paas/core/api/v1.1/cli/$IDENTITY_DOMAIN/client -o psmcli.zip && \
	pip3 install -U psmcli.zip 

COPY psm-setup-payload.json .
RUN psm setup -c psm-setup-payload.json

RUN apk add --update nodejs nodejs-npm
RUN apk add --update zip

RUN npm install -g @oracle/ojet-cli

RUN apk update && apk upgrade &&  apk add --no-cache bash git openssh

COPY build-app.sh .

CMD ["/bin/sh"]

Use this command to build the container:

docker build --build-arg USERNAME="your ACC cloud username" --build-arg PASSWORD="the ACC cloud password" --build-arg IDENTITY_DOMAIN="your identity domain" --build-arg PSM_REGION="us" --build-arg PSM_OUTPUT="json" -t psm-cli .

assuming that this command is run in the directory where the docker file is located.

This will create a container image and tag it as psm-cli. When this command completes, you can find the image by running “docker images”. Subsequently, you can run a container based on the image: “docker run --rm -it psm-cli”.

     

    2. Preparation of the Oracle JET application

    When developing a JET (4.x) application, we typically use the Oracle JET CLI – the command line tool that helps us to quickstart a new application, create composite components, and serve the application locally as we are developing it to a browser with instant update of any file changes. The JET CLI is also used to build the application for release. The result of this step is the complete set of files needed to run the JET application in the browser. In order to actually offer the JET application to end users, it has to be served from a ‘web serving’ platform component – such as nginx or a backend in Python, Java or Node. Frequently, the JET application will require some server-side facilities that the backend serving the static JET application resources can also provide. For that reason, I select a JET-serving backend that I can easily leverage for these server-side facilities; for me, this is currently Node.

    In order to create a self-running JET application from the JET application built in the pipeline discussed in this article, I have added a simple Node & Express backend.

    I have used npm to create a new Node application (npm init jet-on-node). I have next created the directory bin and the file www. This file is the main entry point into the Node application that serves the JET application; it delegates most work to the module app, which is loaded from the file app.js in the root of this Node application, at path /jet-on-node.

     


    All static resources that the browser can access (including the JET application) go into the folder /jet-on-node/public. Module app defines – through Express – that requests for public resources (requests not handled by one of the URL path handlers) are taken care of by serving resources from the directory /public. Module app can handle other HTTP requests – for example from the JET application – and it could also implement the backend for Server Sent Events or WebSockets. Currently it handles the REST GET request to path “/about”, which returns some key data for the application:

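    The essence of that module is small; a minimal sketch of what such an app.js can look like (illustrative, not the exact code from the repository):

    // app.js - serve the JET application and a simple REST endpoint
    const express = require('express');
    const app = express();

    // requests not handled by a URL path handler are served from /public
    app.use(express.static('public'));

    // REST GET handler returning some key data for the application
    app.get('/about', (req, res) => {
      res.json({ application: 'Soaring Webshop Portal', version: 'v1.2.1' });
    });

    module.exports = app;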

    The dependencies for the jet-on-node application are defined in package.json; during the build process of the final application, we will use “npm install” to add the required server-side Node modules.

    At this point, we have extended our code base with a simple landing platform for the JET application that can serve the application at runtime. All that remains is to take all content under the /web directory and copy it to the jet-on-node/public folder. Then we can run the application using “npm start” in the directory jet-on-node. This will execute the start script in package.json – which is defined as “node ./bin/www”.

     

    3. Creation of the build script that will run inside the container and orchestrate the actions

    The JET build container is available. The JET application is available from a Git repository (in my example on GitHub). A number of steps are now required to get to a running application on Application Container Cloud. The first steps are shown below:

     

    [image]

    1. Clone the Git repo that contains the JET application (or pull the latest sources or a specific tag)

    2. Install all modules required by the JET application – by running npm install

    3. Use the Oracle JET command line utility to build the application for release: ojet build --release

    After this step, all runtime artifacts – including the JET libraries – are in the /web directory. These next steps turn these artifacts into a running application:

    4. Copy the contents of /web to /jet-on-node/public

    5. Install the modules required for the server side Node application by running npm install in directory jet-on-node

    6. Create a single zip file for all artifacts in the /jet-on-node directory – that includes both the JET application and its server-side backend Node application. This zip file is the release artifact for the JET application. As such, it can be pushed to an artifact repository or deployed to some other platform.

    7. Engage the psm command line interface (Oracle PaaS Service Manager CLI) to deploy the zip file to Application Container Cloud, for which psm was already configured during the creation of the build container.

    Note: the files manifest.json and deployment.json in the root of jet-on-node provide instructions to PSM and Application Container Cloud regarding the runtime settings for this application – including the runtime version of Node, the command for starting the application, the runtime memory per instance and the number of instances, as well as the values of environment variables to be passed to the application.
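    For example, a manifest.json for Application Container Cloud looks roughly like this (the values shown are illustrative, not the actual file from the repository):

    {
      "runtime": { "majorVersion": "8" },
      "command": "npm start",
      "notes": "Soaring Webshop Portal"
    }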


    The shell script build-app.sh (you may have to explicitly make this script executable, using “chmod u+x build-app.sh”) performs the steps described above (although perhaps not in the optimal way – feel free to fine-tune and improve it, and let me know about it).

    #git clone https://github.com/lucasjellema/webshop-portal-soaring-through-the-cloud-native-sequel
    # cd webshop-portal-soaring-through-the-cloud-native-sequel
    
    # fetch the latest sources of the JET application
    git pull
    wait
    
    # install the modules required by the JET application
    npm install
    wait
    # build the JET application for release (results go to ./web)
    ojet build --release
    wait
    # copy the build results into the public folder of the Node backend
    cp -a ./web/. ./jet-on-node/public
    wait
    cd jet-on-node
    wait
    # install the modules required by the server-side Node application
    npm install
    wait
    # create the single release artifact: JET app plus Node backend
    zip -r webshop.zip .
    wait
    cd /oracle-cloud-psm-cli/webshop-portal-soaring-through-the-cloud-native-sequel/jet-on-node
    
    # deploy the zip file to Application Container Cloud
    psm accs push -n SoaringWebshopPortal -r node -s hourly -d deployment.json -p webshop.zip

    The end-to-end flow through the build container during the release of the latest version of the JET application can now be depicted like this:

    [image]

     

    4. Putting it all together

    I will now try to demonstrate how this all works together. In order to do so, I will go through these steps – and illustrate them with screenshots:

    • make a change in the JET application
    • commit and push the change (to GitHub)
    • run the Docker build container psm-cli
    • run the script build-app.sh
    • wait for about three minutes (check the output in the build container and the application status in the ACC console)
    • access the updated Web Application

    The starting point for the application:

    [screenshot]

    1. Make a change

    The word Shopping Basket – next to the icon – seems superfluous, so I will remove it. And I will increase the version number from v1.2.0 to v1.2.1.


     

      2. Commit and push the change (to GitHub)


      The change is accepted in GitHub:

      [screenshot]

       

      3. Run the Docker build container psm-cli

      Run the Docker Quickstart Terminal (I am on Windows) and perform: “docker run --rm -it psm-cli”


       

      At this point, I lack a little bit of automation. The manual step I need to take (just the first time round) is to clone the JET application’s Git repository:

      git clone https://github.com/lucasjellema/webshop-portal-soaring-through-the-cloud-native-sequel

      and to move to the created directory

      cd webshop-portal-soaring-through-the-cloud-native-sequel/

      and to make the file build-app.sh executable:

      chmod u+x build-app.sh


      Note: as long as the container keeps running, I only have to run “git pull” and “./build-app.sh” for every next update of the JET application. The next step would be to configure a web hook that is triggered by the relevant commit in the GitHub repository.

       

      4. Run the script build-app.sh

      ./build-app.sh


      Wait for about three minutes. Check the output in the build container:

      [screenshots]

      and the application status in the ACC console:

      [screenshots]

       

      5. access the updated Web Application

      [screenshot]

      As you can see, after committing and pushing the change, I only had to run a single command line command to get the application fully rebuilt and redeployed. After stopping the Docker container, no traces remain of the build process. And I can easily share the container image with my team members to build the same application, or extend it to also build other or additional JET applications.

       

      Resources

      The inspirational article by Abhishek Gupta: https://medium.com/oracledevs/quick-start-docker-ized-paas-service-manager-cli-f54eaf4ebcc7

      The sources – including a sample JET Application – are in this GitHub repo: https://github.com/lucasjellema/webshop-portal-soaring-through-the-cloud-native-sequel .

      Oracle JET Command Line Interface: https://github.com/oracle/ojet-cli

      Docs on the Oracle PSM (PaaS Service Manager) CLI: https://docs.oracle.com/en/cloud/paas/java-cloud/pscli/abouit-paas-service-manager-command-line-interface.html

      Node & Express Tutorial Part 2: Creating a skeleton website: https://developer.mozilla.org/en-US/docs/Learn/Server-side/Express_Nodejs/skeleton_website

      Serving Public Files with Express – https://expressjs.com/en/starter/static-files.html

      Documentation for Oracle Application Container Cloud: https://docs.oracle.com/en/cloud/paas/app-container-cloud/dvcjv/getting-started-oracle-application-container-cloud-service.html

       

        The post Oracle JET Web Applications – Automating Build, Package and Deploy (to Application Container Cloud) using a Docker Container appeared first on AMIS Oracle and Java Blog.

        CSPs Improve Call Processing Performance 30% with Latest Release of Oracle Communications Session Border Controller

        Oracle Press Releases - Mon, 2018-02-26 07:00
        Press Release
        CSPs Improve Call Processing Performance 30% with Latest Release of Oracle Communications Session Border Controller New innovations in reliability, performance and serviceability drive next-generation of industry’s leading SBC platform and Acme Packet 6350

        Redwood Shores, Calif.—Feb 26, 2018

        Delivering new innovations in one of the world’s most widely deployed SBC offerings, Oracle today announced the next generation of Oracle Communications Session Border Controller (SBC) and the Acme Packet platform. With these enhancements, Communications Service Providers (CSPs) can benefit from new industry-leading VoLTE subscriber density and significantly improved call-processing performance, while achieving new levels of security and reliability with a reduced hardware footprint and lower costs.

        “CSPs are in the midst of a massive digital transformation. But while they move to diversify revenue streams and explore adjacent markets, it’s critical that they maintain stellar performance across their core services,” said Doug Suriano, group vice president, Oracle Communications. “These recent enhancements underscore our commitment to helping our CSP customers innovate, while also protecting their network core with the industry’s top performing, most cost-effective and reliable SBC solutions.”

        Oracle Communications SBC enables trusted, first-class communications across IP network access borders and IP interconnect borders, including fixed line, mobile (VoLTE), and over-the-top (OTT) services. Based on Acme Packet OS, Oracle Communications SBC operates on Oracle's range of purpose-built hardware platforms or general-purpose servers to deliver a unique combination of performance, capacity, high availability, and manageability. With the offering, CSPs can manage critical requirements for security, interoperability, reliability and quality, regulatory compliance, and revenue/cost optimization.

        Industry Leading Performance and Serviceability

        Whether it’s standard voice calls, video conferencing, instant messaging, or OTT, Oracle Communications SBC is the backbone for delivering trusted communications. With the new innovations in Oracle Communications SBC release S-Cz8.0.0, CSPs can enhance call-processing performance by up to 30 percent—delivering an even better experience to their customers. 

        The new offering now runs as a single software image that supports both Acme Packet platforms as well as virtualized deployments, enabling easier, seamless adoption of virtualization. New serviceability features also enable CSPs to quickly self-diagnose network and/or system issues, increasing network reliability and shortening time-to-repair. Finally, release S-Cz8.0.0 contains many enhancements increasing network border security, and dramatically increases the capacity of virtualized deployments.

        Acme Packet 6350—The Leading Platform for VoLTE Deployments

        Oracle introduces the latest and highest performing addition to the Acme Packet platform family, the Acme Packet 6350. Already deployed in the production networks of leading carriers today, the Acme Packet 6350 is a compact 3U platform that increases subscriber density threefold, while also reducing CSPs’ hardware footprint and costs. This makes the platform especially relevant for VoLTE access deployments, where Oracle is an industry leader in VoLTE subscriber density under real-world conditions.

        Additional benefits and features of the Acme Packet 6350 include:

        • 8 Core CPU for even greater processing power
        • 48 GB system memory providing added performance and incredible subscriber density
        • High-density transcoding
        • Field proven architecture, with a high capacity in a small (3RU) footprint
        • Leverages many Acme Packet 6300 components, including the NIU, TCU, and SSM3

        Oracle Communications provides the integrated communications and cloud solutions that enable users to accelerate their digital transformation journey—from customer experience to digital business to network evolution. See Oracle Communications SBC in action at Mobile World Congress, Barcelona, February 26–March 1, 2018, Hall 3, Booth 3B30.

        Contact Info
        Katie Barron
        Oracle
        +1.202.904.1138
        katie.barron@oracle.com
        Raleigh Miller
        Burson-Marsteller
        +1.202.530.4554
        raleigh.miller@bm.com
        About Oracle

        The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

        Trademarks

        Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


        A free persistent Google Cloud service with Oracle XE

        Yann Neuhaus - Mon, 2018-02-26 01:40

        In a previous post I listed several free online services which run Oracle XE, so that you can test your SQL easily. You may want to use Oracle XE further, with full access to the database and its host, and still from a web browser. You probably have a Google account. Then you also have a virtual machine on the Google Cloud (0.5 vCPU / 1.70 GB RAM, boostable to 1 vCPU / 3.75 GB) and 5 GB of persistent storage (retained as long as you have used it within the previous 120 days). Just try the Google Cloud Shell: https://console.cloud.google.com/cloudshell.
        In this post, I explain how to install Oracle XE there.

        First, you need to download Oracle XE. You do that on your laptop, to upload it to the Google Cloud Shell afterwards. For legal reasons, there is no automated way to download it with wget: you have to manually accept the OTN License Terms at http://www.oracle.com/technetwork/database/database-technologies/express-edition/downloads/index.html and choose ‘Oracle Database Express Edition 11g Release 2 for Linux x64’

        You can try to upload it to the Cloud Shell directly (menu on the top right – upload file), but I had problems with the size of the file, so I split it into two files:

        split -b 150M oracle-xe-11.2.0-1.0.x86_64.rpm.zip

        You have ‘split’ even on Windows (in the Ubuntu Bash shell), but you can also use any other tool; 7-Zip can do that.

        I uploaded the two files:
        CaptureG0003

        Now on the Google Cloud shell, concatenate the files back to the .zip:

        franck_pachot@cloudshell:~$ cat xa* > oracle-xe-11.2.0-1.0.x86_64.rpm.zip

        Unzip it:

        franck_pachot@cloudshell:~$ unzip oracle-xe-11.2.0-1.0.x86_64.rpm.zip
        Archive: ora.zip
        creating: Disk1/
        creating: Disk1/upgrade/
        inflating: Disk1/upgrade/gen_inst.sql
        creating: Disk1/response/
        inflating: Disk1/response/xe.rsp
        inflating: Disk1/oracle-xe-11.2.0-1.0.x86_64.rpm

        This .zip contains a .rpm, but we are on Debian in the Google Cloud Shell. In addition to that, I will not follow the standard installation of Oracle XE, because only my $HOME filesystem is persistent and I want everything there. I need rpm2cpio to extract files from the .rpm, and libaio1 to install Oracle:

        franck_pachot@cloudshell:~$ sudo apt-get -y install rpm2cpio libaio1

        Here is the extraction:

        franck_pachot@cloudshell:~$ rpm2cpio Disk1/oracle-xe-11.2.0-1.0.x86_64.rpm | cpio -idmv

        This extracted the directories u01, etc and usr into my $HOME directory, and I’ll leave the Oracle Home there.
        I can remove the intermediate files:

        franck_pachot@cloudshell:~$ rm -rf xa? oracle-xe-11.2.0-1.0.x86_64.rpm.zip Disk1

        The Oracle XE deployment contains a ‘createdb.sh’ which will create the XE database. You don’t have dbca here, and you don’t have templates: Oracle XE is built to be small.
        Just set ORACLE_HOME, PATH and LD_LIBRARY_PATH, and run createdb.sh:

        export ORACLE_HOME=$HOME/u01/app/oracle/product/11.2.0/xe
        echo "$PATH" | grep "$ORACLE_HOME" || export PATH=$PATH:$ORACLE_HOME/bin
        echo "$LD_LIBRARY_PATH" | grep "$ORACLE_HOME" || export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
        createdb.sh

        This takes time: create database, catalog, catproc… and then you have your database.

        The listener is not started. We need to create the directory for its log, and define listener.ora to listen on the default port:

        mkdir -p ./u01/app/oracle/product/11.2.0/xe/network/log
        echo "LISTENER=(DESCRIPTION_LIST=(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = 0.0.0.0)(PORT = 1521))))" > ./u01/app/oracle/product/11.2.0/xe/network/admin/listener.ora
        lsnrctl start
        export ORACLE_SID=XE
        sqlplus sys/oracle as sysdba <<<'alter system register;'

        You should see the XE service registered here:

        franck_pachot@cloudshell:~$ lsnrctl status
         
        LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 25-FEB-2018 23:01:40
         
        Copyright (c) 1991, 2011, Oracle. All rights reserved.
         
        Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=0.0.0.0)(PORT=1521)))
        STATUS of the LISTENER
        ------------------------
        Alias LISTENER
        Version TNSLSNR for Linux: Version 11.2.0.2.0 - Production
        Start Date 25-FEB-2018 23:00:01
        Uptime 0 days 0 hr. 1 min. 38 sec
        Trace Level off
        Security ON: Local OS Authentication
        SNMP OFF
        Listener Parameter File /home/frank_pachot/u01/app/oracle/product/11.2.0/xe/network/admin/listener.ora
        Listener Log File /home/frank_pachot/u01/app/oracle/product/11.2.0/xe/network/log/listener.log
        Listening Endpoints Summary...
        (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))
        (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=8080))(Presentation=HTTP)(Session=RAW))
        Services Summary...
        Service "XE" has 1 instance(s).
        Instance "XE", status READY, has 1 handler(s) for this service...
        Service "XEXDB" has 1 instance(s).
        Instance "XE", status READY, has 1 handler(s) for this service...
        The command completed successfully

        Note that you cannot access your Google Cloud Shell from outside, so you can only connect locally. But having a listener and connecting through services is always a good idea.

        If your session is inactive, you may lose the connection and even have the VM stopped. But your $HOME will still be there when you restart, so you can set up the .profile to set the correct environment and start the listener and the database if they are not already running:
        cat >> ~/.profile<<'END'
        export ORACLE_HOME=$HOME/u01/app/oracle/product/11.2.0/xe
        echo "$PATH" | grep "$ORACLE_HOME" || export PATH=$PATH:$ORACLE_HOME/bin
        echo "$LD_LIBRARY_PATH" | grep "$ORACLE_HOME" || export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
        export ORACLE_SID=XE
        ps -edf | grep [t]nslsnr || lsnrctl start
        ps -edf | grep [s]mon_XE || sqlplus sys/oracle as sysdba <<< startup
        END

        I don’t use /etc/oratab here because it is outside of the persistent area.

        We cannot connect ‘/ as sysdba’ because we are not in the ‘dba’ group. I don’t think we can change this in Oracle XE. Of course, we could sudo to root and add the group, but that would not be persistent. However, there is no need for it: the password for SYS is “oracle” and you can create all the users you want. The database, being stored under $HOME, is persistent.
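        For example, to create a first schema to play with (a simple sketch; the user name and password are illustrative):

        sqlplus sys/oracle as sysdba <<'END'
        create user demo identified by demo quota unlimited on users;
        grant create session, create table to demo;
        END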

        Here are my datafiles:

        franck_pachot@cloudshell:~$ rman target sys/oracle
         
        Recovery Manager: Release 11.2.0.2.0 - Production on Sun Feb 25 21:28:00 2018
         
        Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
         
        connected to target database: XE (DBID=2850165315)
         
        RMAN> report schema;
         
        using target database control file instead of recovery catalog
        Report of database schema for database with db_unique_name XE
         
        List of Permanent Datafiles
        ===========================
        File Size(MB) Tablespace           RB segs Datafile Name
        ---- -------- -------------------- ------- ------------------------
        1    280      SYSTEM               ***     /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/system.dbf
        2    190      SYSAUX               ***     /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/sysaux.dbf
        3    235      UNDOTBS1             ***     /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/undotbs1.dbf
        4    100      USERS                ***     /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/users.dbf
         
        List of Temporary Files
        =======================
        File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
        ---- -------- -------------------- ----------- --------------------
        1    20       TEMP                 500         /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/temp.dbf

        You find the alert.log under $ORACLE_HOME/log (while the database files are all under $ORACLE_HOME/dbs):

        franck_pachot@cloudshell:~$ tail $HOME/u01/app/oracle/product/11.2.0/xe/log/diag/rdbms/xe/XE/trace/alert_XE.log
        Thread 1 advanced to log sequence 17 (LGWR switch)
        Current log# 2 seq# 17 mem# 0: /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/log2.dbf
        Sun Feb 25 22:01:05 2018
        Shared IO Pool defaulting to 44MB. Trying to get it from Buffer Cache for process 2875.
        Sun Feb 25 22:09:38 2018
        Thread 1 advanced to log sequence 18 (LGWR switch)
        Current log# 3 seq# 18 mem# 0: /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/log3.dbf
        Sun Feb 25 22:09:43 2018
        SERVER COMPONENT id=UTLRP_BGN: timestamp=2018-02-25 22:09:43
        SERVER COMPONENT id=UTLRP_END: timestamp=2018-02-25 22:09:50

        The limitations and features of the Google Cloud Shell are documented here: https://cloud.google.com/shell/docs/features. In addition to the command line (through ‘tmux’, which allows you to split the screen into different panes) you have a file editor in the browser. You can also install APEX, as you have browser access to port 8080 in https (icon on the top right, just before the menu).

        The major limitation here comes from Oracle XE being an old version (11.2.0.2), but Oracle XE 18c should arrive this year with the latest features. Oracle XE 18c may also come with EM Express, and the Google Cloud Shell gives access to https. I just hope that there will be a small image for Oracle XE 18c, as we have only 5 GB here. Maybe a Docker container will be easier then, with only the database in an external volume under $HOME. We will see; in the meanwhile, there is already a lot we can do with Oracle XE. You can play with backup/recovery scenarios, and you will always be able to re-create the database by running createdb.sh again.

        Added 26-FEB-2018

        As I said that this can be a good lab to practice backup/recovery scenarios, you should run in archivelog mode:

        sqlplus sys/oracle as sysdba <<END
        shutdown immediate;
        startup mount;
        alter database archivelog;
        alter database open;
        END

        Then, to avoid filling up the recovery area, you can back up the database and the archived logs frequently. You don’t need to put the backups on the persistent storage, as this is a lab.
        I suggest to put the following at the end of the .profile:

        rman target sys/oracle > last_backup.log <<<"set echo on; configure channel device type disk format '/var/tmp/%U'; configure backup optimization on; configure controlfile autobackup on; crosscheck backup; delete noprompt expired backup; backup database plus archivelog; delete noprompt obsolete;" &

        This will run a backup to /var/tmp when you connect, delete obsolete backups, and remove expired ones (as they will be gone if the machine is reset after long inactivity).

         

        The article A free persistent Google Cloud service with Oracle XE appeared first on Blog dbi services.

        Ubuntu Kernel: Adding a new menuentry as default before menuentry 'Ubuntu' (grub)

        Dietrich Schroff - Sun, 2018-02-25 15:27
        After the problem with s2disk/hibernate on my Ubuntu 17.10, I wanted to change the grub boot menu:
        I wanted to add my new default kernel at the beginning, before the 'Ubuntu' menuentry.

        Adding the menuentry inside /boot/grub/grub.cfg does not work, because after the first kernel update the command update-grub will overwrite all changes.

        Easiest way:
        Go to /etc/grub.d/ and move

mv 40_custom 09_custom

Inside 09_custom add your new menuentry (just copy the one you want from /boot/grub/grub.cfg):
        root@zerberus:/etc/grub.d# cat 09_custom
        #!/bin/sh
        exec tail -n +3 $0
        # This file provides an easy way to add custom menu entries.  Simply type the
        # menu entries you want to add after this comment.  Be careful not to change
        # the 'exec tail' line above.
                menuentry 'Ubuntu, mit Linux 4.13.0-17-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.13.0-17-generic-advanced-40d34826-4f8a-aefc' {
                        recordfail
                        load_video
                        gfxmode $linux_gfx_mode
                        insmod gzio
                        if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                        insmod part_gpt
                        insmod ext2
                        set root='hd0,gpt5'
                        if [ x$feature_platform_search_hint = xy ]; then
                          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt5 --hint-efi=hd0,gpt5 --hint-baremetal=ahci0,gpt5  40d34826-4f8a-aefc-d422f755d339
                        else
                          search --no-floppy --fs-uuid --set=root 40d34826-4f8a-aefc
                        fi
                        echo    'Linux 4.13.0-17-generic wird geladen …'
                        linux   /boot/vmlinuz-4.13.0-17-generic root=UUID=40d34826-4f8a-aefc ro  noplymouth resume=UUID=709f3f5a-5e73-b42d-b93915cea971
                        echo    'Initiale Ramdisk wird geladen …'
                        initrd  /boot/initrd.img-4.13.0-17-generic
                }
And then run update-grub once again and you are done…
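The files in /etc/grub.d/ are processed in lexical order, so 09_custom now emits its entry before the standard 10_linux ones. A quick sanity check after update-grub, plus making sure grub actually boots the first entry, could look like this sketch (GRUB_DEFAULT=0 selects the first menuentry):

grep "^menuentry" /boot/grub/grub.cfg | head -3
grep "^GRUB_DEFAULT" /etc/default/grub
# expect GRUB_DEFAULT=0 here; otherwise set it and re-run update-grub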

        PDB upgrade from 12c to 18c

        Yann Neuhaus - Sun, 2018-02-25 14:20

        Oracle 18c is out, in the Oracle Cloud, and the first thing I do with a new version is testing how long it takes to upgrade a previous version PDB by unplug/plug. Faster upgrade should be the benefit of having a slim dictionary where the system objects are reduced to metadata links and data links. However, it looks like upgrading the PDB dictionary still takes the same time as upgrading the CDB$ROOT.

        The idea is to create a DBaaS service with a new CDB in 18.1 and plug a PDB coming from 12.2.0.1. Actually, I’m saying 18.1 but that may be 18.0 as I’m now lost in those version numbers. The cloud service was created with version: “18.0.0.0”, V$VERSION displays 18.1.0.0 for the release and 18.0.0.0 for the version:
        Connected to:
        Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
        Version 18.1.0.0.0

My understanding is that 18.0.0.0 is the version of the 18c dictionary, which will need a full upgrade only for 19c (19.0.0.0), and that 18.1.0.0 is the software version, which will be incremented by Release Updates later.
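Both numbers can be queried directly; here is a minimal sketch (the BANNER_FULL column only exists from 18c on):

sqlplus -s / as sysdba <<'END'
select banner_full from v$version;
select version from v$instance;
END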

        I have an unplugged PDB that I plug into the new CDB:
        SQL> create pluggable database PDB0 using '/u01/app/temp/PDB0.pdb';
        Pluggable database PDB0 created.

        When I open it, I get a warning:
        SQL> alter pluggable database pdb0 open;
         
        ORA-24344: success with compilation error
        24344. 00000 - "success with compilation error"
        *Cause: A sql/plsql compilation error occurred.
        *Action: Return OCI_SUCCESS_WITH_INFO along with the error code
         
        Pluggable database PDB0 altered.

Then I check PDB_PLUG_IN_VIOLATIONS:
         
        SQL> select * from pdb_plug_in_violations;
        TIME NAME CAUSE TYPE ERROR_NUMBER LINE MESSAGE STATUS ACTION CON_ID
        ---- ---- ----- ---- ------------ ---- ------- ------ ------ ------
        24-FEB-18 08.35.16.965295000 PM PDB0 OPTION ERROR 0 1 Database option APS mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
        24-FEB-18 08.35.16.966343000 PM PDB0 OPTION ERROR 0 2 Database option CATALOG mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
        24-FEB-18 08.35.16.966556000 PM PDB0 OPTION ERROR 0 3 Database option CATJAVA mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
        24-FEB-18 08.35.16.966780000 PM PDB0 OPTION ERROR 0 4 Database option CATPROC mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
        24-FEB-18 08.35.16.966940000 PM PDB0 OPTION ERROR 0 5 Database option CONTEXT mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
        24-FEB-18 08.35.16.967096000 PM PDB0 OPTION ERROR 0 6 Database option DV mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
        24-FEB-18 08.35.16.967250000 PM PDB0 OPTION ERROR 0 7 Database option JAVAVM mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
        24-FEB-18 08.35.16.967403000 PM PDB0 OPTION ERROR 0 8 Database option OLS mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
        24-FEB-18 08.35.16.967602000 PM PDB0 OPTION ERROR 0 9 Database option ORDIM mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
        24-FEB-18 08.35.16.967785000 PM PDB0 OPTION ERROR 0 10 Database option OWM mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
        24-FEB-18 08.35.16.967939000 PM PDB0 OPTION ERROR 0 11 Database option SDO mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
        24-FEB-18 08.35.16.968091000 PM PDB0 OPTION ERROR 0 12 Database option XDB mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
        24-FEB-18 08.35.16.968246000 PM PDB0 OPTION ERROR 0 13 Database option XML mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
        24-FEB-18 08.35.16.968398000 PM PDB0 OPTION ERROR 0 14 Database option XOQ mismatch: PDB installed version 12.2.0.1.0. CDB installed version 18.0.0.0.0. PENDING Fix the database option in the PDB or the CDB 1
        24-FEB-18 08.35.16.971138000 PM PDB0 Parameter WARNING 0 1 CDB parameter compatible mismatch: Previous '12.2.0' Current '18.0.0' PENDING Please check the parameter in the current CDB 1
        24-FEB-18 08.35.17.115346000 PM PDB0 VSN not match ERROR 0 1 PDB's version does not match CDB's version: PDB's version 12.2.0.1.0. CDB's version 18.0.0.0.0. PENDING Either upgrade the PDB or reload the components in the PDB. 4

The messages are clear: all components have a 12.2.0.1 dictionary and must be upgraded to an 18.0.0.0.0 one.

        The PDB is opened in MIGRATE mode with only RESTRICTED sessions enabled:
        SQL> show pdbs
        SP2-0382: The SHOW PDBS command is not available.
        SQL> pdbs
         
        CON_ID CON_NAME OPEN_MODE RESTRICTED STATUS FOREIGN_CDB_DBID FOREIGN_PDB_ID CREATION_TIME REFRESH_MODE REFRESH_INTERVAL LAST_REFRESH_SCN CDB_DBID CURRENT_SCN
        ------ -------- --------- ---------- ------ ---------------- -------------- ------------- ------------ ---------------- ---------------- -------- -----------
        2 PDB$SEED READ ONLY NO NORMAL 1201448 2 11:42:25 NONE 942476327 3958500
        3 CDB1PDB MOUNTED NORMAL 942476327 2 19:58:55 NONE 942476327 3958500
        4 PDB0 MIGRATE YES NEW 941386968 3 20:34:50 NONE 942476327 3958500
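
By the way, if your client has no SHOW PDBS (and no ‘pdbs’ alias), a plain query over V$PDBS shows the same open mode and restricted status (a sketch):

sqlplus -s / as sysdba <<'END'
select con_id, name, open_mode, restricted from v$pdbs;
END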

        Then, here is the upgrade for this newly plugged PDB0:

        [oracle@DBaaS18c 18c]$ dbupgrade -c PDB0
         
        Argument list for [/u01/app/oracle/product/18.0.0/dbhome_1/rdbms/admin/catctl.pl] Run in c = PDB0
        Do not run in C = 0
        Input Directory d = 0
        Echo OFF e = 1
        Simulate E = 0
        Forced cleanup F = 0
        Log Id i = 0
        Child Process I = 0
        Log Dir l = 0
        Priority List Name L = 0
        Upgrade Mode active M = 0
        SQL Process Count n = 0
        SQL PDB Process Count N = 0
        Open Mode Normal o = 0
        Start Phase p = 0
        End Phase P = 0
        Reverse Order r = 0
        AutoUpgrade Resume R = 0
        Script s = 0
        Serial Run S = 0
        RO User Tablespaces T = 0
        Display Phases y = 0
        Debug catcon.pm z = 0
        Debug catctl.pl Z = 0
         
        catctl.pl VERSION: [18.0.0.0.0] STATUS: [Production] BUILD: [RDBMS_18.1.0.0.0_LINUX.X64_180103.1] ...

The build number mentions 18.1, built on 03-JAN-2018.
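The argument list above also shows the parallelism flags: -n for the SQL process count and -N for the per-PDB process count. For a whole-CDB upgrade you would simply drop -c; a sketch with explicit parallelism (values to be adapted to your OCPUs) would be:

dbupgrade -n 4 -N 2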

        Look at the summary to see the time it takes:
        Oracle Database Release 18 Post-Upgrade Status Tool 02-24-2018 21:36:5
        [PDB0]  
        Component Current Full Elapsed Time
        Name Status Version HH:MM:SS
         
        Oracle Server UPGRADED 18.1.0.0.0 00:13:37
        JServer JAVA Virtual Machine UPGRADED 18.1.0.0.0 00:00:51
        Oracle XDK UPGRADED 18.1.0.0.0 00:00:21
        Oracle Database Java Packages UPGRADED 18.1.0.0.0 00:00:05
        OLAP Analytic Workspace UPGRADED 18.1.0.0.0 00:00:11
        Oracle Label Security UPGRADED 18.1.0.0.0 00:00:03
        Oracle Database Vault UPGRADED 18.1.0.0.0 00:00:34
        Oracle Text UPGRADED 18.1.0.0.0 00:00:11
        Oracle Workspace Manager UPGRADED 18.1.0.0.0 00:00:18
        Oracle Real Application Clusters UPGRADED 18.1.0.0.0 00:00:00
        Oracle XML Database UPGRADED 18.1.0.0.0 00:00:49
        Oracle Multimedia UPGRADED 18.1.0.0.0 00:01:03
        Spatial UPGRADED 18.1.0.0.0 00:02:06
        Oracle OLAP API UPGRADED 18.1.0.0.0 00:00:08
        Upgrade Datapatch 00:00:05
        Final Actions 00:00:09
        Post Upgrade 00:00:02
        Post Upgrade Datapatch 00:00:04
         
        Total Upgrade Time: 00:20:47 [PDB0]  
        Database time zone version is 26. It is older than current release time
        zone version 31. Time zone upgrade is needed using the DBMS_DST package.
         
        Grand Total Upgrade Time: [0d:0h:21m:10s]
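
The time zone message at the end concerns the DST data, not the dictionary itself; the currently installed version can be checked before running the DBMS_DST procedures (a sketch):

sqlplus -s / as sysdba <<'END'
select * from v$timezone_file;
END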

[Screenshot: Capture18cCDBROOTupg]
Here we see 18.1, but the important number is the time: 21 minutes… Once again, I see no improvement in the time to upgrade the PDB dictionary. This was on a service with 2 OCPUs, and I’ve run a whole CDB upgrade with a similar shape: the time to upgrade the CDB$ROOT is exactly the same – see the screenshot on the right.

        Finally I open the PDB:

        SQL> alter pluggable database pdb0 open;
        Pluggable database PDB0 altered.

        And check that the violations are resolved:

SQL> select * from pdb_plug_in_violations where status<>'RESOLVED';
         
        TIME NAME CAUSE TYPE ERROR_NUMBER LINE MESSAGE STATUS ACTION CON_ID
        ---- ---- ----- ---- ------------ ---- ------- ------ ------ ------
        24-FEB-18 09.46.25.302228000 PM PDB0 OPTION WARNING 0 15 Database option RAC mismatch: PDB installed version 18.0.0.0.0. CDB installed version NULL. PENDING Fix the database option in the PDB or the CDB 4

        Ok, I suppose I can ignore that as this is not RAC.

        I’ve not seen a lot of differences in the dbupgrade output. There’s a new summary of versions before and after upgrade, which was not there in 12c:

        DOC>#####################################################
        DOC>#####################################################
        DOC>
        DOC> DIAG OS Version: linux x86_64-linux-thread-multi 2.6.39-400.211.1.el6uek.x86_64
        DOC> DIAG Database Instance Name: CDB1
        DOC> DIAG Database Time Zone Version: 31
        DOC> DIAG Database Version Before Upgrade: 12.2.0.1.0
        DOC> DIAG Database Version After Upgrade: 18.1.0.0.0
        DOC>#####################################################
        DOC>#####################################################

        However, be careful with this information. The OS Version is not correct:

        [opc@DB ~]$ uname -a
        Linux DB 4.1.12-112.14.10.el6uek.x86_64 #2 SMP Mon Jan 8 18:24:01 PST 2018 x86_64 x86_64 x86_64 GNU/Linux

It seems that this info comes from Config.pm, i.e. it is the OS version recorded when the Perl binaries were built…

        In summary, nothing changes here about the time it takes to upgrade a PDB when plugged into a new CDB.
        However, in 18c (and maybe only with next Release Updates) we should have a way to get this improved by recording the upgrade of CDB$ROOT and re-playing a trimmed version on the PDB dictionaries, in the same way as in Application Containers for application upgrades. We already see some signs of it with ‘_enable_cdb_upgrade_capture’ undocumented parameter and PDB_UPGRADE_SYNC database property. This may even become automatic when PDB is opened with the PDB_AUTO_UPGRADE property. But that’s for the future, and not yet documented.
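Whether such a property is set on your CDB can be checked from DATABASE_PROPERTIES (a minimal sketch):

sqlplus -s / as sysdba <<'END'
select property_name, property_value
from database_properties
where property_name like '%UPGRADE%';
END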

        For the moment, you still need to run a full catupgrd on each container, through catctl.pl called by the ‘dbupgrade’ script. Here on a 2 OCPU service, it takes 20 minutes.
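
For reference, ‘dbupgrade’ is only a shell wrapper; the equivalent direct call for this PDB (a sketch of what runs under the hood, not an additional step) is:

cd $ORACLE_HOME/rdbms/admin
$ORACLE_HOME/perl/bin/perl catctl.pl -c 'PDB0' catupgrd.sql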


The article PDB upgrade from 12c to 18c appeared first on Blog dbi services.

Dataguard switchover and switch back

        Tom Kyte - Sun, 2018-02-25 11:26
Hi Team, I am trying to do a manual switchover, not by using the switchover command; this is just for my learning purpose. If my understanding of the facts is correct, below are the steps: 1. shutdown the primary, copy the control file and re...
        Categories: DBA Blogs

        finding min max from dataset

        Tom Kyte - Sun, 2018-02-25 11:26
I have the following initial dataset:

F_ID  L_CAT  CHG_DT
F1    VHL    01-FEB-2016
F1    VHL    10-FEB-2016
F1    VHL    15-FEB-2016
F1    MHL    20-FEB-2016
F1    VHL    25-FEB-2016
F1    VHL    28-FEB-2016
F1    MHL    05-MAR-2016
F1    MHL    10-MAR-2016
F2    VHL    01-F...
        Categories: DBA Blogs

        Using SQL profiles and baselines in Oracle 11g Express Edition

        Tom Kyte - Sun, 2018-02-25 11:26
Hi, I'm trying to test some scenarios on SQL profiles and baselines in Oracle 11g Express Edition. Even if there is an active profile or baseline on a query, when it is executed or its execution plan is generated, I can see that it is not being used. Is there a...
        Categories: DBA Blogs

        Configuring huge pages for your PostgreSQL instance, Debian version

        Yann Neuhaus - Sun, 2018-02-25 10:21

In the last post we had a look at how you can configure huge pages on RedHat and CentOS systems. For Debian and Debian-based systems the procedure is different, as Debian does not come with tuned. Let's see how it works there.

        Checking the basic system configuration works the same in Debian as in RedHat based distributions by checking the /proc/meminfo file:

        postgres@debianpg:/home/postgres/ [PG1] cat /proc/meminfo | grep -i huge
        AnonHugePages:         0 kB
        ShmemHugePages:        0 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       2048 kB
        

        So nothing configured for huge pages in the default configuration. Using the same procedure from the last post this is how you calculate the required huge pages for the PostgreSQL instance:

        postgres@debianpg:/home/postgres/ [PG1] head -1 $PGDATA/postmaster.pid
        6661
        postgres@debianpg:/home/postgres/ [PG1] grep ^VmPeak /proc/6661/status
        VmPeak:	  393836 kB
        postgres@debianpg:/home/postgres/ [PG1] grep ^Hugepagesize /proc/meminfo
        Hugepagesize:       2048 kB
        postgres@debianpg:/home/postgres/ [PG1] echo "393836/2048" | bc
        192
        
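The same arithmetic is easy to script if you want to recompute it after configuration changes (a small sketch, rounding up instead of truncating as bc does):

pid=$(head -1 $PGDATA/postmaster.pid)
peak_kb=$(awk '/^VmPeak/ {print $2}' /proc/$pid/status)
hp_kb=$(awk '/^Hugepagesize/ {print $2}' /proc/meminfo)
echo $(( (peak_kb + hp_kb - 1) / hp_kb ))    # pages needed, rounded up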

We’ll need at least 192 pages. Let’s add that, with a bit of headroom, to /etc/sysctl.conf:

        postgres@debianpg:/home/postgres/ [PG1] sudo bash
        root@debianpg:/home/postgres$ echo "vm.nr_hugepages=200" >> /etc/sysctl.conf
        

        Notify the system about that change:

        root@debianpg:/home/postgres$ sysctl -p
        vm.nr_hugepages = 200
        

        … and we have 200 huge pages available:

        postgres@debianpg:/home/postgres/ [PG1] cat /proc/meminfo | grep -i huge
        AnonHugePages:         0 kB
        ShmemHugePages:        0 kB
        HugePages_Total:     200
        HugePages_Free:      200
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       2048 kB
        

Again, let's force PostgreSQL to use huge pages and restart the instance:

        postgres@debianpg:/home/postgres/ [PG1] psql -c "alter system set huge_pages=on" postgres
        ALTER SYSTEM
        postgres@debianpg:/home/postgres/ [PG1] pg_ctl -D $PGDATA restart -m fast
        waiting for server to shut down.... done
        server stopped
        waiting for server to start....2018-02-25 17:13:59.398 CET [6918] LOG:  listening on IPv6 address "::1", port 5432
        2018-02-25 17:13:59.398 CET [6918] LOG:  listening on IPv4 address "127.0.0.1", port 5432
        2018-02-25 17:13:59.403 CET [6918] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
        2018-02-25 17:13:59.421 CET [6919] LOG:  database system was shut down at 2018-02-25 17:13:59 CET
        2018-02-25 17:13:59.427 CET [6918] LOG:  database system is ready to accept connections
         done
        server started
        
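Remember that huge_pages=on makes the instance refuse to start when the pages cannot be allocated, while the default ‘try’ silently falls back to normal pages. The active setting can be confirmed like this (a sketch):

psql -c "show huge_pages;" postgres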

        … and that’s it:

        postgres@debianpg:/home/postgres/ [PG1] cat /proc/meminfo | grep -i huge
        AnonHugePages:         0 kB
        ShmemHugePages:        0 kB
        HugePages_Total:     200
        HugePages_Free:      193
        HugePages_Rsvd:       64
        HugePages_Surp:        0
        Hugepagesize:       2048 kB
        

We can do the same test as in the last post to check that the number of used huge pages increases when you put some load on the system:

        postgres=# create table t1 as select * from generate_series(1,1000000);
        SELECT 1000000
        postgres=# select count(*) from t1;
          count  
        ---------
         1000000
        (1 row)
        
        postgres=# \! cat /proc/meminfo | grep -i huge
        AnonHugePages:         0 kB
        ShmemHugePages:        0 kB
        HugePages_Total:     200
        HugePages_Free:      184
        HugePages_Rsvd:       55
        HugePages_Surp:        0
        Hugepagesize:       2048 kB
        

        Btw: This is on Debian 9 (not sure if it is the same for lower versions):

        postgres@debianpg:/home/postgres/ [PG1] cat /etc/os-release 
        PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
        NAME="Debian GNU/Linux"
        VERSION_ID="9"
        VERSION="9 (stretch)"
        ID=debian
        HOME_URL="https://www.debian.org/"
        SUPPORT_URL="https://www.debian.org/support"
        BUG_REPORT_URL="https://bugs.debian.org/"
        

The article Configuring huge pages for your PostgreSQL instance, Debian version appeared first on Blog dbi services.
