Feed aggregator

Documentum story – How to avoid “DFC_FILE_LOCK_ACQUIRE_WARN” messages in Java Method Server (jms) LOG

Yann Neuhaus - Tue, 2016-12-20 02:00

Ref: EMC article number 000335987
The last publication date is Sat Feb 20 21:39:14 GMT 2016. Here is the link: https://support.emc.com/kb/335987

After upgrading from 6.7.x to 7.2, the following warning message is logged in JMS log files: com.documentum.fc.common.DfNewInterprocessLockImpl – [DFC_FILE_LOCK_ACQUIRE_WARN] Failed to acquire lock proceeding ahead with no lock java.nio.channels.OverlappingFileLockException at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255)

In order to avoid this warning, EMC has provided a solution (SR #69856498) that will be described below:

By default, the ACS and ServerApps dfc.properties point to $DOCUMENTUM_SHARED/config/dfc.properties.

The fix is to add a separate ‘dfc.data.dir’ cache folder location in the ACS and ServerApps dfc.properties.
After a Java Method Server restart, two separate cache folders are created inside $DOCUMENTUM_SHARED/jboss7.1.1/server and the WARNING messages are gone from acs.log.

In fact, this is just a warning that someone else has acquired a lock on the physical file (in this case dfc.keystore). Since ServerApps (Method Server) and ACS invoke DFC simultaneously, both try to acquire a lock on the dfc.keystore file and Java throws an OverlappingFileLockException. DFC then warns that it could not lock the file and proceeds without a lock. Ideally this should be just an info message in this case, where the file lock is acquired for read-only access. But the same logic is used by other functionality such as registry updates and BOF cache updates, where this failure should be treated as a genuine warning or error. Going forward, engineering will have to correct this code by taking appropriate actions for each functionality. There is no functional impact in using different data directory folders.

Please proceed as below to solve it:

  • Login to the Content Server
  • Change the current user to dmadmin (the administrator account)
  • Create some folders using:
 mkdir $DOCUMENTUM_SHARED/acs
 mkdir $DOCUMENTUM_SHARED/ServerApps
 mkdir $DOCUMENTUM_SHARED/bpm

 

  • Update all necessary dfc.properties files (with vi editor):

===============================================================================================================================

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/dfc.properties

⇒ Add at the end of this file the following line:

dfc.data.dir=$DOCUMENTUM_SHARED/acs

===============================================================================================================================

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/APP-INF/classes/dfc.properties

⇒ Add at the end of this file the following line:

dfc.data.dir=$DOCUMENTUM_SHARED/ServerApps

===============================================================================================================================

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/bpm.ear/APP-INF/classes/dfc.properties

⇒ Add at the end of this file the following line:

dfc.data.dir=$DOCUMENTUM_SHARED/bpm

===============================================================================================================================
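If you prefer to script these edits rather than using vi, a small sketch like the one below appends the same lines (it assumes $DOCUMENTUM_SHARED is set in the environment and that the dfc.properties files are exploded on disk at the paths above; note that the shell expands the variable, so the literal path is written into each file):

# Sketch only: append a dedicated dfc.data.dir to each dfc.properties
JMS_DEPLOY=$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments
echo "dfc.data.dir=$DOCUMENTUM_SHARED/acs"        >> $JMS_DEPLOY/acs.ear/lib/configs.jar/dfc.properties
echo "dfc.data.dir=$DOCUMENTUM_SHARED/ServerApps" >> $JMS_DEPLOY/ServerApps.ear/APP-INF/classes/dfc.properties
echo "dfc.data.dir=$DOCUMENTUM_SHARED/bpm"        >> $JMS_DEPLOY/bpm.ear/APP-INF/classes/dfc.properties
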

  • Verify that the recently created folders are empty using:
cd $DOCUMENTUM_SHARED
ls -l acs/ ServerApps/ bpm/

 

  • Restart the JMS using:
sh -c "cd $DOCUMENTUM_SHARED/jboss7.1.1/server;./stopMethodServer.sh"
sh -c "$DOCUMENTUM_SHARED/jboss7.1.1/server/startMethodServer.sh"

 

Verification
  • Verify that the recently created folders are now populated with default files and folders using:
cd $DOCUMENTUM_SHARED
ls -l acs/ ServerApps/ bpm/

The folders should no longer be empty.

  • Disconnect from the Content Server.

 

Using this procedure, you won’t see this WARNING message anymore.
Regards,

Source : EMC article number : 000335987

 

The post Documentum story – How to avoid “DFC_FILE_LOCK_ACQUIRE_WARN” messages in Java Method Server (jms) LOG appeared first on Blog dbi services.

datapump export using DBMS_DATAPUMP package

Tom Kyte - Mon, 2016-12-19 23:06
I have to export many tables on different schemas in a single dump file, using DBMS_DATAPUMP. If I run this command the export goes fine: expdp fr/fr dumpfile=prova.dmp logfile=prova.log directory=dfr tables=MOD_BASE.PROBE_PROFILE,MOD_DNS.SCENA...
Categories: DBA Blogs

node-oracledb 1.12.1-dev can fetch CLOBs as JavaScript String

Christopher Jones - Mon, 2016-12-19 17:12

A preview of node-oracledb 1.12.1-dev is available on GitHub and can be installed with:

  npm install oracle/node-oracledb.git#v1.12.1-dev

Node-oracledb is the Node.js add-on for Oracle Database.

The 1.12.1-dev release introduces fetchAsString support for CLOBs. Now, when CLOB columns are queried, they can be returned directly as JavaScript Strings, without the need to use Streams. To test this in the dev release make sure node-oracledb is linked with Oracle 12c client libraries.

See the extensive manual for details and examples.

Also in this release is improved support for 'temporary LOBs'. Now they can be bound as IN OUT binds (as well as IN and OUT!).

See my previous post for a brief intro to earlier changes in this 1.12 series. The CHANGELOG has all the updates. I'll blog the features in more detail when a production bundle is released to npmjs.com.

Resources

Issues and questions about it can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line!

Node-oracledb documentation is here.

Oracle 12cR2: AWR views in multitenant

Yann Neuhaus - Mon, 2016-12-19 13:42

In a previous post I explained how the AWR views have changed between 12.1.0.1 and 12.1.0.2 and now in 12.2.0.1 they have changed again. This is a good illustration of multitenant object link usage.

What’s new in 12cR2 is the ability to run AWR snapshots at CDB or PDB level. I really think that it makes more sense to read an AWR report at CDB level because it’s about analysing the system (=instance) activity. But with PDBaaS I can understand the need to give a report to analyse PDB sessions, resources and statements.

I’ll start with the conclusion – a map of the AWR views showing which ones read from CDB-level snapshots, PDB snapshots, or both:

(Diagram: map of AWR views showing which ones read from CDB-level snapshots, PDB snapshots, or both)
I’ll explain AWR reports in a future post. Basically when you run awrrpt.sql from CDB$ROOT you get CDB snapshots and when you run it from PDB you have the choice.
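For reference, the script is simply run from the container you are interested in; a minimal sketch (connection details are placeholders):

sqlplus / as sysdba @?/rdbms/admin/awrrpt.sql                      # from CDB$ROOT: CDB-level snapshots
sqlplus sys@//localhost/PDB1 as sysdba @?/rdbms/admin/awrrpt.sql   # from the PDB: you get the choice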

In the diagram above, just follow the arrows to know whether a view reads from the PDB, the CDB, or both. You see two switches between the root and the PDB: a data link for one way and a common view for the other way. Note that all are metadata links, so the switches occur also at parse time.

WRM$_

Let’s start from the table where AWR snapshots are stored:


SQL> select owner,object_name,object_type,sharing from dba_objects where object_name='WRM$_SNAPSHOT';
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
----- ------------------------------ ----------------------- ------------------
SYS WRM$_SNAPSHOT TABLE METADATA LINK

This is a table. METADATA LINK means that the structure is the same in all containers, but data is different.

I have the following containers:

SQL> select con_id,dbid,name from v$containers;
 
CON_ID DBID NAME
---------- ---------- ------------------------------
1 904475458 CDB$ROOT
2 2066620152 PDB$SEED
3 2623271973 PDB1

From CDB$ROOT I see data for the CDB:

SQL> select dbid,count(*) from WRM$_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91

and from PDB I see snapshots taken from PDB:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from WRM$_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79

So remember, CDB$ROOT has 91 snapshots with DBID= 904475458 and PDB1 has 79 snapshots with DBID=2623271973

AWR_ROOT_ and AWR_PDB_

Views on WRM$_SNAPSHOT are referenced in DBA_DEPENDENCIES:


SQL> select owner,name,type from dba_dependencies where referenced_name='WRM$_SNAPSHOT' and type like 'VIEW';
 
OWNER NAME TYPE
----- ------------------------------ -------------------
SYS AWR_ROOT_SNAPSHOT VIEW
SYS AWR_ROOT_SYSSTAT VIEW
SYS AWR_ROOT_ACTIVE_SESS_HISTORY VIEW
SYS AWR_ROOT_ASH_SNAPSHOT VIEW
SYS AWR_PDB_SNAPSHOT VIEW
SYS AWR_PDB_ACTIVE_SESS_HISTORY VIEW
SYS AWR_PDB_ASH_SNAPSHOT VIEW

I’m interested in views that show snapshot information: AWR_ROOT_SNAPSHOT and AWR_PDB_SNAPSHOT


SQL> select owner,object_name,object_type,sharing from dba_objects where object_name in ('AWR_ROOT_SNAPSHOT','AWR_PDB_SNAPSHOT') order by 3;
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
------ ------------------------------ ----------------------- ------------------
PUBLIC AWR_ROOT_SNAPSHOT SYNONYM METADATA LINK
PUBLIC AWR_PDB_SNAPSHOT SYNONYM METADATA LINK
SYS AWR_ROOT_SNAPSHOT VIEW DATA LINK
SYS AWR_PDB_SNAPSHOT VIEW METADATA LINK

Besides the synonyms, we have a metadata link view AWR_PDB_SNAPSHOT and a data link view AWR_ROOT_SNAPSHOT. The data link one means that it switches to CDB$ROOT when queried from a PDB. Here is the definition:


SQL> select owner,view_name,container_data,text from dba_views where view_name in ('AWR_ROOT_SNAPSHOT','AWR_PDB_SNAPSHOT');
 
OWNER VIEW_NAME C TEXT
------ ------------------------------ - --------------------------------------------------------------------------------
SYS AWR_ROOT_SNAPSHOT Y select snap_id, dbid, instance_number, startup_time,
begin_interval_time, end_interval_time,
flush_elapsed, snap_level, error_count, snap_flag, snap_timezone,
decode(con_dbid_to_id(dbid), 1, 0, con_dbid_to_id(dbid)) con_id
from WRM$_SNAPSHOT
where status = 0
 
SYS AWR_PDB_SNAPSHOT N select snap_id, dbid, instance_number, startup_time,
begin_interval_time, end_interval_time,
flush_elapsed, snap_level, error_count, snap_flag, snap_timezone,
decode(con_dbid_to_id(dbid), 1, 0, con_dbid_to_id(dbid)) con_id
from WRM$_SNAPSHOT
where status = 0

Same definition. The difference is that AWR_PDB_SNAPSHOT reads from the current container, but AWR_ROOT_SNAPSHOT, being a DATA LINK, always reads from CDB$ROOT.

This is what we can see:

SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select dbid,count(*) from AWR_ROOT_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91
 
SQL> select dbid,count(*) from AWR_PDB_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91
 
SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from AWR_ROOT_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91

This query when run in PDB1 displays the 91 snapshots from the CDB.

SQL> select dbid,count(*) from AWR_PDB_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79

This one shows what is in the current container.

Those are the views used by the AWR report, depending on the AWR location choice. But what about the DBA_HIST_ views that we know and use from previous versions?

DBA_HIST_ and CDB_HIST_

I continue to follow the dependencies:

SQL> select owner,name,type from dba_dependencies where referenced_name in ('AWR_ROOT_SNAPSHOT','AWR_PDB_SNAPSHOT') and name like '%SNAPSHOT' order by 3;
 
OWNER NAME TYPE
------ ------------------------------ -------------------
PUBLIC AWR_ROOT_SNAPSHOT SYNONYM
PUBLIC AWR_PDB_SNAPSHOT SYNONYM
SYS DBA_HIST_SNAPSHOT VIEW
SYS CDB_HIST_SNAPSHOT VIEW
 
SQL> select owner,object_name,object_type,sharing from dba_objects where object_name in ('CDB_HIST_SNAPSHOT','DBA_HIST_SNAPSHOT');
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
------ ------------------------------ ----------------------- ------------------
SYS DBA_HIST_SNAPSHOT VIEW METADATA LINK
SYS CDB_HIST_SNAPSHOT VIEW METADATA LINK
PUBLIC DBA_HIST_SNAPSHOT SYNONYM METADATA LINK
PUBLIC CDB_HIST_SNAPSHOT SYNONYM METADATA LINK

Here are the views I’m looking for. They are metadata link only. Not data link. This means that they do not switch to CDB$ROOT.

But there’s more in the view definition:

SQL> select owner,view_name,container_data,text from dba_views where view_name in ('CDB_HIST_SNAPSHOT','DBA_HIST_SNAPSHOT');
 
OWNER VIEW_NAME C TEXT
------ ------------------------------ - --------------------------------------------------------------------------------
SYS DBA_HIST_SNAPSHOT N select "SNAP_ID","DBID","INSTANCE_NUMBER","STARTUP_TIME","BEGIN_INTERVAL_TIME","
END_INTERVAL_TIME","FLUSH_ELAPSED","SNAP_LEVEL","ERROR_COUNT","SNAP_FLAG","SNAP_
TIMEZONE","CON_ID" from AWR_ROOT_SNAPSHOT
 
SYS CDB_HIST_SNAPSHOT Y SELECT k."SNAP_ID",k."DBID",k."INSTANCE_NUMBER",k."STARTUP_TIME",k."BEGIN_INTERV
AL_TIME",k."END_INTERVAL_TIME",k."FLUSH_ELAPSED",k."SNAP_LEVEL",k."ERROR_COUNT",
k."SNAP_FLAG",k."SNAP_TIMEZONE",k."CON_ID", k.CON$NAME, k.CDB$NAME FROM CONTAINE
RS("SYS"."AWR_PDB_SNAPSHOT") k

The DBA_HIST_SNAPSHOT is a simple view on AWR_ROOT_SNAPSHOT which, as we have seen above, always shows snapshots from the CDB:


SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select dbid,count(*) from DBA_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91
SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from DBA_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91

Then CDB_HIST_SNAPSHOT reads AWR_PDB_SNAPSHOT, which shows the current container’s snapshots. But this view is a COMMON DATA one, using the CONTAINERS() function. This means that, when it is queried from CDB$ROOT by a common user, data from all open containers is retrieved:


SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select dbid,count(*) from CDB_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79
904475458 91

However, from a PDB you cannot see anything else:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from CDB_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79

So what?

Multitenant adds a new dimension to the dictionary views and we must be aware of that. However, compatibility is still there. The scripts that we used to run to query the DBA_HIST views should still work. Don’t forget to always join on DBID and INSTANCE_NUMBER in addition to SNAP_ID, so that your scripts keep working in RAC and across containers.
In 12.2 you can do the same for your application: use metadata links, data links, and common views for your tables. But remember to keep it simple…
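To illustrate the join advice above, here is a minimal sketch (the statistic name and connection are just illustrative) that keeps working whether the snapshots come from one instance or several, one container or many:

sqlplus -s / as sysdba <<'SQL'
-- always join DBA_HIST views on DBID, INSTANCE_NUMBER and SNAP_ID
select s.begin_interval_time, t.stat_name, t.value
from   dba_hist_snapshot s
join   dba_hist_sysstat  t
  on   t.dbid            = s.dbid
 and   t.instance_number = s.instance_number
 and   t.snap_id         = s.snap_id
where  t.stat_name = 'user commits'
order by s.begin_interval_time;
SQL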

 

The post Oracle 12cR2: AWR views in multitenant appeared first on Blog dbi services.

Oracle ABCS - Traversing Relationships, Conditional Navigation, Query and Update with JavaScript

Shay Shmeltzer - Mon, 2016-12-19 11:02

Oracle Application Builder Cloud Service (ABCS) lets you do a lot of things in a declarative way, however for more complex validation and conditional logic you might need to resort to some basic coding in JavaScript.

I ran into such a case with an application I was developing, and figured out that the code sample from that system will be a good way to illustrate some coding techniques in ABCS.

The application I was working on allows people to register to various events. Each event has a certain capacity, so if there are too many people registered to an event, we want the rest to be added to a wait list. For each record of a person registering, we keep a reference to the event they want to attend.  So the logic flow is:

  1. Check how many open spaces are available for the event we are trying to register for.
  2. If there is space in the event, save the new person data, and show a success message.
  3. If there isn't space, update the person "Waitlisted" field to be true, save the data, and show a message saying that the person is on the wait list. 

 

In the demo video below I'm showing how to:

  • Define declarative conditional flow of steps based on results from custom code
  • Traverse relationships between custom object through code
  • Execute a conditional query and run through the results from a custom object with code
  • Set the value for a property of a custom object through code

For reference here is the complete code from the sample:

require([
    'operation/js/api/Conditions',
    'operation/js/api/Operator'
], function (Conditions, Operator) {
    var lab = Abcs.Entities().findById('Lab');
    var id = lab.getProperty('id');
    //condition is to find a lab with the id that is in the page's drop down box
    var condition = Conditions.SIMPLE(id, Operator.EQUALS, $Person.getValue('ref2Combo_Box'));
    var operation = Abcs.Operations().read({
        entity: lab,
        condition: condition
    });
    //if query returned value we loop over the rows and check the capacity and registration columns
    operation.perform().then(function (operationResult) {
        if (operationResult.isSuccess()) {
            operationResult.getData().forEach(function (oneRecord) {
                if ((oneRecord.Capacity - oneRecord.Registered) < 1) {
                    $Person.setValue('Waitlisted', true);
                    reject("error");
                } else
                {
                    resolve("ok");
                }
            });
        }
    }).catch(function (operationResult) {
        if (operationResult.isFailure()) {
            // Insert code you want to perform if fetching of records failed
            alert('did not work');
            reject("error");
        }
    });
});

More information on the JavaScript APIs used here are in the Oracle ABCS Documentation.

Categories: Development

Building a Docker image with Oracle 11gXE and latest Apex distribution

Marcelo Ochoa - Mon, 2016-12-19 09:13
This year during the Oracle Development Tour in Argentina I saw that the labs had a problem: the default installation of Oracle 11g XE doesn't include an updated version of Oracle Apex.
So I decided to build a new Oracle 11g XE Docker image including the latest Apex version available for download at OTN, with two main purposes:

  • A clean installation for demos and courses
  • A simple distribution of the latest Oracle Apex, based on Docker

Everybody knows that Docker simplifies the development/testing/production pipeline by providing a clean, simple image distribution to run basically any Linux/Windows program on any platform, so the steps below can be replicated on any Linux flavour (Ubuntu/RedHat/Debian), Windows environment or Mac, with the only requirement being Docker 1.10+ installed and running.
So here the simple steps:
- Check if docker is running:
[mochoa@localhost github-public]$ docker version
Client:
 Version:      1.12.4
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   1564f02
 Built:        Mon Dec 12 23:50:16 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.4
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   1564f02
 Built:        Mon Dec 12 23:50:16 2016
 OS/Arch:      linux/amd64
- Build the Oracle official 11gXE image following the instruction here:
[mochoa@localhost dockerfiles]$ ./buildDockerImage.sh -v -x
.. lot of outputs here ..
[mochoa@localhost dockerfiles]$ docker images | grep oracle
oracle/database        11.2.0.2-xe    ba74688a297e    39 hours ago    1.206 GB
oracle/database        12.1.0.2-ee    af209128066e    6 days ago      11.72 GB
- Download the xe-apex-test scripts from GitHub.com. In the directory where you downloaded the Docker scripts, put the latest Oracle Apex distribution, currently 5.0.4; if you find a newer version, edit your Dockerfile accordingly by modifying the environment variable APEX_FILE.
- Build your new Docker image by running the buildDockerImage.sh script:
[mochoa@localhost xe-apex-latest]$ ./buildDockerImage.sh
.. lot of outputs here ..
[mochoa@localhost xe-apex-latest]$ docker images | grep apex
oracle/apex-5.0.4_en    11.2.0.2-xe    89ef9b5e1ced    18 hours ago    1.531 GB
Now you have two possibilities to run above image:
  • using Docker internal volume manager
  • using an external mount point for the Oracle Datafiles.
For the first use case, your Docker database will start, create a new clean installation and upgrade the default 11gXE Apex installation (4.0.0) to the new version 5.0.4. This database will have a random sys/system/admin password generated at boot time (see the logs), and Apex will be available at the http://server:8080/apex/apex_admin URL. If you stop/start the Docker container, the RDBMS changes persist using internal Docker volumes; if you call the docker rm {container_name} command, all RDBMS datafiles and Apex changes will be discarded.
Here is a command example using Docker ephemeral storage:
[mochoa@localhost xe-apex-latest]$ docker run --shm-size=1g --name apex --hostname apex \
> -p 1521:1521 -p 8080:8080 \
> oracle/apex-5.0.4_en:11.2.0.2-xe
ORACLE AUTO GENERATED PASSWORD FOR SYS AND SYSTEM: 0c27f3d91af388c1
Oracle Database 11g Express Edition Configuration
.. lots of output here ..
Configuring database....
Loading images directory: /install/apex/images

Directory created.
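To retrieve that generated password later, you can simply grep the container logs (assuming the container is named apex as in the example above):

docker logs apex 2>&1 | grep "AUTO GENERATED PASSWORD"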
A second use case is shown in the run-apex.sh script. Before running this script, create an empty directory on your local machine and change its ownership to 1000:1000, for example:
[mochoa@localhost xe-apex-latest]$ sudo mkdir -p /home/data/db/apex
[mochoa@localhost xe-apex-latest]$ sudo chown -R 1000:1000 /home/data/db/apex
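The run-apex.sh script itself is not reproduced here; a rough equivalent using a bind mount could look like the sketch below (the in-container datafile path is an assumption and must match whatever the image and run-apex.sh actually use):

# Sketch only: run the image with an external mount point for the Oracle datafiles
# /u01/app/oracle/oradata is an assumed in-container path - check run-apex.sh for the real one
docker run --shm-size=1g --name apex --hostname apex \
  -p 1521:1521 -p 8080:8080 \
  -v /home/data/db/apex:/u01/app/oracle/oradata \
  oracle/apex-5.0.4_en:11.2.0.2-xe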
Finally, call the run-apex.sh script. Your Oracle XE/Apex 5.0.4 will be available at the http://localhost:8080/apex/apex_admin URL; remember that by default the Apex user is admin, with the password defined at boot time. See this screenshot of how your new Apex installation looks:

The first login will ask you to change the default admin password, and if everything is fine the Apex Welcome page will be displayed:

Happy coding with Apex hosted by Docker :) The next post will be about how to deploy this using Oracle Container Cloud Service.



Solar Adventures & Saving Money - Win win win win!

Chris Muir - Mon, 2016-12-19 07:24
Recently I tweeted the following pic which raised some interest, showing the kWh units of electricity used in my house compared to others in the local suburb Kensington in Perth Australia:


Our yearly electricity bill is ~AU$430, down from ~AU$754 in 2013.

I was asked what have we done to drop this low? Sold a child? Installed a massive solar system and battery?

To set context, I live in a 1950s single-storey double brick tiled house in Perth, Australia, what the British call a free-standing bungalow I believe. My wife, my 2 kids under ten and I live in 3 bedrooms, 2 bathrooms, 1.5 living rooms and a separate office, approx 16m x 8m.  The house is equipped with an evaporative air conditioner, natural gas for cooking, heating & hot water, with the exception of the oven which is electric.  We have 2 TVs, 2 monitors, 3 laptops, lots of iPhones/iPads, a coffee machine, dishwasher, clothes washer, fridge and microwave.  Heating is a portable natural gas burner; we don't have a clothes dryer as the Perth sun provides our drying needs.

So what have we done to reduce use compared to our neighbours?

I've taken a baby steps approach to slowly improving various parts of the house.
  • We installed a 2.0kW solar system about 3 years ago; it has a 2.2kW inverter and 2.0kW of panels, and at best peaks at 1.8kW during summer, about 1.3kW during winter.  The 8 x 250W panels face north-east.
  • In line with the solar, for things like clothes washing and the dish washer, we put them on a timer to run around lunchtime each day while we're at work & lunchtime sun means solar covers all the use.
  • I do clean the solar panels, about every 3 months, with a big squidgy mop.  Perth summer air has lots of dust, and bees seem to like dropping pollen in big wads on the panels too, which is easy to clean off for an instant power boost.
  • All of our lights were moved to fluorescents about 8 years ago, and now about half of them are LEDs. Personally I'm finding LED lights way better than fluorescents, which tended to blow frequently.  LEDs have come down in price, and I keep an eye out for when they're on special too.
  • We turn off phantom/standby power use at the wall ... the TV equipment + laptops were big culprits and an easy fix.  To make this easier I've provided powerpacks with individual power switches for each point so they are easier to reach, rather than the powerpoints hiding behind cabinetry.
  • We moved to a central iphone/ipad/usb/etc charger in the main living room rather than numerous powerpacks spread across the house which silently draw electricity and mostly weren't used for most of the day.  With the one charger I can easily add and remove devices once they're charged without the family caring as long as they're charged.  In turn my personal iphone & ipad are plugged in during the day for recharging, which takes me no effort at all.
  • During winter we try to use a slow cooker during the day which takes further advantage of the available sun.
  • During summer to control the house temperature I try and make use of relatively cold outside air in the morning to fill the house up, opening windows and doors to get cross breezes, and close it all down when the house equalizes to the outside temperature or a hot easterly starts blowing.  With the evap aircon turning the fan and not the water pump helps to get cold air in the house quickly in the morning if need be.  As Perth gets hot in summer (we hit 45C/113F last year), we do need to switch the evap aircon on properly during the day and late afternoon, but the goal is to cool the house with free cool air when I can.
Overall the solar has definitely made a difference but it is on the smallish side.  The incremental changes like moving the washing time, adding power efficient lighting has helped, and just identifying the standby power use I think is where the magic is.

To help identify phantom power use we have the following live monitor attached to our house meter ... I find this particularly useful for just watching what's happening in the house to see if something has been left on accidentally, or something is misbehaving.  It only shows net use after solar is taken out, but that's fine as that when we're paying the grid and this is what I want to avoid.


The main uses of electricity left when not offset by solar are things like the electric oven in the evening (my NE panels aren't good for offsetting this), the fridge which runs over the night, and the various electronics to a certain or lesser degree depending on what the kids are doing.

Overall I think the main trick in achieving what I've done has been not to attempt to go gung-ho as not only can you easily suffer burn out but the family is likely to rebel.  Instead I suggest incrementally improving the house, leaving an active note in your calendar every couple of months to remind yourself to revisit what you achieved and what you can do next.  Besides the occasional disagreement about leaving the lights or TV on, mostly my family hasn't much noticed any difference (or they're very patient with me ;-).  They still get to do what they want to do as far as I can tell.

In terms of motivation, avoiding the ever increasing electricity charges is definitely part of it.  The local government raised electricity prices by ~3% this year again, and is signalling 7% increases for the next two.  Our house bill has dropped from AU$754.80 in 2013 before solar, to AU$395 in 2014, $445 in 2015 and $430 for 2016.

My other main motivation is this graph:



...we do get a paltry $0.07 kWh feed in tariff so the size of the blue bar is not that exciting. But what is exciting is I just like knowing that we're a net exporter of energy.  (of interest you can really see the winter drop mid year when solar isn't as effective here)

Ideally I'd really like to see the yellow bar drop some more; I think I can shave off about another 1/4.  As the Australian federal government is now reducing the upfront solar rebate subsidy this year and for the next 15 years, I'm actively looking to max out our solar at 5kW.  A battery may be in the future, but currently they are still expensive here, and I suspect my solar system isn't designed well for a battery anyway.

Overall though, I'm particularly happy with the outcome to date.  It hasn't been a strain on lifestyle, I feel like I'm sticking it to our government who can't get their renewable energy act together, I'm also helping tell the fossil fuel companies where to go, and finally I'm saving money too.

Win win win win.

A Guide to the Oracle UPDATE Statement

Complete IT Professional - Mon, 2016-12-19 05:00
Have you ever needed to update data that was already in a table in Oracle? Learn how to do this with the Oracle UPDATE Statement. What Is the Oracle UPDATE Statement? The Oracle SQL UPDATE statement allows you to change data that is already in a table in SQL. The INSERT statement lets you add […]
Categories: Development

How to connect to the database using a unix terminal

Tom Kyte - Mon, 2016-12-19 04:46
Hi, I wanted to connect to the Oracle database using a unix terminal. The idea is that I have installed Ubuntu as a virtual machine and I want to run procedures using Ubuntu, with the results shown in the terminal. Is it possible to...
Categories: DBA Blogs

Problem in EXPDP AND IMPDP with virtual column

Tom Kyte - Mon, 2016-12-19 04:46
Hey all, I have created a table with virtual columns like the following: CREATE TABLE employees ( id NUMBER, first_name VARCHAR2(10), last_name VARCHAR2(10), salary NUMBER(9,2), comm1 NUMBER(3), comm2 NUMB...
Categories: DBA Blogs

PLSQL: Best/Alternate way to implement FAST Refresh for better performance

Tom Kyte - Mon, 2016-12-19 04:46
In my present db implementation, my db does not have any data/tables. All the data it gets is from other sources using dblinks, which then populate Materialized Views. These MVs are what my db actually uses to serve customer requests. To implement these ...
Categories: DBA Blogs

format disrupted upon using it

Tom Kyte - Mon, 2016-12-19 04:46
Hi Tom (or Chris or Connor), While I was trying something out I came upon a strange feature when using to_char. I have 2 columns in a with clause, a value and a format. Then I use these 2 in a to_char function. The strange thing is that the for...
Categories: DBA Blogs

ETL Offload with Spark and Amazon EMR - Part 3 - Running pySpark on EMR

Rittman Mead Consulting - Mon, 2016-12-19 03:00

In the previous articles (here, and here) I gave the background to a project we did for a client, exploring the benefits of Spark-based ETL processing running on Amazon's Elastic Map Reduce (EMR) Hadoop platform. The proof of concept we ran was on a very simple requirement: taking inbound files from a third party, joining them to some reference data, and then making the result available for analysis.

I showed here how I built up the prototype PySpark code on my local machine, using Docker to quickly and easily make available the full development environment needed.

Now it's time to get it running on a proper Hadoop platform. Since the client we were working with already have a big presence on Amazon Web Services (AWS), using Amazon's Hadoop platform made sense. Amazon's Elastic Map Reduce, commonly known as EMR, is a fully configured Hadoop cluster. You can specify the size of the cluster and vary it as you want (hence, "Elastic"). One of the very powerful features of it is that being a cloud service, you can provision it on demand, run your workload, and then shut it down. Instead of having a rack of physical servers running your Hadoop platform, you can instead spin up EMR whenever you want to do some processing - to a size appropriate to the processing required - and only pay for the processing time that you need.

Moving my locally-developed PySpark code to run on EMR should be easy, since they're both running Spark. Should be easy, right? Well, this is where it gets - as we say in the trade - "interesting". Part of my challenges was down to the learning curve of being new to this set of technology. However, others I would point to more as examples of where the brave new world of Big Data tooling becomes less an exercise in exciting endless possibilities and more one of stubbornly Googling errors due to JAR clashes and software version mismatches...

Provisioning EMR

Whilst it's possible to make the entire execution of the PySpark job automated (including the provisioning of the EMR cluster itself), to start with I wanted to run it manually to check each step along the way.

To create an EMR cluster simply login to the EMR console and click Create

I used Amazon's EMR distribution, configured for Spark. You can also deploy a MapR-based hadoop platform, and use the Advanced tab to pick and mix the applications to deploy (such as Spark, Presto, etc).

The number and size of the nodes is configured here (I used the default, 3 machines of m3.xlarge spec), as is the SSH key. The latter is very important to get right, otherwise you won't be able to connect to your cluster over SSH.

Once you click Create cluster Amazon automagically provisions the underlying EC2 servers, and deploys and configures the software and Hadoop clustering across them. Anyone who's set up a Hadoop cluster will know that literally a one-click deploy of a cluster is a big deal!

If you're going to be connecting to the EMR cluster from your local machine you'll want to modify the security group assigned to it once provisioned and enable access to the necessary ports (e.g. for SSH) from your local IP.
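This can also be done from the AWS CLI; for example, something along these lines opens SSH from a single address (the security group id and CIDR are placeholders):

aws ec2 authorize-security-group-ingress \
  --group-id sg-xxxxxxxx \
  --protocol tcp --port 22 \
  --cidr 203.0.113.10/32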

Deploying the code

I developed the ETL code in Jupyter Notebooks, from where it's possible to export it to a variety of formats - including .py Python script. All the comment blocks from the Notebook are carried across as inline code comments.

To transfer the Python code to the EMR cluster master node I initially used scp, simply out of habit. But, a much more appropriate solution soon presented itself - S3! Not only is this a handy way of moving data around, but it comes into its own when we look at automating the EMR execution later on.

To upload a file to S3 you can use the S3 web interface, or a tool such as Cyberduck. Better, if you like the command line as I do, is the AWS CLI tools. Once installed, you can run this from your local machine:

aws s3 cp Acme.py s3://foobar-bucket/code/Acme.py

You'll see that the syntax is pretty much the same as the Linux cp command, specifying source and then destination. You can do a vast amount of AWS work from this command line tool - including provisioning EMR clusters, as we'll see shortly.

So with the code up on S3, I then SSH'd to the EMR master node (as the hadoop user, not ec2-user), and transferred it locally. One of the nice things about EMR is that it comes with your AWS security automagically configured. Whereas on my local machine I need to configure my AWS credentials in order to use any of the aws commands, on EMR the credentials are there already.

aws s3 cp s3://foobar-bucket/code/Acme.py ~

This copied the Python code down into the home folder of the hadoop user.

Running the code - manually

To invoke the code, simply run:

spark-submit Acme.py

A very useful thing to use, if you aren't already, is GNU screen (or tmux, if that's your thing). GNU screen is installed by default on EMR (as it is on many modern Linux distros nowadays). Screen does lots of cool things, but of particular relevance here is that it lets you close your SSH connection whilst keeping your session on the server open and running. You can then reconnect to it at a later time and pick up where you left off. Whilst you're disconnected, your session is still running and the work is still being processed.
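A typical minimal workflow with screen looks something like this:

screen -S emr-etl            # start a named session on the master node
spark-submit Acme.py         # kick off the job inside the session
# detach with Ctrl-a d, disconnect, come back later...
screen -r emr-etl            # reattach and pick up where you left off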

From the Spark console you can monitor the execution of the job running, as well as digging into the details of how it undertakes the work. See the EMR cluster home page on AWS for the Spark console URL

Problems encountered

I've worked in IT for 15 years now (gasp). Never has the phrase "The devil's in the detail" been more applicable than in the fast-moving world of big data tools. It's not surprising really, given the staggering rate at which code is released, that sometimes it's a bit quirky, or lacking what may be thought of as basic functionality (often in areas such as security). Each of these individual points could, I suppose, be explained away with a bit of RTFM - but the net effect is that what on paper sounds simple took the best part of half a day and a LOT of Googling to resolve.

Bear in mind, this is code that ran just fine previously on my local development environment.

When using SigV4, you must specify a 'host' parameter
boto.s3.connection.HostRequiredError: BotoClientError: When using SigV4, you must specify a 'host' parameter.

To fix, switch

conn_s3 = boto.connect_s3()  

for

conn_s3 = boto.connect_s3(host='s3.amazonaws.com')  

You can see a list of endpoints here.

boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request

Make sure you're specifying the correct hostname (see above) for the bucket's region. Determine the bucket's region from the S3 control panel, and then use the endpoint listed here.

Error: Partition column not found in schema

Strike this one off as bad programming on my part; in the step to write the processed file back to S3, I had partitionBy='' in the save function:

duplicates_df.coalesce(1).write.save(full_uri,
                                     format='com.databricks.spark.csv',
                                     header='false',
                                     partitionBy='',
                                     mode='overwrite')

This, along with the coalesce (which combined all the partitions down to a single one), was wrong, and was fixed by changing it to:

duplicates_df.write.save(full_uri,
                         format='com.databricks.spark.csv',
                         header='false',
                         mode='overwrite')
Exception: Python in worker has different version 2.6 than that in driver 2.7, PySpark cannot run with different minor versions

To get the code to work on my local Docker/Jupyter development environment, I set an environment variable as part of the Python code to specify the Python executable:

os.environ['PYSPARK_PYTHON'] = '/usr/bin/python2'

I removed this (along with all the PYSPARK_SUBMIT_ARGS) and the code then ran fine.

Timestamp woes

In my original pySpark code I was letting it infer the schema from the source, which included it determining (correctly) that one of the columns was a timestamp. When it wrote the resulting processed file, it wrote the timestamp in a standard format (YYYY-MM-DD HH24:MI:SS). Redshift (of which more in the next article) was quite happy to process this as a timestamp, because it was one.
Once I moved the pySpark code to EMR, the Spark engine moved from my local 1.6 version to 2.0.0 - and the behaviour of the CSV writer changed. Instead of the format before, it switched to writing the timestamp in epoch form, and not just that but microseconds since epoch. Whilst Redshift could cope with epoch seconds, or milliseconds, it doesn't support microseconds, and the load job failed

Invalid timestamp format or value [YYYY-MM-DD HH24:MI:SS]

and then

Fails: Epoch time copy out of acceptable range of [-62167219200000, 253402300799999]

Whilst I did RTFM, it turns out that I read the wrong FM, taking the latest (2.0.1) instead of the version that EMR was running (2.0.0). And whilst 2.0.1 includes support for specifying the output timestampFormat, 2.0.0 doesn't.

In the end I changed the Spark job to not infer the schema, and so treat the timestamp as a string, thus writing it out in the same format. This was a successful workaround here, but if I'd needed to do some timestamp-based processing in the Spark job I'd have had to find another option.

Success!

I now had the ETL job running on Spark on EMR, processing multiple files in turn. Timings were approximately five minutes to process five files, half a million rows in total.

One important point to bear in mind through all of this is that I've gone with default settings throughout, and not made any effort to optimise the PySpark code. At this stage, it's simply proving the end-to-end process.

Automating the ETL

Having seen that the Spark job would run successfully manually, I now went to automate it. It's actually very simple to do. When you launch an EMR cluster, or indeed even if it's running, you can add a Step, such as a Spark job. You can also configure EMR to terminate itself once the step is complete.

From the EMR cluster create screen, switch to Advanced. Here you can specify exactly which applications you want deployed - and what steps to run. Remember how we copied the Acme.py code to S3 earlier? Now's when it comes in handy! We simply point EMR at the S3 path and it will run that code for us - no need to do anything else. Once the code's finished executing, the EMR cluster will terminate itself.

After testing out this approach successfully, I took it one step further - command line invocation. AWS makes this ridiculously easy, because from the home page of any EMR cluster (running or not) there is a button to click which gives you the full command to run to spin up another cluster with the exact same configuration.

This gives us a command like this:

    aws emr create-cluster \
    --termination-protected \
    --applications Name=Hadoop Name=Spark Name=ZooKeeper \
    --tags 'owner=Robin Moffatt' \
    --ec2-attributes '{"KeyName":"Test-Environment","InstanceProfile":"EMR_EC2_DefaultRole","AvailabilityZone":"us-east-1b","EmrManagedSlaveSecurityGroup":"sg-1eccd074","EmrManagedMasterSecurityGroup":"sg-d7cdd1bd"}' \
    --service-role EMR_DefaultRole \
    --enable-debugging \
    --release-label emr-5.0.0 \
    --log-uri 's3n://aws-logs-xxxxxxxxxx-us-east-1/elasticmapreduce/' \
    --steps '[{"Args":["spark-submit","--deploy-mode","cluster","s3://foobar-bucket/code/Acme.py"],"Type":"CUSTOM_JAR","ActionOnFailure":"TERMINATE_CLUSTER","Jar":"command-runner.jar","Properties":"","Name":"Acme"}]' \
    --name 'Rittman Mead Acme PoC' \
    --instance-groups '[{"InstanceCount":1,"InstanceGroupType":"MASTER","InstanceType":"m3.xlarge","Name":"Master instance group - 1"},{"InstanceCount":2,"InstanceGroupType":"CORE","InstanceType":"m3.xlarge","Name":"Core instance group - 2"}]' \
    --region us-east-1 \
    --auto-terminate

This spins up an EMR cluster, runs the Spark job and waits for it to complete, and then terminates the cluster. Logs written by the Spark job get copied to S3, so that even once the cluster has been shut down, the logs can still be accessed. Separation of compute from storage - it makes a lot of sense. What's the point of having a bunch of idle CPUs sat around just so that I can view the logs at some point if I want to?

The next logical step for this automation would be the automatic invocation of above process based on the presence of a defined number of files in the S3 bucket. Tools such as Lambda, Data Pipeline, and Simple Workflow Service are all ones that can help with this, and the broader management of ETL and data processing on AWS.
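As a very rough sketch of that idea (the bucket prefix and threshold are made up here), even a small shell wrapper could poll the bucket and only fire off the cluster once enough files have arrived:

THRESHOLD=5
FILE_COUNT=$(aws s3 ls s3://foobar-bucket/inbound/ | wc -l)
if [ "$FILE_COUNT" -ge "$THRESHOLD" ]; then
  aws emr create-cluster ...   # the full create-cluster command shown above
fi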

Spot Pricing

You can save money further with AWS by using Spot Pricing for EMR requests. Spot Pricing is used on Amazon's EC2 platform (on which EMR runs) as a way of utilising spare capacity. Instead of paying a fixed (higher) rate for some server time, you instead 'bid' at a (lower) rate and when the demand for capacity drops such that the spot price does too and your bid price is met, you get your turn on the hardware. If the spot price goes up again - your server gets killed.

Why spot pricing makes sense on EMR particularly is that Hadoop is designed to be fault-tolerant across distributed nodes. Whilst pulling the plug on an old-school database may end in tears, dropping a node from a Hadoop cluster may simply mean a delay in the processing whilst the particular piece of (distributed) work is restarted on another node.
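In create-cluster terms this just means adding a bid price to the instance groups; a hedged sketch (the bid value is a placeholder, and the other options from the full command above still apply):

    aws emr create-cluster \
    --release-label emr-5.0.0 \
    --applications Name=Hadoop Name=Spark \
    --instance-groups '[{"InstanceCount":1,"InstanceGroupType":"MASTER","InstanceType":"m3.xlarge","Name":"Master instance group - 1"},{"InstanceCount":2,"InstanceGroupType":"CORE","InstanceType":"m3.xlarge","BidPrice":"0.10","Name":"Core instance group - 2"}]' \
    --auto-terminate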

Summary

We've developed our simple ETL application, and got it running on Amazon's EMR platform. Whilst we used AWS because it's the client's platform of choice, in general there's no reason we couldn't take it and run it on another Hadoop platform. This could be a Hadoop platform such as Oracle's Big Data Cloud Service, Cloudera's CDH running on Oracle's Big Data Appliance, or simply a self-managed Hadoop cluster on commodity hardware.

Processing time was in the region of 30 minutes to process 2M rows across 30 files, and in a separate batch run 3.8 hours to process 283 files of around 25M rows in total.

So far, the data that we've processed is only sat in an S3 bucket up in the cloud.

In the next article we'll look at what the options are for actually analysing the data and running reports against it.

Categories: BI & Warehousing

RMAN> TRANSPORT TABLESPACE

Yann Neuhaus - Sun, 2016-12-18 03:12

In a previous post I explained how to use transportable tablespaces from a standby database. Here I’m showing an alternative where you can transport from a backup instead of a standby database. RMAN has been able to do that since 10gR2.

Transportable Tablespace is a beautiful feature: the performance of physical copy and the flexibility of logical export/import. But it has one drawback: the source tablespace must be opened read only when you copy it and export the metadata. This means that you cannot use it from production, such as moving data to a datawarehouse ODS. There’s an alternative to that: restore the tablespace with TSPITR (tablespace point-in-time recovery) into a temporary instance and transport from there.
This is what is automated by RMAN with a simple command: RMAN> TRANSPORT TABLESPACE.

Multitenant

This blog post shows how to do that when you are in 12c multitenant architecture. Even if 12.2 comes with online PDB clone, you may want to transport a single tablespace.

You cannot run TRANSPORT TABLESPACE when connected to a PDB. Let’s test it:

RMAN> connect target sys/oracle@//localhost/PDB1
connected to target database: CDB1:PDB1 (DBID=1975603085)

Here are the datafiles:

RMAN> report schema;
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1A
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
9 250 SYSTEM NO /u02/oradata/CDB1A/PDB1/system01.dbf
10 350 SYSAUX NO /u02/oradata/CDB1A/PDB1/sysaux01.dbf
11 520 UNDOTBS1 NO /u02/oradata/CDB1A/PDB1/undotbs01.dbf
12 5 USERS NO /u02/oradata/CDB1A/PDB1/users01.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
3 20 TEMP 32767 /u02/oradata/CDB1A/PDB1/temp01.dbf

Let’s run the TRANSPORT TABLESPACE command:

RMAN> transport tablespace USERS auxiliary destination '/var/tmp/AUX' tablespace destination '/var/tmp/TTS';
RMAN-05026: warning: presuming following set of tablespaces applies to specified point-in-time
 
List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1
 
Creating automatic instance, with SID='jlDa'
 
initialization parameters used for automatic instance:
db_name=CDB1
db_unique_name=jlDa_pitr_CDB1
compatible=12.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/u01/app/oracle
_system_trig_enabled=FALSE
sga_target=768M
processes=200
db_create_file_dest=/var/tmp/AUX
log_archive_dest_1='location=/var/tmp/AUX'
enable_pluggable_database=true
_clone_one_pdb_recovery=true
#No auxiliary parameter file used
 
starting up automatic instance CDB1
 
Oracle instance started
 
Total System Global Area 805306368 bytes
 
Fixed Size 8793056 bytes
Variable Size 234882080 bytes
Database Buffers 553648128 bytes
Redo Buffers 7983104 bytes
Automatic instance created
 
Removing automatic instance
shutting down automatic instance
Oracle instance shut down
Automatic instance removed
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of transport tablespace command at 12/17/2016 21:33:14
RMAN-07538: Pluggable Database qualifier not allowed when connected to a Pluggable Database

You got the idea: an auxiliary instance is automatically created but then it failed because an internal command cannot be run from a PDB.

Run from CDB

So let’s run it when connected to CDB$ROOT:

echo set on
 
RMAN> connect target sys/oracle
connected to target database: CDB1 (DBID=894360530)

We can see all pluggable databases and all datafiles:

RMAN> report schema;
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1A
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 800 SYSTEM YES /u02/oradata/CDB1A/system01.dbf
3 480 SYSAUX NO /u02/oradata/CDB1A/sysaux01.dbf
4 65 UNDOTBS1 YES /u02/oradata/CDB1A/undotbs01.dbf
5 250 PDB$SEED:SYSTEM NO /u02/oradata/CDB1A/pdbseed/system01.dbf
6 350 PDB$SEED:SYSAUX NO /u02/oradata/CDB1A/pdbseed/sysaux01.dbf
7 5 USERS NO /u02/oradata/CDB1A/users01.dbf
8 520 PDB$SEED:UNDOTBS1 NO /u02/oradata/CDB1A/pdbseed/undotbs01.dbf
9 250 PDB1:SYSTEM YES /u02/oradata/CDB1A/PDB1/system01.dbf
10 350 PDB1:SYSAUX NO /u02/oradata/CDB1A/PDB1/sysaux01.dbf
11 520 PDB1:UNDOTBS1 YES /u02/oradata/CDB1A/PDB1/undotbs01.dbf
12 5 PDB1:USERS NO /u02/oradata/CDB1A/PDB1/users01.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 240 TEMP 32767 /u02/oradata/CDB1A/temp01.dbf
2 32 PDB$SEED:TEMP 32767 /u02/oradata/CDB1A/pdbseed/temp012016-08-23_14-12-45-799-PM.dbf
3 20 PDB1:TEMP 32767 /u02/oradata/CDB1A/PDB1/temp01.dbf

We can run the TRANSPORT TABLESPACE command from here, naming the tablespace prefixed with the PDB name PDB1:USERS

transport tablespace … auxiliary destination … tablespace destination

The TRANSPORT TABLESPACE command needs a destination where to put the datafiles and dump file to transport (TABLESPACE DESTINATION) and also needs an auxiliary destination (AUXILIARY DESTINATION). It seems it is mandatory, which is different from the PDBPITR where the FRA is used by default.


RMAN> transport tablespace PDB1:USERS auxiliary destination '/var/tmp/AUX' tablespace destination '/var/tmp/TTS';

And then you will see all what RMAN does for you. I’ll show most of the output.

UNDO

Restoring a tablespace will need to apply redo and then roll back the transactions that were open at that PIT. This is why RMAN has to restore all tablespaces that may contain UNDO:

RMAN-05026: warning: presuming following set of tablespaces applies to specified point-in-time
 
List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace PDB1:SYSTEM
Tablespace UNDOTBS1
Tablespace PDB1:UNDOTBS1

I suppose that when the UNDO_TABLESPACE has changed in the meantime, RMAN cannot guess which tablespace covered the transactions at the requested PIT, but I’ve seen nothing in the TRANSPORT TABLESPACE syntax to specify it. That’s probably for a future post and/or SR.

Auxiliary instance

So RMAN creates an auxiliary instance with some specific parameters to be sure there’s no side effect on the source database (the RMAN target one).

Creating automatic instance, with SID='qnDA'
 
initialization parameters used for automatic instance:
db_name=CDB1
db_unique_name=qnDA_pitr_PDB1_CDB1
compatible=12.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/u01/app/oracle
_system_trig_enabled=FALSE
sga_target=768M
processes=200
db_create_file_dest=/var/tmp/AUX
log_archive_dest_1='location=/var/tmp/AUX'
enable_pluggable_database=true
_clone_one_pdb_recovery=true
#No auxiliary parameter file used
 
 
starting up automatic instance CDB1
 
Oracle instance started
 
Total System Global Area 805306368 bytes
 
Fixed Size 8793056 bytes
Variable Size 234882080 bytes
Database Buffers 553648128 bytes
Redo Buffers 7983104 bytes
Automatic instance created

Restore

The goal is to transport the tablespace, so RMAN first checks that the set is self-contained:

Running TRANSPORT_SET_CHECK on recovery set tablespaces
TRANSPORT_SET_CHECK completed successfully

and starts the restore of controlfile and datafiles (the CDB SYSTEM, SYSAUX, UNDO and the PDB SYSTEM, SYSAUX, UNDO and the tablespaces to transport)

contents of Memory Script:
{
# set requested point in time
set until scn 1836277;
# restore the controlfile
restore clone controlfile;
 
# mount the controlfile
sql clone 'alter database mount clone database';
 
# archive current online log
sql 'alter system archive log current';
}
executing Memory Script
 
executing command: SET until clause
 
Starting restore at 17-DEC-16
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=253 device type=DISK
 
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u02/fast_recovery_area/CDB1A/autobackup/2016_12_17/o1_mf_s_930864638_d5c83gxl_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fast_recovery_area/CDB1A/autobackup/2016_12_17/o1_mf_s_930864638_d5c83gxl_.bkp tag=TAG20161217T213038
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03
output file name=/var/tmp/AUX/CDB1A/controlfile/o1_mf_d5c88zp3_.ctl
Finished restore at 17-DEC-16
 
sql statement: alter database mount clone database
 
sql statement: alter system archive log current
 
contents of Memory Script:
{
# set requested point in time
set until scn 1836277;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile 1 to new;
set newname for clone datafile 9 to new;
set newname for clone datafile 4 to new;
set newname for clone datafile 11 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 10 to new;
set newname for clone tempfile 1 to new;
set newname for clone tempfile 3 to new;
set newname for datafile 12 to
"/var/tmp/TTS/users01.dbf";
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 1, 9, 4, 11, 3, 10, 12;
 
switch clone datafile all;
}
executing Memory Script
 
executing command: SET until clause
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
executing command: SET NEWNAME
 
renamed tempfile 1 to /var/tmp/AUX/CDB1A/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 3 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_temp_%u_.tmp in control file
 
Starting restore at 17-DEC-16
using channel ORA_AUX_DISK_1
 
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /var/tmp/AUX/CDB1A/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00004 to /var/tmp/AUX/CDB1A/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /var/tmp/AUX/CDB1A/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fast_recovery_area/CDB1A/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c83n81_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fast_recovery_area/CDB1A/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c83n81_.bkp tag=TAG20161217T213044
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:35
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00009 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00011 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00010 to /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00012 to /var/tmp/TTS/users01.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fast_recovery_area/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c851hh_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fast_recovery_area/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/backupset/2016_12_17/o1_mf_nnndf_TAG20161217T213044_d5c851hh_.bkp tag=TAG20161217T213044
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:25
Finished restore at 17-DEC-16
 
datafile 1 switched to datafile copy
input datafile copy RECID=11 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/datafile/o1_mf_system_d5c8993k_.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=12 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_system_d5c8d8ow_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=13 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/datafile/o1_mf_undotbs1_d5c8998b_.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=14 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_undotbs1_d5c8d8g6_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=15 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/datafile/o1_mf_sysaux_d5c8996o_.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=16 STAMP=930865006 file name=/var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_sysaux_d5c8d8o7_.dbf
datafile 12 switched to datafile copy
input datafile copy RECID=17 STAMP=930865006 file name=/var/tmp/TTS/users01.dbf

You noticed that SYSTEM, SYSAUX and UNDO are restored in the auxiliary location, while the tablespace to transport (USERS here) goes directly to its destination. If you transport it on the same server, this avoids any additional copy of it.

Recover

The recovery then continues automatically to the point in time, which you can also specify explicitly with an UNTIL clause or a restore point.


contents of Memory Script:
{
# set requested point in time
set until scn 1836277;
# online the datafiles restored or switched
sql clone "alter database datafile 1 online";
sql clone 'PDB1' "alter database datafile
9 online";
sql clone "alter database datafile 4 online";
sql clone 'PDB1' "alter database datafile
11 online";
sql clone "alter database datafile 3 online";
sql clone 'PDB1' "alter database datafile
10 online";
sql clone 'PDB1' "alter database datafile
12 online";
# recover and open resetlogs
recover clone database tablespace "PDB1":"USERS", "SYSTEM", "PDB1":"SYSTEM", "UNDOTBS1", "PDB1":"UNDOTBS1", "SYSAUX", "PDB1":"SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script
 
executing command: SET until clause
 
sql statement: alter database datafile 1 online
 
sql statement: alter database datafile 9 online
 
sql statement: alter database datafile 4 online
 
sql statement: alter database datafile 11 online
 
sql statement: alter database datafile 3 online
 
sql statement: alter database datafile 10 online
 
sql statement: alter database datafile 12 online
 
Starting recover at 17-DEC-16
using channel ORA_AUX_DISK_1
 
starting media recovery
 
archived log for thread 1 with sequence 30 is already on disk as file /u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_30_d5c83ll5_.arc
archived log for thread 1 with sequence 31 is already on disk as file /u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_31_d5c8783v_.arc
archived log file name=/u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_30_d5c83ll5_.arc thread=1 sequence=30
archived log file name=/u02/fast_recovery_area/CDB1A/archivelog/2016_12_17/o1_mf_1_31_d5c8783v_.arc thread=1 sequence=31
media recovery complete, elapsed time: 00:00:02
Finished recover at 17-DEC-16
 
database opened
 
contents of Memory Script:
{
sql clone 'alter pluggable database PDB1 open';
}
executing Memory Script
 
sql statement: alter pluggable database PDB1 open

Export TTS

The restored tablespace can now be set read-only, which was the goal.

contents of Memory Script:
{
# make read only the tablespace that will be exported
sql clone 'PDB1' 'alter tablespace
USERS read only';
# create directory for datapump export
sql clone 'PDB1' "create or replace directory
STREAMS_DIROBJ_DPDIR as ''
/var/tmp/TTS''";
}
executing Memory Script
 
sql statement: alter tablespace USERS read only

Now the metadata export runs (the equivalent of a Data Pump export with TRANSPORT_TABLESPACES):


sql statement: create or replace directory STREAMS_DIROBJ_DPDIR as ''/var/tmp/TTS''
 
Performing export of metadata...
EXPDP> Starting "SYS"."TSPITR_EXP_qnDA_urDb":
 
EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
EXPDP> Master table "SYS"."TSPITR_EXP_qnDA_urDb" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TSPITR_EXP_qnDA_urDb is:
EXPDP> /var/tmp/TTS/dmpfile.dmp
EXPDP> ******************************************************************************
EXPDP> Datafiles required for transportable tablespace USERS:
EXPDP> /var/tmp/TTS/users01.dbf
EXPDP> Job "SYS"."TSPITR_EXP_qnDA_urDb" successfully completed at Sat Dec 17 21:41:06 2016 elapsed 0 00:00:47
Export completed
 
Not performing table import after point-in-time recovery

The last message makes me think that this RMAN code shares the code path that manages RECOVER TABLE.

Then RMAN lists the commands to run the import (also available in a generated script) and removes the auxiliary instance.

Cleanup

Not everything has been removed:
[oracle@VM117 blogs]$ du -ha /var/tmp/AUX
0 /var/tmp/AUX/CDB1A/controlfile
201M /var/tmp/AUX/CDB1A/onlinelog/o1_mf_51_d5c8k0oo_.log
201M /var/tmp/AUX/CDB1A/onlinelog/o1_mf_52_d5c8kcjp_.log
201M /var/tmp/AUX/CDB1A/onlinelog/o1_mf_53_d5c8kskz_.log
601M /var/tmp/AUX/CDB1A/onlinelog
0 /var/tmp/AUX/CDB1A/datafile
521M /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile/o1_mf_undo_1_d5c8m1nx_.dbf
521M /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9/datafile
521M /var/tmp/AUX/CDB1A/3ABD2FF082A634B5E053754EA8C022A9
1.1G /var/tmp/AUX/CDB1A
1.1G /var/tmp/AUX

Import TTS

In the destination directory you find the tablespace datafile, the metadata dump, and a script that can be run to import the tablespace:

[oracle@VM117 blogs]$ du -ha /var/tmp/TTS
5.1M /var/tmp/TTS/users01.dbf
132K /var/tmp/TTS/dmpfile.dmp
4.0K /var/tmp/TTS/impscrpt.sql
5.2M /var/tmp/TTS

For this example, I import it on the same server, in a different pluggable database:


SQL> connect / as sysdba
Connected.
SQL> alter session set container=PDB2;
Session altered.

and simply run the script provided:

SQL> set echo on
 
SQL> @/var/tmp/TTS/impscrpt.sql
 
SQL> /*
SQL> The following command may be used to import the tablespaces.
SQL> Substitute values for <logon> and <directory_name>.
SQL>
SQL> impdp <logon> directory=<directory_name> dumpfile= 'dmpfile.dmp' transport_datafiles= /var/tmp/TTS/users01.dbf
SQL> */
SQL>
SQL> --
SQL> --
SQL> --
SQL> --
SQL> CREATE DIRECTORY STREAMS$DIROBJ$1 AS '/var/tmp/TTS/';
Directory created.
 
SQL> CREATE DIRECTORY STREAMS$DIROBJ$DPDIR AS '/var/tmp/TTS';
Directory created.
 
SQL> /* PL/SQL Script to import the exported tablespaces */
SQL> DECLARE
2 --
3 tbs_files dbms_streams_tablespace_adm.file_set;
4 cvt_files dbms_streams_tablespace_adm.file_set;
5
6 --
7 dump_file dbms_streams_tablespace_adm.file;
8 dp_job_name VARCHAR2(30) := NULL;
9
10 --
11 ts_names dbms_streams_tablespace_adm.tablespace_set;
12 BEGIN
13 --
14 dump_file.file_name := 'dmpfile.dmp';
15 dump_file.directory_object := 'STREAMS$DIROBJ$DPDIR';
16
17 --
18 tbs_files( 1).file_name := 'users01.dbf';
19 tbs_files( 1).directory_object := 'STREAMS$DIROBJ$1';
20
21 --
22 dbms_streams_tablespace_adm.attach_tablespaces(
23 datapump_job_name => dp_job_name,
24 dump_file => dump_file,
25 tablespace_files => tbs_files,
26 converted_files => cvt_files,
27 tablespace_names => ts_names);
28
29 --
30 IF ts_names IS NOT NULL AND ts_names.first IS NOT NULL THEN
31 FOR i IN ts_names.first .. ts_names.last LOOP
32 dbms_output.put_line('imported tablespace '|| ts_names(i));
33 END LOOP;
34 END IF;
35 END;
36 /
PL/SQL procedure successfully completed.
 
SQL>
SQL> --
SQL> DROP DIRECTORY STREAMS$DIROBJ$1;
Directory dropped.
 
SQL> DROP DIRECTORY STREAMS$DIROBJ$DPDIR;
Directory dropped.
 
SQL> --------------------------------------------------------------
SQL> -- End of sample PL/SQL script
SQL> --------------------------------------------------------------

Of course, you don't have to use this script: you can also run the import directly with impdp:

SQL> alter session set container=pdb2;
Session altered.
SQL> create directory tts as '/var/tmp/TTS';
Directory created.
SQL> host impdp pdbadmin/oracle@//localhost/PDB2 directory=TTS dumpfile='dmpfile.dmp' transport_datafiles=/var/tmp/TTS/users01.dbf

You may also use RMAN CONVERT when transporting to a platform with a different endianness.
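A minimal sketch of such a conversion, run on the destination host and assuming a Linux x86 64-bit source (the output path is illustrative and not part of this demo):

RMAN> convert datafile '/var/tmp/TTS/users01.dbf'
      from platform 'Linux x86 64-bit'
      format '/u02/oradata/PDB2/users01.dbf';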

Local Undo

Note that if you run this on the current 12.2.0.1.0 cloud-first DBaaS, you will get an error when RMAN opens the PDB in the auxiliary instance, because there is a bug with local undo. Here is the relevant part of the alert.log:

PDB1(3):Opening pdb with no Resource Manager plan active
PDB1(3):CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE SIZE 188743680 AUTOEXTEND ON NEXT 5242880 MAXSIZE 34359721984 ONLINE
PDB1(3):Force tablespace UNDO_1 to be encrypted with AES128
2016-12-17T18:05:14.759732+00:00
PDB1(3):ORA-00060: deadlock resolved; details in file /u01/app/oracle/diag/rdbms/fqkn_pitr_pdb1_cdb1/fqkn/trace/fqkn_ora_26146.trc
PDB1(3):ORA-60 signalled during: CREATE SMALLFILE UNDO TABLESPACE undo_1 DATAFILE SIZE 188743680 AUTOEXTEND ON NEXT 5242880 MAXSIZE 34359721984 ONLINE...
PDB1(3):Automatic creation of undo tablespace failed with error 604 60
ORA-604 signalled during: alter pluggable database PDB1 open...

I did this demo with LOCAL UNDO OFF.
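You can check the current mode with the LOCAL_UNDO_ENABLED property in DATABASE_PROPERTIES:

SQL> select property_value from database_properties
     where property_name='LOCAL_UNDO_ENABLED';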

So what?

You can use Transportable Tablespaces from a database where you cannot set the source tablespace read-only. The additional cost is restoring and recovering it from a backup, along with SYSTEM, SYSAUX and UNDO. But it is fully automated with a single RMAN command, sketched below.
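Here is a sketch of that command, using the destinations and SCN visible in the log above (the exact invocation and clause order may differ slightly):

RMAN> transport tablespace PDB1:USERS
      tablespace destination '/var/tmp/TTS'
      auxiliary destination '/var/tmp/AUX'
      until scn 1836277;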

 

The post RMAN> TRANSPORT TABLESPACE appeared first on Blog dbi services.

Cloning 12c SE2 Oracle Home for Windows 2012 R2

Michael Dinh - Sun, 2016-12-18 00:13

The process is pretty much the same as in a *nix environment, with a few exceptions.

It was harder than it should have been, since I wanted to perform the task using the CLI rather than the GUI.

This does not cover zipping and unzipping the Oracle Home. I cannot believe how junky WinZip has become; I typically use 7-Zip instead.


 

 


Introduction to Crate.io and CrateDB

DBMS2 - Sat, 2016-12-17 23:27

Crate.io and CrateDB basics include:

  • Crate.io makes CrateDB.
  • CrateDB is a quasi-RDBMS designed to receive sensor data and similar IoT (Internet of Things) inputs.
  • CrateDB’s creators were perhaps a little slow to realize that the “R” part was needed, but are playing catch-up in that regard.
  • Crate.io is an outfit founded by Austrian guys, headquartered in Berlin, that is turning into a San Francisco company.
  • Crate.io says it has 22 employees and 5 paying customers.
  • Crate.io cites bigger numbers than that for confirmed production users, clearly active clusters, and overall product downloads.

In essence, CrateDB is an open source and less mature alternative to MemSQL. The opportunity for MemSQL and CrateDB alike exists in part because analytic RDBMS vendors didn’t close it off.

CrateDB’s not-just-relational story starts:

  • A column can contain ordinary values (of usual-suspect datatypes) or “objects”, …
  • … where “objects” presumably are the kind of nested/hierarchical structures that are common in the NoSQL/internet-backend world, …
  • … except when they’re just BLOBs (Binary Large OBjects).
  • There's a way to manually define "strict schemas" on the structured objects, and a syntax for navigating their structure in WHERE clauses (see the sketch just after this list).
  • There’s also a way to automagically infer “dynamic schemas”, but it’s simplistic enough to be more suitable for development/prototyping than for serious production.
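Here is a minimal sketch of what that looks like in CrateDB's SQL dialect (the table and column names are made up; the OBJECT(STRICT) column definition and the bracket syntax for navigating nested values in a WHERE clause are the points of interest):

CREATE TABLE sensor_readings (
  ts        TIMESTAMP,
  device_id STRING,
  payload   OBJECT(STRICT) AS (
    temperature DOUBLE,
    location    OBJECT(STRICT) AS (site STRING, room STRING)
  )
);

SELECT device_id, payload['temperature']
FROM sensor_readings
WHERE payload['location']['site'] = 'plant-3';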

Crate gave an example of data from >800 kinds of sensors being stored together in a single table. This leads to significant complexity in the FROM clauses. But querying the same data in a relational schema would be at least as complicated, and probably worse.

One key to understanding Crate’s architectural choices is to note that they’re willing to have different latency/consistency standards for:

  • Writes and single-row look-ups.
  • Aggregates and joins.

And so it makes sense that:

  • Data is banged into CrateDB in a NoSQL-ish kind of way as it arrives, with RYW consistency.
  • The indexes needed for SQL functionality are updated in microbatches as soon as possible thereafter. (Think 100 milliseconds as a base case.) Crate.io characterizes the consistency for this part as "eventual". (A sketch of the corresponding table setting follows below.)
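If I understand correctly, this maps to CrateDB's per-table refresh_interval setting, in milliseconds, inherited from its Elasticsearch underpinnings. A hedged sketch (table name made up, the 100-millisecond figure taken from above):

CREATE TABLE readings (ts TIMESTAMP, val DOUBLE) WITH (refresh_interval = 100);
-- force newly written rows to become visible to queries right away
REFRESH TABLE readings;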

CrateDB will never have real multi-statement transactions, but it has simpler levels of isolation that may be called “transactions” in some marketing contexts.

CrateDB technical highlights include:

  • CrateDB records are stored as JSON documents. (Actually, I didn’t ask whether this was true JSON or rather something “JSON-like”.)
    • In the purely relational case, the documents may be regarded as glorified text strings.
    • I got the impression that BLOB storage was somewhat separate from the rest.
  • CrateDB’s sharding story starts with consistent hashing.
    • Shards are physical-only. CrateDB lacks the elasticity-friendly feature of there being many logical shards for each physical shard.
    • However, you can change your shard count, and any future inserts will go into the new set of shards.
  • In line with its two consistency models, CrateDB also has two indexing strategies.
    • Single-row/primary-key lookups have a “forward lookup” index, whatever that is.
    • Tables also have a columnar index.
      • More complex queries and aggregations are commonly done straight against the columnar index, rather than the underlying data.
      • CrateDB’s principal columnar indexing strategy sounds a lot like inverted-list, which in turn is a lot like standard text indexing.
      • Specific datatypes — e.g. geospatial — can be indexed in different ways.
    • The columnar index is shard-specific, and located at the same node as the shard.
    • At least the hotter parts of the columnar index will commonly reside in memory. (I didn’t ask whether this was via straightforward caching or some more careful strategy.)
  • While I didn’t ask about CrateDB’s replication model in detail, I gathered that:
    • Data is written synchronously to all nodes. (That’s sort of implicit in RYW consistency anyway.)
    • Common replication factors are either 1 or 3, depending on considerations such as the value of the data. But as is usual, some tables can be replicated across all nodes.
    • Data can be read from all replicas, for obvious reasons of performance.
  • Where relevant — e.g. the wire protocol or various SQL syntax specifics — CrateDB tends to emulate Postgres.
  • The CrateDB stack includes Elasticsearch and Lucene, both of which make sense in connection with Crate’s text/document orientation.

Crate.io is proud of its distributed/parallel story.

  • Any CrateDB node can plan a query. Necessary metadata for that is replicated across the cluster.
  • Execution starts on a shard-by-shard basis. Data is sorted at each shard before being sent onward.
  • Crate.io encourages you to run Spark and CrateDB on the same nodes.
    • This is supported by parallel Spark-CrateDB integration of the obvious kind.
    • Crate.io notes a happy synergy to this plan, in that Spark stresses CPU while CrateDB is commonly I/O-bound.

The CrateDB-Spark integration was the only support I could find for various marketing claims about combining analytics with data management.

Given how small and young Crate.io is, there are of course many missing features in CrateDB. In particular:

  • A query can only reshuffle data once. Hence, CrateDB isn’t currently well-designed for queries that join more than 2 tables together.
  • The only join strategy currently implemented is nested loop. Others are in the future.
  • CrateDB has most of ANSI SQL 92, but little or nothing specific to SQL 99. In particular, SQL windowing is under development.
  • Geo-distribution is still under development (even though most CrateDB data isn’t actually about people).
  • I imagine CrateDB administrative tools are still rather primitive.

In any case, creating a robust DBMS is an expensive and time-consuming process. Crate has a long road ahead of it.

Categories: Other

JET Application - Generate with Yeoman - Debug in NetBeans

Andrejus Baranovski - Sat, 2016-12-17 13:53
Let's take a look today at how to debug a JET application initially generated with Yeoman. We can debug in NetBeans, but by default an application generated with Yeoman is not runnable in NetBeans; we need to add some config files manually, and I will describe how. Also note that a JET application created with NetBeans can't be served directly with grunt from the command line either; it would also require manual changes to the config. It would be nice if Oracle made JET applications generated with Yeoman automatically runnable in NetBeans, and vice versa.

I will go step by step through the process (first I would recommend going through JET Getting Started):

1. JET application creation with Yeoman and build with Grunt
2. Manual configuration to be able to open such application in NetBeans
3. JET CSS config to be able to run such application in NetBeans

1. JET application creation with Yeoman and build with Grunt

Run the command: yo oraclejet basicjetapp --template=basic. This creates a simple JET application with one module:


Scripts and various modules are generated. The JET content is located under the src folder; this is the generated application structure:


This is the simplest JET application possible, based on the basic template. I have added a chart to the main page (I'm using the Atom text editor to edit JavaScript):


Supporting variables for the chart are created in the Application Controller module:


The Application Controller module is included in the JET main module, where bindings are applied based on the Application Controller module and the JET context is initialized:
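For readers following along in text only, here is a minimal illustrative sketch of this kind of wiring in a Yeoman-generated JET application (the series/groups names and values are made up, not the exact code from my screenshots):

// js/appController.js - expose chart data from the Application Controller module
define(['ojs/ojcore', 'knockout', 'ojs/ojchart'], function (oj, ko) {
  function ControllerViewModel() {
    var self = this;
    // series and groups consumed by an ojChart component on the main page
    self.chartSeries = ko.observableArray([
      { name: 'Series 1', items: [42, 34] },
      { name: 'Series 2', items: [55, 30] }
    ]);
    self.chartGroups = ko.observableArray(['Group A', 'Group B']);
  }
  return new ControllerViewModel();
});

On the main page, a div bound with ojComponent (component: 'ojChart', type: 'bar', series: chartSeries, groups: chartGroups) renders the chart once main.js applies the Knockout bindings.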


You can build a minified JET structure ready for deployment with the Grunt build:release command. Navigate to the application root folder and run it from there: grunt build:release:


This will produce a web folder (the name can be changed) with the minified JET content:


We can run the JET application with Grunt using the serve:release command: grunt serve:release:


JET application is running:


2. Manual configuration to be able to open such application in NetBeans

To debug a JET application generated with Yeoman, we need to open it in NetBeans. Unfortunately this is not possible by default: NetBeans doesn't recognize the JET project and shows a disabled icon:


We need to manually copy the NetBeans nbproject folder from any other JET application created with NetBeans into our application root folder:


Change the project.xml file and update the project name property:


Update the web context root in the project.properties file:


Update the application paths in the private.xml file:


After these changes, NetBeans recognizes the JET application and it can be opened:


JET application generated with Yeoman is successfully opened in NetBeans:


But there is an issue when trying to run the application in NetBeans: it can't find the JET Alta UI CSS. JET is running, but it looks ugly:


3. JET CSS config to be able to run such application in NetBeans

The JET application generated with Yeoman points to a CSS location which doesn't exist in the folder structure:


After we run the Grunt command grunt build:release, it automatically updates the CSS location. This is why it works with grunt serve:release:


Things are a bit different for a JET application created with NetBeans: such an application does contain the JET Alta UI CSS in the folder originally referenced by the Yeoman-generated application:


I copied this folder into the JET application generated with Yeoman:


This time the JET application runs and displays as it should in NetBeans:


Don't forget to remove the duplicate JET Alta UI CSS folder (we need it only to run/debug in NetBeans) from the release if you run grunt serve:release:


Download the sample JET application from the GitHub directory: basicjetapp.

Start to develop in APEX 5.1, you will gain at least an hour a day!

Dimitri Gielis - Sat, 2016-12-17 05:19
Yesterday APEX 5.1 (5.1.0.00.43) was installed on apex.oracle.com.
This means you can start developing your apps in APEX 5.1 from now on. Unlike the early adopter releases (apexea.oracle.com), you can develop your apps on apex.oracle.com and later export them and import them into your own environment once the on-premises version of APEX 5.1 is available.

APEX 5.1 is again a major update behind the scenes. The page processing is completely different from before; where previously full page reloads were done, there is now much more lightweight traffic and only the necessary data is sent across.

The big feature in this new release is the introduction of Interactive Grids, a successor to both Interactive Reports and Tabular Forms. The other big feature is the integration of Oracle JET, which you see mostly in the data visualisation (charts) part of APEX, but more components will probably follow in future versions. Although those two features address the most common issues we previously had (outdated tabular forms and charts), APEX 5.1 brings much more than that. Equally important for me are the "smaller" improvements which make us even more productive. Below you find some examples...

When creating a new application, the login page is immediately a great looking page:


Previously in APEX 5.0 you had to adapt the login page, see my blog post Pimping the Login Page.

When you want your item to look like this:


APEX 5.1 now has a template option to display the Pre and Post text as a Block:


Or when you want an icon inside your item, there's an Icon CSS Class option selector which shows the gorgeous looking new handcrafted Font APEX icons:



You could do all the item customisations above in APEX 4.2 or 5.0 too, but it would require custom css and some code, whereas now it's declarative in APEX 5.1.

And there's so much more: the ability to switch the style per user, new packaged apps, warn on unsaved changes, no page reload on submit, and so on. These are features that haven't been talked about much yet, but which previously required a plugin or a lot of custom code, and now they're just there.

So those "smaller" features are actually not so small: they are an enormous timesaver and bring your apps at warp speed to modern, great-looking applications.

In the next blog posts I'll go into more detail on some specific features that will gain you at least an hour a day, but in the meantime, embrace APEX 5.1 and start earning those extra hours :)
Categories: Development
