Feed aggregator

Effect of a Backup Retention Policy on BACKUP RECOVERY AREA to Destination

Tom Kyte - Sat, 2017-03-11 15:06
Hi, what's the effect of a backup retention policy on the BACKUP RECOVERY AREA RMAN command? I have all backups in the FRA, and I would like to do a BACKUP RECOVERY AREA TO DESTINATION (disk) to have an "alternative" backup, but I am not sure of th...
Categories: DBA Blogs

Is changing passwords via alter <username> identified by values '<hash>'; a security risk?

Tom Kyte - Sat, 2017-03-11 15:06
<code>create user testuser1 identified by "DummyPass1"; create user testuser2 identified by "DummyPass1"; select name, spare4 from sys.user$ where name in ('TESTUSER1','TESTUSER2'); </code> For the above test case, the password hashes are dif...
Categories: DBA Blogs

Is it the right time to move to MariaDB now?

Yann Neuhaus - Sat, 2017-03-11 03:06

Are you thinking about adding MariaDB to your database landscape, or even about replacing other database systems with MariaDB? Then you should register here. We will be doing a webinar with MariaDB on Thursday, the 23rd of March. The title is: “Why it is a good time to move to MariaDB now”, and after a short introduction to dbi services and what we do in the open source area, Bruno Šimić (Sales Engineer, MariaDB Corporation) will highlight why the time to do so is now.


Hope to see you registered.

 

The post Is it the right time to move to MariaDB now? appeared first on Blog dbi services.

How to preserve double quotes in the data using SQL*Loader data load

Tom Kyte - Fri, 2017-03-10 20:46
Hey, I have data like below and need to preserve double quotes in the data. Please let me know how to load data preserving double quotes using SQL*Loader. Here the last line of the record's Field 2 spans multiple lines, marked in bold: "Filed 1","Field 2",...
Categories: DBA Blogs

Optimize insert or update million records in a table

Tom Kyte - Fri, 2017-03-10 20:46
Hi Tom, I need your help optimizing a procedure that is currently taking 40 mins to insert/update 14+ million records into a table. We have a query with a table (20+ million records) which inserts/updates (14 million records) in...
Categories: DBA Blogs

Deceptive Numbers

Michael Dinh - Fri, 2017-03-10 15:09

My portfolio increased $10,000 today. Original investment is $1,000,000.

I made 50% on my investment today. Original investment is $200.

There is more to it than meets the eye.


Property Graph in Oracle 12.2

Rittman Mead Consulting - Fri, 2017-03-10 11:00

The latest release of Oracle (12.2) includes support for Property Graph, previously available only as part of the Big Data Spatial and Graph tool. Unlike the latter, in which data is held in a NoSQL store (Oracle NoSQL, or Apache HBase), it is now possible to use the Oracle Database itself for holding graph definitions and analysing them.

Here we'll see this in action, using the same dataset as I've previously used - the "Panama Papers".

My starting point is the Oracle Developer Day VM, which at under 8GB is a tenth of the size of the beast that is the BigDataLite VM. BDL is great for exploring the vast Big Data ecosystem, both within and external to the Oracle world. However the Developer Day VM serves our needs perfectly here, having been recently updated for the 12.2 release of Oracle. You can also use DB 12.2 in Oracle Cloud, as well as the Docker image.

Prepare Database for Property Graph

The steps below are based on Zhe Wu's blog "Graph Database Says Hello from the Cloud (Part III)", modified slightly for the differing SIDs etc on Developer Day VM.

First, set the Oracle environment by running from a bash prompt

. oraenv

When prompted for SID enter orcl12c:

[oracle@vbgeneric ~]$ . oraenv
ORACLE_SID = [oracle] ? orcl12c  
ORACLE_BASE environment variable is not being set since this  
information is not available for the current user ID oracle.  
You can set ORACLE_BASE manually if it is required.  
Resetting ORACLE_BASE to its previous value or ORACLE_HOME  
The Oracle base has been set to /u01/app/oracle/product/12.2/db_1  
[oracle@vbgeneric ~]$

Now launch SQL*Plus:

sqlplus sys/oracle@localhost:1521/orcl12c as sysdba  

and from the SQL*Plus prompt create a tablespace in which the Property Graph data will be stored:

alter session set container=orcl;

create bigfile tablespace pgts  
datafile '?/dbs/pgts.dat' size 512M reuse autoextend on next 512M maxsize 10G  
EXTENT MANAGEMENT LOCAL  
segment space management auto;  

Now you need to do a bit of work to update the database to hold larger string sizes, using the following steps.

In SQL*Plus:

ALTER SESSION SET CONTAINER=CDB$ROOT;  
ALTER SYSTEM SET max_string_size=extended SCOPE=SPFILE;  
shutdown immediate;  
startup upgrade;  
ALTER PLUGGABLE DATABASE ALL OPEN UPGRADE;  
EXIT;  

Then from the bash shell:

cd $ORACLE_HOME/rdbms/admin  
mkdir /u01/utl32k_cdb_pdbs_output  
mkdir /u01/utlrp_cdb_pdbs_output  
$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -u SYS -d $ORACLE_HOME/rdbms/admin -l '/u01/utl32k_cdb_pdbs_output' -b utl32k_cdb_pdbs_output utl32k.sql

When prompted, enter SYS password (oracle)

After a short time you should get output:

catcon.pl: completed successfully  

Now back into SQL*Plus:

sqlplus sys/oracle@localhost:1521/orcl12c as sysdba  

and restart the database instances:

shutdown immediate;  
startup;  
ALTER PLUGGABLE DATABASE ALL OPEN READ WRITE;  
exit  

Run a second script from the bash shell:

$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -u SYS -d $ORACLE_HOME/rdbms/admin -l '/u01/utlrp_cdb_pdbs_output' -b utlrp_cdb_pdbs_output utlrp.sql

Again, enter SYS password (oracle) when prompted. This step then takes a while (c.15 minutes) to run, so be patient. Eventually it should finish and you'll see:

catcon.pl: completed successfully  

Now to validate that the change has worked. Fire up SQL*Plus:

sqlplus sys/oracle@localhost:1521/orcl12c as sysdba  

And check the value of max_string_size, which should be EXTENDED:

alter session set container=orcl;  
SQL> show parameters max_string;

NAME                                 TYPE        VALUE  
------------------------------------ ----------- ------------------------------
max_string_size                      string      EXTENDED  

Load Property Graph data from Oracle Flat File format

Now we can get going with our Property Graph. We're going to use Gremlin, a Groovy-based interpreter, for interacting with PG. As of Oracle 12.2, it ships with the product itself. Launch it from bash:

cd $ORACLE_HOME/md/property_graph/dal/groovy  
sh gremlin-opg-rdbms.sh

--------------------------------
Mar 08, 2017 8:52:22 AM java.util.prefs.FileSystemPreferences$1 run  
INFO: Created user preferences directory.  
opg-oracledb>  

First off, let's create the Property Graph object in Oracle itself. Under the covers, this will set up the necessary database objects that will store the data.

cfg = GraphConfigBuilder.\  
        forPropertyGraphRdbms().\
        setJdbcUrl("jdbc:oracle:thin:@127.0.0.1:1521/ORCL").\
        setUsername("scott").\
        setPassword("oracle").\
        setName("panama").\
        setMaxNumConnections(8).\
        build();
opg = OraclePropertyGraph.getInstance(cfg);  

You can also do this with the PL/SQL command exec opg_apis.create_pg('panama', 4, 8, 'PGTS');. Either way, the effect is the same; a set of tables created in the owner's schema:

SQL> select table_name from user_tables;  
TABLE_NAME  
------------------------------------------
PANAMAGE$  
PANAMAGT$  
PANAMAVT$  
PANAMAIT$  
PANAMASS$  
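
As a quick illustration of the PL/SQL route mentioned above, a minimal sketch from a SQL*Plus or SQLcl session connected as the graph owner (scott), reusing the same call and the PGTS tablespace created earlier:

SQL> exec opg_apis.create_pg('panama', 4, 8, 'PGTS');

PL/SQL procedure successfully completed.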

Now let's load the data. I'm using the Oracle Flat File format here, having converted it from the original CSV format using R. For more details of why and how, see my article here.

From the Gremlin prompt, run:

// opg.clearRepository();     // start from scratch
opgdl=OraclePropertyGraphDataLoader.getInstance();  
efile="/home/oracle/panama_edges.ope"  
vfile="/home/oracle/panama_nodes.opv"  
opgdl.loadData(opg, vfile, efile, 1, 10000, true, null);  

This will take a few minutes. Once it's completed you'll get a null response, but you can verify that the data has loaded successfully using the opg.count* functions:

opg-oracledb> opgdl.loadData(opg, vfile, efile, 1, 10000, true, null);  
==>null
opg-oracledb> opg.countEdges()  
==>1265690
opg-oracledb> opg.countVertices()  
==>838295

We can inspect the data in Oracle itself too. Here I'm using SQLcl, which is available by default on the Developer Day VM. Using the ...VT$ table we can query the distinct properties of the nodes (vertices) in the graph:

SQL> select distinct k from panamaVT$;  
K  
----------------------------
Entity incorporation.date  
Entity company.type  
Entity note  
ID  
Officer icij.id  
Countries  
Type  
Entity status  
Country  
Source ID  
Country Codes  
Entity struck.off.date  
Entity address  
Name  
Entity jurisdiction  
Entity jurisdiction.description  
Entity dorm.date

17 rows selected.  
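
As a cross-check of the Gremlin counts above, you can also count elements directly in SQL - a sketch, counting distinct IDs because the property tables can hold several rows per element (one per property):

-- should match opg.countVertices()
select count(distinct vid) from panamaVT$;

-- should match opg.countEdges()
select count(distinct eid) from panamaGE$;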

Inspect the edges:

[oracle@vbgeneric ~]$ sql scott/oracle@localhost:1521/orcl

SQL> select p.* from PANAMAGE$ p where rownum<5;

       EID       SVID       DVID EL               K       T V      VN VT     SL VTS  VTE  FE
---------- ---------- ---------- ---------------- ---- ---- ---- ---- ---- ---- ---- ---- ----
         6          6     205862 officer_of
        11         11     228601 officer_of
        30         36     216748 officer_of
        34         39     216487 officer_of

SQL>  

You can also natively execute some of the Property Graph algorithms from PL/SQL itself. Here is how to run the PageRank algorithm, which can be used to identify the most significant nodes in a graph, assigning them each a score (the "page rank" value):

set serveroutput on  
DECLARE  
    wt_pr  varchar2(2000); -- name of the table to hold PR value of the current iteration
    wt_npr varchar2(2000); -- name of the table to hold PR value for the next iteration
    wt3    varchar2(2000);
    wt4    varchar2(2000);
    wt5    varchar2(2000);
    n_vertices number;
BEGIN  
    wt_pr := 'panamaPR';
    opg_apis.pr_prep('panamaGE$', wt_pr, wt_npr, wt3, wt4, null);
    dbms_output.put_line('Working table names  ' || wt_pr
       || ', wt_npr ' || wt_npr || ', wt3 ' || wt3 || ', wt4 '|| wt4);
    opg_apis.pr('panamaGE$', 0.85, 10, 0.01, 4, wt_pr, wt_npr, wt3, wt4, 'SYSAUX', null, n_vertices);
END;  
/

When run, this creates a new table with the PageRank score for each vertex in the graph, which can then be queried like any other table:

SQL> select * from panamaPR  
  2  order by PR desc
  3* fetch first 5 rows only;
      NODE         PR          C
---------- ---------- ----------
    236724 8851.73652          0
    288469 904.227685          0
    264051 667.422717          0
    285729 562.561604          0
    237076 499.739316          0

On its own this is not much use; but joined to the vertices table, we can now find the top-ranked vertices within our graph:

SQL> select pr.pr, v.k,v.V from panamaPR pr inner join PANAMAVT$ V on pr.NODE = v.vid where v.K = 'Name' order by PR desc fetch first 5 rows only;  
        PR K          V
---------- ---------- ---------------
8851.73652 Name       Portcullis TrustNet Chambers P.O. Box 3444 Road Town- Tortola British Virgin Isl  
904.227685 Name       Unitrust Corporate Services Ltd. John Humphries House- Room 304 4-10 Stockwell Stre  
667.422717 Name       Company Kit Limited Unit A- 6/F Shun On Comm Bldg. 112-114 Des Voeux Road C.- Hong  
562.561604 Name       Sealight Incorporations Limited Room 1201- Connaught Commercial Building 185 Wanc  
499.739316 Name       David Chong & Co. Office B1- 7/F. Loyong Court 212-220 Lockhart Road Wanchai Hong K

SQL>  

Since our vertices in this graph have properties, including "Type", we can also analyse it by that - the following shows the top ranked vertices that are Officers:

SQL> select V.vid, pr.pr from panamaPR pr inner join PANAMAVT$ V on pr.NODE = v.vid where v.K = 'Type' and v.V = 'Officer' order by PR desc fetch first 5 rows only;  
       VID         PR
---------- ----------
  12171184 1.99938104
  12030645 1.56722346
  12169701 1.55754873
  12143648 1.46977361
  12220783 1.39846834

which we can then put in a subquery to show the details for these nodes:

with OfficerPR as  
        (select V.vid, pr.pr
          from panamaPR pr
               inner join PANAMAVT$ V
               on pr.NODE = v.vid
         where v.K = 'Type' and v.V = 'Officer'
      order by PR desc
      fetch first 5 rows only)
select pr2.pr,v2.k,v2.v  
from OfficerPR pr2  
     inner join panamaVT$ v2
     on pr2.vid = v2.vid
where v2.k in ('Name','Countries');  
        PR K          V
---------- ---------- -----------------------
1.99938104 Countries  Guernsey  
1.99938104 Name       Cannon Asset Management Limited re G006  
1.56722346 Countries  Gibraltar  
1.56722346 Name       NORTH ATLANTIC TRUST COMPANY LTD. AS TRUSTEE THE DAWN TRUST  
1.55754873 Countries  Guernsey  
1.55754873 Name       Cannon Asset Management Limited re J006  
1.46977361 Countries  Portugal  
1.46977361 Name       B-49-MARQUIS-CONSULTADORIA E SERVICOS (SOCIEDADE UNIPESSOAL) LDA  
1.39846834 Countries  Cyprus  
1.39846834 Name       SCIVIAS TRUST  MANAGEMENT LTD

10 rows selected.  

But here we get into the limitations of SQL - already this is starting to look like a bit of a complex query to maintain. This is where PGQL comes in, as it enables us to express the above request much more eloquently. The key thing with PGQL is that it understands the concept of a 'node', which removes the need for the convoluted sub-select above that first identifies the top-ranked nodes with a given property (Type = Officer) and then, for those nodes, shows information about them (Name and Countries). The above SQL could be expressed in PGQL simply as:

SELECT n.pr, n.name, n.countries  
WHERE (n WITH Type =~ 'Officer')  
ORDER BY n.pr limit 5  

At the moment Property Graph in the Oracle DB doesn't support PGQL - but I'd expect to see it in the future.

Jupyter Notebooks

As well as working with the Property Graph in SQL and Gremlin, we can use the Python API, which is shipped with Oracle 12.2. I'd strongly recommend using it through a notebook, as this provides an excellent environment in which to prototype code and explore the results. Here I'll use Jupyter, but Apache Zeppelin is also very good.

First let's install Anaconda Python, which includes Jupyter Notebooks:

wget https://repo.continuum.io/archive/Anaconda2-4.3.0-Linux-x86_64.sh  
bash Anaconda2-4.3.0-Linux-x86_64.sh  

In the install options I use the default path (/home/oracle) as the location, and keep the default answer (no) at the prompt.

Launch Jupyter, telling it to listen on any NIC (not just localhost). If you installed anaconda in a different path from the default you'll need to amend the /home/oracle/ bit of the path.

/home/oracle/anaconda2/bin/jupyter notebook --ip 0.0.0.0

If you ran the above command from a terminal window within the VM, Firefox will pop up showing the Jupyter home page.

If you're using the VM headless, you'll now want to fire up your own web browser and go to http://<ip>:8888, using the token given in the startup log of Jupyter to log in.

Either way, you should now have a functioning Jupyter notebook environment.

Now let's install the Property Graph support into the Python & Jupyter environment. First, make sure you've got the right Python set: confirm with which that it's the Anaconda version you installed, and that when you run python you see Anaconda in the version details:

[oracle@vbgeneric ~]$ export PATH=/home/oracle/anaconda2/bin:$PATH
[oracle@vbgeneric ~]$ which python
~/anaconda2/bin/python
[oracle@vbgeneric ~]$ python -V
Python 2.7.13 :: Anaconda 4.3.0 (64-bit)  
[oracle@vbgeneric ~]$

Then run the following:

cd $ORACLE_HOME/md/property_graph/pyopg  
touch README  
python ./setup.py install  

Without the README being created, the install fails with IOError: [Errno 2] No such file or directory: './README'.

You need to be connected to the internet for this as it downloads dependencies as needed. After a few screenfuls of warnings that appear OK to ignore, the installation should be successful:

[...]
creating /u01/userhome/oracle/anaconda2/lib/python2.7/site-packages/JPype1-0.6.2-py2.7-linux-x86_64.egg  
Extracting JPype1-0.6.2-py2.7-linux-x86_64.egg to /u01/userhome/oracle/anaconda2/lib/python2.7/site-packages  
Adding JPype1 0.6.2 to easy-install.pth file

Installed /u01/userhome/oracle/anaconda2/lib/python2.7/site-packages/JPype1-0.6.2-py2.7-linux-x86_64.egg  
Finished processing dependencies for pyopg==1.0  

Now you can use the Python interface to Property Graph (pyopg) from within Jupyter. I've put the notebook on gist.github.com, meaning that you can download it from there and run it yourself in Jupyter.

Categories: BI & Warehousing

Scaling up/down Oracle Cloud Swarm services

Marcelo Ochoa - Fri, 2017-03-10 09:04
Following on from the last post about how to deploy an Elastic Search (ES) cluster on Oracle Cloud, this post will play with scaling a Swarm service up and down.
We have an ES cluster deployed including a master node, four data nodes and one ingest node:
$ eval $(docker-machine env oc5)
[oc5]$ docker service ls
ID            NAME       MODE        REPLICAS  IMAGE
a32wxoob53hv  es_ingest  replicated  1/1       elasticsearch/swarm:5.0.0
crjhzsunyj12  es_master  replicated  1/1       elasticsearch/swarm:5.0.0
w7abwyokpp6d  es_data    replicated  4/4       elasticsearch/swarm:5.0.0

Then, using an example provided in chapter 8 of the Elasticsearch 5.x Cookbook from Packt Publishing, I uploaded 1000 documents through the ingest node:
$ eval $(docker-machine env oc1)
[oc1]$ docker ps
CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS              PORTS                NAMES
98dd1e6882f4        elasticsearch/swarm:5.0.0   "/docker-entrypoin..."   About an hour ago   Up About an hour    9200/tcp, 9300/tcp   es_ingest.1.svszgk01qrupuk9o7uloakprp
[oc1] docker exec -ti es_ingest.1.svszgk01qrupuk9o7uloakprp bash
root@98dd1e6882f4:/usr/share/elasticsearch# curl -XHEAD 'http://192.168.0.8:9200/test-index?pretty=true'
root@98dd1e6882f4:/usr/share/elasticsearch# curl -XDELETE 'http://192.168.0.8:9200/test-index?pretty=true'
root@98dd1e6882f4:/usr/share/elasticsearch# curl -XHEAD 'http://192.168.0.8:9200/test-index?pretty=true'
root@98dd1e6882f4:/usr/share/elasticsearch# curl -H "Expect:" -XPUT 'http://192.168.0.8:9200/test-index?pretty=true' -d '{"mappings": {"test-type": {"properties": {"name": {"term_vector": "with_positions_offsets", "boost": 1.0, "store": "yes", "type": "text"}, "title": {"term_vector": "with_positions_offsets", "boost": 1.0, "store": "yes", "type": "text"}, "parsedtext": {"index": "analyzed", "term_vector": "with_positions_offsets", "boost": 1.0, "store": "yes", "type": "text"}, "tag": {"type": "keyword", "store": "yes"}, "date": {"type": "date", "store": "yes"}, "position": {"type": "geo_point", "store": "yes"}, "uuid": {"store": "yes", "type": "keyword"}}}}, "settings": {"index.number_of_replicas": 1, "index.number_of_shards": 5}}'
root@98dd1e6882f4:/usr/share/elasticsearch# curl -XPOST 'http://192.168.0.8:9200/test-index/_refresh?pretty=true'
root@98dd1e6882f4:/usr/share/elasticsearch# curl -XGET 'http://192.168.0.8:9200/_cluster/health?wait_for_status=green&timeout=0s&pretty=true'
root@98dd1e6882f4:/usr/share/elasticsearch# curl -H "Expect:" -XPUT 'http://192.168.0.8:9200/test-index/test-type/1?pretty=true' -d '{"in_stock": false, "tag": ["perspiciatis", "ullam", "excepturi", "ex"], "name": "Valkyrie", "date": "2014-07-28T16:46:01.668683", "position": {"lat": -17.9940459163244, "lon": -15.110538312702941}, "age": 49, "metadata": [{"num": 5, "name": "Korrek", "value": "3"}, {"num": 5, "name": "Namora", "value": "5"}, {"num": 26, "name": "Nighthawk", "value": "3"}], "price": 19.62106010941592, "description": "ducimus nobis harum doloribus voluptatibus libero nisi omnis officiis exercitationem amet odio odit dolor perspiciatis minima quae voluptas dignissimos facere ullam tempore temporibus laboriosam ad doloremque blanditiis numquam placeat accusantium at maxime consectetur esse earum velit officia dolorum corporis nemo consequatur perferendis cupiditate eum illum facilis sunt saepe"}'
....
After the index is populated with documents, Cerebro shows this state:
The cluster is in green state with four data nodes, and test-index includes 5 shards and 2 replicas. Now I'll scale up our cluster with two more data nodes:
[oc5]$ docker service scale es_data=6
es_data scaled to 6
After a few seconds ES detects the two new nodes in the cluster,

and shards and replicas start being relocated to them until the relocation completes.
Now we scale down our cluster by removing one node; this simulates a hardware failure, for example.
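Scaling down uses the same docker service scale command - a sketch, assuming we drop from the 6 data nodes back to 5:

[oc5]$ docker service scale es_data=5
es_data scaled to 5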
The ES cluster detects the missing node and changes to yellow state; after a few seconds a recovery process starts and re-balances all shards/replicas to the new topology.
But what happens if we scale down to only two nodes?
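Presumably again with docker service scale (the target of 2 data nodes is taken from the text above):

[oc5]$ docker service scale es_data=2
es_data scaled to 2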
Our ES cluster is re-balanced and working, but still in red state, which means it is no longer compliant with the fault-tolerance settings for the index (there are 7 unassigned shards), and we lost 201 documents :(.
Well, my personal conclusions about scaling up/down with Docker Swarm and ES are:
  • if you have a large cluster you can work perfectly well with ephemeral storage in Docker instances/services; remember that we never defined our ES storage externally, so when services are scaled up/down, Docker instances are created/removed and the index storage is discarded with them
  • if the ES cluster has a lot of nodes you can think of it as torrent-like storage, meaning that regardless of whether nodes are added or deleted you always have your index data persistent, and the cluster just moves from green to yellow and back to green again
  • when nodes are added or deleted there will be a period of time when your interconnect network carries more traffic and the CPU usage of the nodes goes up
  • you can pick NFS storage and mount it on all swarm nodes using the same path; then, using ES snapshots, you can take a safe backup of the data (see the sketch after this list) - there are also repository plugins for taking snapshots to HDFS or S3
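As a sketch of that last point: with an NFS export mounted at the same path on every node (the repository name and mount path below are assumptions, the path must also be listed in path.repo in elasticsearch.yml, and 192.168.0.8 is the ES node address used earlier), registering a filesystem snapshot repository and taking a snapshot of test-index looks roughly like this:

# register a shared-filesystem snapshot repository
curl -XPUT 'http://192.168.0.8:9200/_snapshot/nfs_backup?pretty=true' -d '{"type": "fs", "settings": {"location": "/mnt/nfs/es_backups"}}'
# take a snapshot of test-index and wait for it to complete
curl -XPUT 'http://192.168.0.8:9200/_snapshot/nfs_backup/snapshot_1?wait_for_completion=true&pretty=true' -d '{"indices": "test-index"}'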
The next try will be with an Oracle NoSQL cluster.




Low

Scott Spendolini - Fri, 2017-03-10 07:00

Recently, there has been a lot of buzz about "low code" development platforms.  Even the Oracle APEX team has embraced this term (see https://apex.oracle.com/lowcode/ for details). This approach allows the "citizen developer" - someone without a traditional IT background - to build basic applications with little to no code. Platforms such as QuickBase, Appian, Mendix and even SalesForce.com have popped up, offering the promise to quickly build applications with little to no code.  Users from all walks of life can now build and deploy applications to some sort of cloud in just minutes!

But is it possible to build a truly useful application with little to no code?  Perhaps.  Perhaps not.  I suppose that all depends on what you want the application to do and what data it will use.  It probably also depends on the security of the application, and how easy it will be to integrate into a corporate identity management system behind a firewall.  It also probably depends on what type of availability and reliability you need.  And it will definitely depend on how much any of these solutions cost, especially if your application gets popular and more users need to use it.  While some of these companies are solid and not going anywhere soon, a few of these names are new to me, and if they were to fold, it would not be the first time a startup failed.

While I have not tried any of the products that I mentioned, I do have a bit of experience with Oracle APEX, so I'll speak from that angle. APEX does fit into the "low code" profile, as you can easily build an application that manages data, provides visualizations, and even facilitates some basic workflows.  You can do all of this without writing any code, save for maybe a query or two.  There’s even an “App Store” of sorts - called Packaged Applications - that can get you a fully functional point-solution application with a single click.  I’ve seen people ranging from skilled developers with multiple IT-related degrees to vice presidents with limited IT experience build and deploy APEX applications.  The "citizen developer" term truly fits in here.

However, there is a limit as to what you can make any application do without writing code.  Over the years, APEX has done a great job of pushing this limit out further and further.  You can create a much better looking, more secure, more functional APEX application with APEX 5.1 in much less time than you could with versions as recent as APEX 4.0.  But even with the latest and greatest release, if you want to add some business rules or put basic conditions on a region, you’re likely going to have to use a little bit of code.

Think about this: a couple of citizen developers build a basic project management tool in APEX, and then, as it becomes more popular, more people in the organization start to use it.  Thus, the citizen developers enhance and modify it to support multiple teams, multiple users and roles, etc.  All along, there is no oversight from IT about how the application is enhanced and managed.  No standards are followed, as it’s totally up to the citizen developers to choose how it looks and works.  It’s fairly likely that at some point, bad decisions were made.  Perhaps a few security vulnerabilities were introduced, or a couple of poorly-written SQL statements were created.  It’s not all that different from the MS Access mess, but at least this one is backed up...

Low code is fine when it’s just that: low code.  As soon as applications cross some threshold and become a critical component of an enterprise, it’s time to stop calling them low code and transition their ownership to the professionals. With most of the online tools, I don’t believe that this is an option, or at least not a simple one.  If you exceed the capabilities of the platform which you’re building on, it’s going to be a good amount of work to migrate to another one.
This is where APEX differs. While APEX does seem to fit the low code moniker, it’s so much more capable than that.  Calling APEX low code makes me think of other “low” branded things: low fat, low carb, low T, etc.  It’s perfectly possible to start citizen developers in APEX, and show them how to build basic applications that meet some of their less critical business needs.  Some applications will remain somewhat simple, and that’s just fine.  But for the ones that don’t - it doesn’t take much at all to transition ownership of that application to IT, or perhaps just monitor and manage it a bit while still allowing the citizen developer to own it.

Those who have used APEX for years know this: it’s one of the few development platforms that you can become productive with in just a few days, but it will take your entire career to master all it can do.  It’s an ideal platform that meets the criteria of low code.  But it’s so much more - offering one of the most seamless transition paths from low code to enterprise applications.

@OraclePartners EMEA Identity & Cloud Security Forum 2017 - SAVE THE DATE

This year we are excited to host the Oracle EMEA Identity & Cloud Security Partner Forum 2017 at Oracle Budapest, Hungary on April 27th & 28th, 2017. The purpose of this two...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Deploying Oracle E-Business Suite Web Services

This is the third posting in a blog series summarizing the new Oracle E-Business Suite 12.2 Mobile and web services functionality and recommendations for securing them.

Web services are physically deployed differently depending on whether they are defined using Representational State Transfer (REST) or Simple Object Access Protocol (SOAP).  Logically, however, both REST and SOAP web services are deployed from within the Integrated SOA Gateway (ISG). Refer to the E-Business Suite’s documentation for details, but from within the Integrated SOA Gateway, users can deploy web services by locating the particular web service and then clicking on the "Deploy" button.

If you have any questions, please contact us at info@integrigy.com

-Michael Miller, CISSP-ISSMP, CCSP, CCSK

Web Services, DMZ/External, Oracle E-Business Suite
Categories: APPS Blogs, Security Blogs

ORA-01031 insufficient privileges Solution

Complete IT Professional - Fri, 2017-03-10 05:00
Are you getting the “ORA-01031 insufficient privileges” error? Learn what causes this error and how to resolve it in this article. ORA-01031 Cause The cause of the ORA-01031 error is that you’re trying to run a program or function and you don’t have the privileges to run it. This could happen in many situations, such […]
Categories: Development

12cR2: TNS_ADMIN in env.ora

Yann Neuhaus - Fri, 2017-03-10 04:37

The network files (sqlnet.ora, tnsnames.ora, listener.ora) are read by default from ORACLE_HOME/network/admin, but you may have several Oracle installations and want only one place for those files. In that case you can use the TNS_ADMIN environment variable. But are you sure that it is always set when starting the database? The listener? The client? They must be consistent (see https://blog.dbi-services.com/oracle-12cr2-plsql-new-feature-tnsping-from-the-database/). So what do we do for the cases where TNS_ADMIN is not set? We define symbolic links from the ORACLE_HOME to the common location. It would be better to just change the default location, and this is what can be done in 12.2 with env.ora.

By default, the $ORACLE_HOME/env.ora is empty. There are only comments.

If you run any Oracle 12.2 client, the $ORACLE_HOME/env.ora will be read. If nothing is set there, then the default $ORACLE_HOME/network/admin location is used.

$ORACLE_HOME/network/admin

[oracle@VM104 tmp]$ strace -e trace=file tnsping localhost 2>&1 | grep -E "[.]ora"
lstat("/u01/app/oracle/product/12.2.0/dbhome_1/env.ora", {st_mode=S_IFREG|0644, st_size=852, ...}) = 0
open("/u01/app/oracle/product/12.2.0/dbhome_1/env.ora", O_RDONLY) = 3
access("/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora", F_OK) = 0
open("/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora", O_RDONLY) = 3
access("/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora", F_OK) = 0
open("/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora", O_RDONLY) = 3
access("/home/oracle/.tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/etc/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)

Here I have a sqlnet.ora but no tnsnames.ora, so the next locations searched are ~/.tnsnames.ora and /etc/tnsnames.ora.

TNS_ADMIN environment variable

If I set the environment variable TNS_ADMIN to /tmp then

[oracle@VM104 tmp]$ TNS_ADMIN=/tmp strace -e trace=file tnsping localhost 2>&1 | grep -E "[.]ora"
lstat("/u01/app/oracle/product/12.2.0/dbhome_1/env.ora", {st_mode=S_IFREG|0644, st_size=852, ...}) = 0
open("/u01/app/oracle/product/12.2.0/dbhome_1/env.ora", O_RDONLY) = 3
access("/tmp/sqlnet.ora", F_OK) = 0
open("/tmp/sqlnet.ora", O_RDONLY) = 3
access("/tmp/sqlnet.ora", F_OK) = 0
open("/tmp/sqlnet.ora", O_RDONLY) = 3
access("/home/oracle/.tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/tmp/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/etc/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)

The directory defined in TNS_ADMIN is searched first

TNS_ADMIN in env.ora

I have added TNS_ADMIN=/tmp to the env.ora:

[oracle@VM104 tmp]$ tail -3 $ORACLE_HOME/env.ora
# Default: $ORACLE_HOME/network/admin
#
TNS_ADMIN=/tmp

When I run tnsping without setting any environment variable, I get exactly the same behaviour as before:


[oracle@VM104 tmp]$ strace -e trace=file tnsping localhost 2>&1 | grep -E "[.]ora"
lstat("/u01/app/oracle/product/12.2.0/dbhome_1/env.ora", {st_mode=S_IFREG|0644, st_size=867, ...}) = 0
open("/u01/app/oracle/product/12.2.0/dbhome_1/env.ora", O_RDONLY) = 3
access("/tmp/sqlnet.ora", F_OK) = 0
open("/tmp/sqlnet.ora", O_RDONLY) = 3
access("/tmp/sqlnet.ora", F_OK) = 0
open("/tmp/sqlnet.ora", O_RDONLY) = 3
access("/home/oracle/.tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/tmp/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/etc/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)

The good thing about this is that the setting is centralized for all binaries run from this ORACLE_HOME.

Both

However, the setting in the environment takes priority over the one in env.ora:

[oracle@VM104 tmp]$ TNS_ADMIN=/var/tmp strace -e trace=file tnsping localhost 2>&1 | grep -E "[.]ora"
lstat("/u01/app/oracle/product/12.2.0/dbhome_1/env.ora", {st_mode=S_IFREG|0644, st_size=867, ...}) = 0
open("/u01/app/oracle/product/12.2.0/dbhome_1/env.ora", O_RDONLY) = 3
access("/var/tmp/sqlnet.ora", F_OK) = 0
open("/var/tmp/sqlnet.ora", O_RDONLY) = 3
access("/var/tmp/sqlnet.ora", F_OK) = 0
open("/var/tmp/sqlnet.ora", O_RDONLY) = 3
access("/home/oracle/.tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/var/tmp/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/etc/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
access("/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)

So the recommendation, if you want to use env.ora, is not to set TNS_ADMIN, especially when starting the listener or the database, to be sure that the same environment is always used. Final note: I've not seen this in the documentation, so if you rely on it for a critical environment it is better to validate with support.

 

The post 12cR2: TNS_ADMIN in env.ora appeared first on Blog dbi services.

Corrupted Block Recovery without Backups

Tom Kyte - Fri, 2017-03-10 02:26
Dear Tom, I have a database that doesn't have good enough backups, and it went down. The database starts up to mount state but does not open. I figured out which block is corrupted; however, I could not find what that block belongs to. rman ...
Categories: DBA Blogs

Generate a CSV Blob and store it in a table

Tom Kyte - Fri, 2017-03-10 02:26
Hi Tom, I have a requirement to export a table's contents into a CSV file and store it as a BLOB in another table. But I don't have the option to store the file in a directory, so I need to perform all the actions together. How can I do that?
Categories: DBA Blogs

create and insert table in same procedure

Tom Kyte - Fri, 2017-03-10 02:26
I have a procedure within which I have to create a table and insert into that table; the insertion values come from an associative array. How do I perform this using dynamic SQL? Please advise. Thanks in advance :)
Categories: DBA Blogs

DBCA problem installing on ORACLE LINUX 7.3 with ORACLE 12cR2

Tom Kyte - Fri, 2017-03-10 02:26
Hi all! Now Oracle 12cR2 for Linux is released. I want to test its new features, so I downloaded Oracle 12cR2 for Linux to install on Oracle Linux 7.3, and an error appears during DBCA. The log is: <code>cat /u01/app/oracle/cfgtoollogs/dbca/orcl/orcl.log [ 2017-...
Categories: DBA Blogs

Archive log

Tom Kyte - Fri, 2017-03-10 02:26
Why do the current and oldest online redo log sequences differ by 2?
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator