Feed aggregator

Configuring Django with Apache on a Raspberry Pi

The Anti-Kyte - Tue, 2017-02-21 07:07

Deb has another job for me to do around the house.
She would like to have a means of looking up which Films/TV Series we have lying around on Blu-Ray or DVD so she can save time looking for films we haven’t actually got. Just to be clear, she doesn’t mind hunting around for the disc in question, she just wants to make sure that it’s somewhere to be found in the first place.
She wants to be able to do this on any device at any time (let’s face it, there’s even a browser on your telly these days).
As DIY jobs go, this is a long way from being the worst as far as I’m concerned. After all, this time I should be able to put something together without the potential for carnage that’s usually attendant when I reach for the toolbox.

I happen to have a Raspberry Pi lying around which should serve as the perfect hardware platform for this sort of low traffic, low data-volume application.
The Pi is running Raspbian Jessie.
Python is therefore the obvious choice of programming language to use. By extension, Django appears to be a rather appropriate framework.
In order to store the details of each movie we have, we’ll need a database. Django uses SQLite as the default.
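For context, once we generate the project later on, its settings.py will already contain a DATABASES block along these lines out of the box (the stock Django 1.x default; nothing we need to change here) :

DATABASES = {
    'default': {
        # SQLite needs no separate server process; the whole database lives in one file
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}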

We’ll also need an HTTP server. Whilst Django has its own built-in “development” server for playing around with, the favoured production HTTP server appears to be Apache.

Now, getting Django and Apache to talk to each other seems to get a bit fiddly in places so what follows is a description of the steps I took to get this working…leaving out all the bits where I hammered my thumb…

Other places you may want to look

There are lots of good resources for Django out there.
The Django Project has a list of Django Tutorials.
One particularly good beginners tutorial, especially if you have little or no experience of programming, is the Django Girls Tutorial.

Making sure that Raspbian is up-to-date

Before we start installing the bits we need, it’s probably a good idea to make sure that the OS on the Pi is up-to-date.
Therefore, open a Terminal Window on the Pi and run the following two commands…

sudo apt-get update -y
sudo apt-get upgrade -y

This may take a while, depending on how up-to-date your system is.
Once these commands have completed, you’ll probably want to make sure you haven’t got any unwanted packages lying around. To achieve this, simply run :

sudo apt-get autoremove

Python Virtual Environments

Look, don’t panic. This isn’t the sort of Virtual Environment that requires hypervisors and Virtual Machines and all that other complicated gubbins. We’re running on a Pi, after all, we really haven’t got the system resources to expend on that sort of nonsense.
A Python virtual environment is simply a way of “insulating” your application’s Python dependencies from those of any other applications you have/are/will/may develop/run on the same physical machine.

Getting this up and running is fairly simple, but first, just as a sanity check, let’s make sure that we have Python 3 installed and available :

python3 --version

[Screenshot: python3 --version output]

Provided all is well, the next step is to install the appropriate Python 3 package for creating and running Virtual Environments so…

sudo pip3 install virtualenv

Next, we need to create a parent directory for our application. I’m going to create this under the home directory of the pi user that I’m connected as on the pi.
I’m going to call this directory “dvds” because I want to keep the name nice and short.
To create a directory under your home in Linux…

mkdir ~/dvds

You can confirm that the directory has been created in the expected location by running …

ls -ld ~/dvds

drwxr-xr-x 5 pi pi 4096 Feb 14 13:05 /home/pi/dvds

Now…

cd ~/dvds
virtualenv dvdsenv

…will create the python executables referenced in this environment :

[Screenshot: virtualenv command output]

Notice that this has created a directory structure under a new directory called dvdsenv :

[Screenshot: directory structure under dvdsenv]

Now start the virtualenv and note what happens to the prompt :

source dvdsenv/bin/activate

[Screenshot: prompt prefixed with (dvdsenv)]

One small but welcome advantage to running in your new environment is that you don’t have to remember the “3” whenever you want to run python. The easiest way to demonstrate this is to stop the virtual environment, get the python version, then re-start the virtual environment and check again, like this…

[Screenshot: python version inside and outside the virtual environment]
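In case the screenshot is hard to make out, the sequence looks roughly like this (the exact version numbers will depend on your Raspbian image) :

(dvdsenv) pi@raspberrypi:~/dvds $ deactivate
pi@raspberrypi:~/dvds $ python --version
Python 2.7.9
pi@raspberrypi:~/dvds $ source dvdsenv/bin/activate
(dvdsenv) pi@raspberrypi:~/dvds $ python --version
Python 3.4.2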

Installing Django

We want to do this in our newly created virtual environment.
So, if you’re not already in it, start it up :

cd ~/dvds
source dvdsenv/bin/activate

Now we use pip3 to get django. NOTE – as with the python command, we don’t need to remember the “3” for pip inside the virtual environment…

pip install django

[Screenshot: pip install django output]

Still in the Virtual environment, we can now create our new django project ( be sure to be in the dvds directory we created earlier) :

cd ~/dvds
django-admin.py startproject dvds .

Note the “.” at the end of this command. That means that the directory tree structure of the new project should be created in the current directory.

Once this has run, you should see a sub-directory called dvds :

[Screenshot: dvds project directory]
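For reference, the layout that startproject generates should look roughly like this (assuming a Django 1.x release, which is what pip will have installed at the time of writing) :

/home/pi/dvds
├── dvds
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── dvdsenv
│   └── ... (the virtual environment we created earlier)
└── manage.py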

We now need to make some changes to some of the files that Django has created in this directory. To make these changes I’m going to use the default Raspbian graphical editor, Leafpad. If you’d prefer something like nano, then knock yourself out. Just replace “leafpad” with the executable name of your editor in the commands that follow…

leafpad ~/dvds/dvds/settings.py

We need to make a couple of changes to this file.
Firstly, in the INSTALLED_APPS section of the file (around about line 33) we want to add our application – dvds. After the change, this particular section of the file should look something like this :

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'dvds',
]

The other thing to do is to make sure that STATIC_ROOT has been defined. If this does not already exist in settings.py then add it at the end of the file :

STATIC_ROOT = os.path.join( BASE_DIR, "static/")

To get Django to accept these changes we need to migrate them. Note that we need to do this from inside the virtual environment so start it if it’s not already running…

cd ~/dvds
source dvdsenv/bin/activate
./manage.py makemigrations
./manage.py migrate

[Screenshot: makemigrations and migrate output]

Before we finally get Django up and running, we need to set up the default admin UI.
To do this, we first need to create an admin user :

./manage.py createsuperuser

[Screenshot: createsuperuser prompts]

…then set up the static files used by the admin app…

./manage.py collectstatic

You have requested to collect static files at the destination
location as specified in your settings:

    /home/pi/dvds/static

This will overwrite existing files!
Are you sure you want to do this?

Type 'yes' to continue, or 'no' to cancel:

Type “yes” and you’ll get …

Copying '/home/pi/dvds/dvdsenv/lib/python3.4/site-packages/django/contrib/admin/static/admin/css/base.css'
Copying '/home/pi/dvds/dvdsenv/lib/python3.4/site-packages/django/contrib/admin/static/admin/css/widgets.css'
Copying '/home/pi/dvds/dvdsenv/lib/python3.4/site-packages/django/contrib/admin/static/admin/css/rtl.css'
...
Copying '/home/pi/dvds/dvdsenv/lib/python3.4/site-packages/django/contrib/admin/static/admin/js/admin/RelatedObjectLookups.js'

61 static files copied to '/home/pi/dvds/static'.

Now we can test that everything is working as expected by running Django’s own “development” http server :

./manage.py runserver

[Screenshot: runserver output]

If we now point the Epiphany browser on the pi to that address, we should see the default Django page :

[Screenshot: default Django welcome page]

Even better than that, if you append “/admin” to the url – i.e.

http://127.0.0.1:8000/admin

You should see…

[Screenshot: Django admin login page]

Using the username and password you just created with the “createsuperuser” command, you should get access to :

[Screenshot: Django admin home page]

Installing Apache

This is fairly straightforward, to start with at least.
First of all, you don’t need to be in the Python Virtual Environment for this so, if you are, then deactivate it :

deactivate

Once this command has completed, the prompt should now return to normal.

I’ll be sure to tell you when you need the Virtual Environment again.

To install Apache…

sudo apt-get install apache2 -y

Once that’s completed, you should be able to confirm that Apache is up and running simply by pointing your browser to :

http://localhost

…which should display the Apache Default Page :

[Screenshot: Apache default page]

In addition to Apache itself, we need some further packages to persuade Apache to serve pages from our Django application :

sudo apt-get install apache2-dev -y
sudo apt-get install apache2-mpm-worker -y
sudo apt-get install libapache2-mod-wsgi-py3 

Got all that ? Right…

Configuring Apache to serve Django Pages using WSGI

First of all, we need to tell Apache about our Django application. To do this we need to edit the 000-default.conf which can be found in the Apache directories :

leafpad /etc/apache2/sites-available/000-default.conf

We need to add some entries to the <VirtualHost> section of the file. Once we’re done, the entire file should look something like this :

<VirtualHost *:80>
	# The ServerName directive sets the request scheme, hostname and port that
	# the server uses to identify itself. This is used when creating
	# redirection URLs. In the context of virtual hosts, the ServerName
	# specifies what hostname must appear in the request's Host: header to
	# match this virtual host. For the default virtual host (this file) this
	# value is not decisive as it is used as a last resort host regardless.
	# However, you must set it for any further virtual host explicitly.
	#ServerName www.example.com

	ServerAdmin webmaster@localhost
	DocumentRoot /var/www/html

	# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
	# error, crit, alert, emerg.
	# It is also possible to configure the loglevel for particular
	# modules, e.g.
	#LogLevel info ssl:warn

	ErrorLog ${APACHE_LOG_DIR}/error.log
	CustomLog ${APACHE_LOG_DIR}/access.log combined

	# For most configuration files from conf-available/, which are
	# enabled or disabled at a global level, it is possible to
	# include a line for only one particular virtual host. For example the
	# following line enables the CGI configuration for this host only
	# after it has been globally disabled with "a2disconf".
	#Include conf-available/serve-cgi-bin.conf

    Alias /static /home/pi/dvds/static
    <Directory /home/pi/dvds/static> 
        Require all granted
    </Directory>

    <Directory /home/pi/dvds/dvds>
        <Files wsgi.py>
            Require all granted
        </Files>
    </Directory>

    WSGIDaemonProcess dvds python-path=/home/pi/dvds python-home=/home/pi/dvds/dvdsenv
    WSGIProcessGroup dvds
    WSGIScriptAlias / /home/pi/dvds/dvds/wsgi.py
</VirtualHost>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
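Incidentally, the WSGIScriptAlias line points Apache at the wsgi.py that django-admin.py startproject generated for us earlier. There is nothing to change in it, but for reference it should look something like this :

"""
WSGI config for the dvds project.
"""
import os

from django.core.wsgi import get_wsgi_application

# tell Django which settings module to use when Apache loads the application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dvds.settings")

application = get_wsgi_application()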

Next, we need to make sure that Apache has access to the bits of Django it needs. To do this, we’ll grant write access to the group that the Apache user belongs to :

chmod g+w ~/dvds/db.sqlite3
chmod g+w ~/dvds
sudo chown :www-data ~/dvds/db.sqlite3
sudo chown :www-data ~/dvds

After all of that, the “Apache” group (www-data) should be the group owner of the application directory as well as our SQLite database :

[Screenshot: directory listing showing www-data group ownership]

Finally, we need to re-start Apache for these changes to take effect :

sudo service apache2 restart

If we now go to the Apache url (http://localhost), we can see that it’s now showing the Django default page :

[Screenshot: Django default page served by Apache]
If you see that then, congratulations, it works !

Accessing Django from another computer on the network

The server name of my Raspberry Pi is raspberrypi. If you want to check that this is the case for you, simply open a Terminal on the pi and run :

uname -n

In order to access the application from other computers on my local network, I’ll need to add this server name to the ALLOWED_HOSTS list in the settings.py file of the application.

To do this :

leafpad ~/dvds/dvds/settings.py

Amend the ALLOWED_HOSTS entry from this :

ALLOWED_HOSTS=[]

…to this…

ALLOWED_HOSTS=['raspberrypi']
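Incidentally, if you also want to keep reaching the site locally (or by IP address) once this is in place, ALLOWED_HOSTS can simply hold several entries, for example :

ALLOWED_HOSTS = ['raspberrypi', 'localhost', '127.0.0.1']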

And you should now be able to access the Django application from a remote machine by using the url :

http://raspberrypi

…like this…

[Screenshot: the application viewed from a remote machine]

Hopefully, this has all helped you to get up and running without hitting your thumb.


Filed under: python Tagged: Apache, Django, Python virtual environment, Raspberry Pi

Webcast: "Simplified and Touch-Friendly User Interface in EBS"

Steven Chan - Tue, 2017-02-21 02:05

Oracle University has a wealth of free webcasts for Oracle E-Business Suite.  If you're looking for the latest updates on our rapidly-evolving OA Framework (OAF) user interface capabilities, see:

Senthilkumar Ramalingam, Group Manager Product Development, shares the latest Oracle Applications Framework (OAF) updates, which include the Oracle Alta UI, a redesigned home page and a set of new components optimized for display on mobile devices such as tablets.  Oracle Alta UI is the next-generation user interface standard from Oracle, offering a modern and compelling UI for both cloud and on-premise applications. In addition, the OAF UI components offer several touch-friendly gestures for common actions, for a smarter and more efficient end user experience. The session also covers major UI enhancements in components like tables, search and attachments. Come see the new components, new gesture-based touch interactions, and a modernized UI that completely transform the Oracle E-Business Suite end user experience. This material was presented at Oracle OpenWorld 2016.

Categories: APPS Blogs

APEX 5.1 New Features - new course date

Denes Kubicek - Tue, 2017-02-21 00:33
APEX 5.1 is finally here. Oracle Application Express gets better with every release and is drawing more and more developers worldwide under its spell. It is easy to develop both simple and complex applications on top of the Oracle stack. It is even genuinely fun!

With APEX 5.1, Interactive Grids have arrived as an important new feature. We have waited a very long time for a modern way to implement Excel-like functionality on a web page with APEX. Now the time has finally come :) . Even master-detail-detail-detail-... relationships can be implemented; our users will be delighted.

Beyond that, there are important new features in many other areas that make our lives easier. At the same time, there are a few important things to keep in mind so that we can perform a smooth upgrade.

In this course you will get to know the new functionality of Oracle Application Express 5.1, and in particular how to use it successfully in practice.

Learn from, and discuss with, the world-renowned Oracle APEX experts:

Denes Kubicek, with many years of project experience in Oracle and APEX with multi-site applications, winner of Oracle Magazine's "Oracle APEX Developer of the Year 2008" award, an Oracle ACE Director, and well known in the Oracle APEX OTN Forum for his sample application, and

Dietmar Aust, an experienced Oracle consultant specialising in Oracle APEX, an Oracle ACE, active in the OTN forums on APEX and Oracle XE, and a regular presenter at the relevant Oracle conferences (DOAG, ODTUG, Oracle Open World).



Categories: Development

AMIS Tools Showroom – The Sequel – Thursday 16 March 2017

Amis Blog - Mon, 2017-02-20 23:25

Thursday 16 March

17:00-21:00

AMIS, Nieuwegein

Register via: bu.om@amis.nl

On Thursday 16 March the second AMIS Tools Showroom session takes place. The first session was on 13 December: in it, 16 AMIS colleagues showed all kinds of handy tools and utilities in short and very short presentations and demonstrations. The emphasis in that session was on tools for monitoring, communication and collaboration.

In this second session we go looking for yet another collection of tools. This invitation therefore covers two aspects:

· Would you like to be there on 16 March to have tools presented to you by your peers?

· Do you have a tool you would like to present during this session? Think, for example, of tools around web conferencing & video streaming, screen cams, text editing, chat, image editing, data visualisation, document sharing, voice recognition. And other tools, apps and plugins that you find handy in your work and would like to show to your colleagues – in a short presentation (5-15 min) – preferably with a demo.

Please indicate via the following form which tools are interesting to you and which tool you would be willing to present on: https://docs.google.com/forms/d/e/1FAIpQLSdNPwUACXxWaZGfs911UraVFQp5aWqeJVEx0xrSRFQTcYnYXA/viewform .
Based on the results of this survey we can put together the agenda for this session.

The post AMIS Tools Showroom – The Sequel – Thursday 16 March 2017 appeared first on AMIS Oracle and Java Blog.

Database Star Academy Membership is Now Open

Complete IT Professional - Mon, 2017-02-20 21:15
The Database Star Academy membership is now open! Here’s what you need to know. What Is the Database Star Academy Membership? It’s a membership site with a monthly fee, that gives you access to many different online video courses and PDF guides related to Oracle database development. What’s Included? As part of your membership, you […]
Categories: Development

Using bitmap indexes in OLTP database with mostly inserts

Tom Kyte - Mon, 2017-02-20 13:06
Hi, We have a table for logging metadata about processed messages in a production system. The table has approx 32M rows today and 25 columns. The total number of rows is expected to be around 100M in the future. When the processing of a...
Categories: DBA Blogs

Trigger based on set of data

Tom Kyte - Mon, 2017-02-20 13:06
Hi All, I have a scenario where I want to create a trigger which will generate a flat file whenever a set of data, like department numbers (10,20,30,40,50...), changes (insert/update) on a particular date (sysdate).
Categories: DBA Blogs

How to fetch up to 5MB of text/string/data from a larger a clob in oracle

Tom Kyte - Mon, 2017-02-20 13:06
Hi Oracle, CREATE TABLE XMLISSUE ( xmltablecolumn clob ); Created a table. As shown in the code below, I have inserted some data into the CLOB: DECLARE XMLCLO...
Categories: DBA Blogs

when I drop a plsql function, 1 view goes invalid, 1 view and 1 procedure remain valid

Tom Kyte - Mon, 2017-02-20 13:06
We recently upgraded to database version 12c... We found an anomaly that we can't explain and are wondering if this is a bug or a misunderstanding of new functionality within 12c. We can reproduce this issue on demand with the following example....
Categories: DBA Blogs

use of Block change tracking file for offline incremental backup

Tom Kyte - Mon, 2017-02-20 13:06
Hi, if block change tracking is ENABLED, does Oracle use this file when we take an offline incremental backup (in mount mode)? As the DB is not open, I think it cannot access the file (the change tracking file); I believe the feature is only...
Categories: DBA Blogs

Converting a column from one data type to another in PostgreSQL

Yann Neuhaus - Mon, 2017-02-20 11:15

Last week at a customer the following question came up: We have a varchar2 column in Oracle that contains a date but is actually stored as a string. When we want to convert this column to be a real date data type in Oracle we can use dbms_redefinition. How can we do that in PostgreSQL? The first answer that came to my mind is: add a new column (of data type date) and populate it with the converted string from the source column, drop the source column and rename the new column to the name of the dropped column. This will certainly work, but it is not the most convenient way: what happens if the application is somehow dependent on the order of the columns? That shouldn’t be the case if the application is written well, but you never know.
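For completeness, a bare-bones sketch of that manual approach could look like this (using the same t1/a example that is set up below):

-- add a new column of the target type, backfill it, then swap the names
alter table t1 add column a_new date;
update t1 set a_new = to_date(a,'DDMMYYYY');
alter table t1 drop column a;
alter table t1 rename column a_new to a;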

To start with, let's generate some test data:

drop table if exists t1;
create table t1 ( a varchar(20) );
insert into t1 (a) values ('01012017');
insert into t1 (a) values ('02012017');
insert into t1 (a) values ('03012017');
insert into t1 (a) values ('04012017');
insert into t1 (a) values ('05012017');
insert into t1 (a) values ('06012017');
insert into t1 (a) values ('07012017');
insert into t1 (a) values ('08012017');
insert into t1 (a) values ('09012017');
insert into t1 (a) values ('10012017');
insert into t1 (a) values ('11012017');
(postgres@[local]:5440) [postgres] > select * from t1;
    a     
----------
 01012017
 02012017
 03012017
 04012017
 05012017
 06012017
 07012017
 08012017
 09012017
 10012017
 11012017
(11 rows)

We now have a varchar column holding the date values as a string. When you look at the “alter table” command in PostgreSQL you’ll notice something like this:

where action is one of:

    ADD [ COLUMN ] [ IF NOT EXISTS ] column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ]
    DROP [ COLUMN ] [ IF EXISTS ] column_name [ RESTRICT | CASCADE ]
    ALTER [ COLUMN ] column_name [ SET DATA ] TYPE data_type [ COLLATE collation ] [ USING expression ]

What will really help here is the “using” keyword because you can do things like this:

(postgres@[local]:5440) [postgres] > alter table t1 alter column a type date using to_date(a,'DDMMYYYY');
ALTER TABLE
(postgres@[local]:5440) [postgres] > \d t1
     Table "public.t1"
 Column | Type | Modifiers 
--------+------+-----------
 a      | date | 

That’s really cool: you can pass a function (this means a self-written function as well) to the alter command to specify how you want to do the conversion.
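To illustrate the “self-written function” point, something along these lines should work just as well (str_to_date is a made-up name here):

-- wrap the conversion logic in our own immutable function ...
create or replace function str_to_date(s varchar) returns date as $$
  select to_date(s,'DDMMYYYY');
$$ language sql immutable;

-- ... and hand that function to the USING clause
alter table t1 alter column a type date using str_to_date(a);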

When there is an index on the column, what happens to the index?

(postgres@[local]:5440) [postgres] > \d t1
             Table "public.t1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | character varying(20) | 
Indexes:
    "i1" btree (a)
(postgres@[local]:5440) [postgres] > alter table t1 alter column a type date using to_date(a,'DDMMYYYY');
ALTER TABLE
Time: 5.931 ms
(postgres@[local]:5440) [postgres] > \d t1
     Table "public.t1"
 Column | Type | Modifiers 
--------+------+-----------
 a      | date | 
Indexes:
    "i1" btree (a)
(postgres@[local]:5440) [postgres] > select indisvalid,indisready,indislive from pg_index where indexrelid = 'i1'::regclass; 
 indisvalid | indisready | indislive 
------------+------------+-----------
 t          | t          | t
(1 row)

Looks fine as well. Let's do a quick test to see if the index is really usable:

(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 11
Time: 2.373 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 22
Time: 39.653 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 44
Time: 1.110 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 88
Time: 1.072 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 176
Time: 1.455 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 352
Time: 1.432 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 704
Time: 3.344 ms
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 1408
Time: 20.972 ms
(postgres@[local]:5440) [postgres] > explain analyze select a from t1 where a = to_date('01012017','dd.mm.yyyy');
                                                 QUERY PLAN                                                 
------------------------------------------------------------------------------------------------------------
 Index Only Scan using i1 on t1  (cost=0.27..8.29 rows=1 width=4) (actual time=0.012..0.012 rows=0 loops=1)
   Index Cond: (a = to_date('01012017'::text, 'dd.mm.yyyy'::text))
   Heap Fetches: 0
 Planning time: 0.062 ms
 Execution time: 0.029 ms
(5 rows)

Perfect. The remaining question is: does this operation block others from reading the table while it is being executed? Let's generate some more data to make the select operation a bit longer and then "watch" the statement in a separate session while we execute the conversion:

drop table if exists t1;
create table t1 ( a varchar(20) );
insert into t1 (a) values ('01012017');
insert into t1 (a) values ('02012017');
insert into t1 (a) values ('03012017');
insert into t1 (a) values ('04012017');
insert into t1 (a) values ('05012017');
insert into t1 (a) values ('06012017');
insert into t1 (a) values ('07012017');
insert into t1 (a) values ('08012017');
insert into t1 (a) values ('09012017');
insert into t1 (a) values ('10012017');
insert into t1 (a) values ('11012017');
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 11
-- and so on and so on ...
(postgres@[local]:5440) [postgres] > insert into t1 select * from t1;
INSERT 0 360448

Then, in session 1 I am doing this:

(postgres@[local]:5440) [postgres] > select count(*) from t1 where a = '01012017';
 count  
--------
 131072
(1 row)
(postgres@[local]:5440) [postgres] > \watch 0.1

In session 2 I am doing the conversion:

(postgres@[local]:5440) [postgres] > alter table t1 alter column a type date using to_date(a,'DDMMYYYY');
ALTER TABLE

In session 1 you’ll notice that the statement is blocked and finally you get this:

ERROR:  date/time field value out of range: "01012017"
LINE 1: select count(*) from t1 where a = '01012017';
                                          ^
HINT:  Perhaps you need a different "datestyle" setting.

Conclusion: converting a column from one data type to another is nothing you want to do while your system is live. When you can afford some downtime, the "using" syntax is what you want and it works quite fast. Hope this helps …

 

The post Converting a column from one data type to another in PostgreSQL appeared first on Blog dbi services.

OBIEE 12c - Regression Tester for Application Roles and Mappings

Rittman Mead Consulting - Mon, 2017-02-20 10:08

Allow me to introduce you to the Application Role Mapping validation script for OBIEE 12c. With this bit of code, we can utilize OBIEE's runcat command script to export our application role and permissions mapping information from multiple environments, and ensure their consistency. The picture below is an example of what you'll see as the final product. While it does not show you a side by side comparison of both environments, what it does do is display those objects in your lower environment that contain inconsistent application role or permissions mappings vs the target environment, and their respective configuration. The items in the path column then point you to that object in your lower environment catalog for further examination.

[Screenshot: security audit results page]

Our script only requires a few easy steps to generate a web-based view of any inconsistencies in application role and permissions mappings between your tested environments. It covers similar ground to the Baseline Validation Tool (BVT), but goes one step further and performs a fine-grained examination, and resulting view, of application role and permissions mappings. The BVT only catches that something is different about the object, as indicated by the Name column value, and tells you where to look.

[Screenshot: BVT output]

While I'll be sure to go into more detail later, the first picture above shows us that we have a number of application role and/or permissions mappings that exist in the lower environment but do not in the target OBIEE environment. Curious? Let's jump right into it.


Overview


The security audit is essentially a 3-step process, and was designed to be really accessible and simple to use. It breaks out like this:

  • Run the security_audit.py script in both OBIEE12c environments (being the lower environment that possesses the proper app role/permission mappings and the target environment).
  • security_audit.py will generate a .csv file in each environment.
  • Move the CSV from the target environment into the directory where you've got the CSV in the lower environment. After you've got the files moved, you'll run security_compare.py and simply pass in the locations of your lower environment CSV, and then that of the target environment. Lastly, a browser will pop up, giving you an immediate view of any inconsistencies that exist between your two OBIEE 12c instances.

Let's take a look at the process in a bit more detail!


Step by Step


Run security_audit.py in Lower Environment

First, let's make sure we've got a few libraries installed that we'll need to run our code. I recommend using pip for this. You'll need to install pandas and flask if you have not done so already. Simply navigate to the security_audit directory you got from GitHub and then from the command line run:

python3 security_audit.py

First, the script is smart enough to figure out which kind of OS it's in. Second, if your DOMAIN_HOME variable is set correctly (probably something like ORACLE_HOME/user_projects/domains/bi), the runcat command will run, exporting a CSV that contains the information we need to run the next script, which does the actual crunching between environments. In Windows, the default output location is C:, in Linux, the /tmp directory.
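If DOMAIN_HOME is not already set in your shell, you could export it yourself before running the script, along these lines (adjust the path to your own installation):

export DOMAIN_HOME=$ORACLE_HOME/user_projects/domains/bi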

[Screenshot: security_audit.py run output]

The tool will prompt you to enter your DOMAIN_HOME, should you not have one set in your environment.

[Screenshot: DOMAIN_HOME prompt]



Run security_audit.py in Target Environment

Next, in our target environment (the OBIEE 12c instance we want to make sure contains the same app role mappings as our lower environment), run the security_audit.py script once again, following the same steps outlined above. Rename the CSV to something different from the file that was written in your lower environment, as we're going to need to put both of these guys in the same directory.

[Screenshot: renamed mapping files]

So on that note, after you've renamed your security_mappings CSVs, move them to the same directory on your lower environment. We're simply renaming them so we don't clobber one version or the other, and for easy reference as to which file belongs to its corresponding environment.



Run security_compare.py in Lower Environment

At this point you should have two security_mappings.csv files (although, the names should be a bit changed at this point) in your lower environment. We're going to need them as inputs for the next part of the regression testing process. This next bit of code simply ensures consistency between the two environments. Once run, it will instantly display any catalog objects in your lower environment that contain any disparities in the way their application roles or permissions are mapped when compared against those in your target environment. As of this blog, there really is no good way to do this with any native OBIEE tool, that is aside from running the runcat reports yourself and doing the crunching. So let's do it!
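Under the hood, the comparison boils down to an anti-join of the two CSV exports. The actual script does more than this, but a bare-bones pandas sketch of the idea (with made-up file names) looks like this:

import pandas as pd

# hypothetical file names - use whatever you renamed your two exports to
lower  = pd.read_csv('security_mappings_dev.csv')
target = pd.read_csv('security_mappings_prod.csv')

# keep the rows that exist in the lower environment but have no exact match
# (object, application role and permissions) in the target environment
merged = lower.merge(target, how='left', on=list(lower.columns), indicator=True)
diffs  = merged[merged['_merge'] == 'left_only'].drop('_merge', axis=1)

print(diffs)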

Open a command prompt in your environment, taking note of where your two CSV files are located. In this example, we'll be using a Windows command prompt, with our files located directly off of the C:\ drive.

In your command prompt, navigate to the location of your .py files. On that note make sure you do not separate these from the other files in the security_audit directory. Flask will need the other files to render the resulting webpage. Back to it. In my example below, I've navigated to the security_audit directory, and then run the following:

[Screenshot: running security_compare.py]

And then watch the magic happen! Make sure you have pop-ups enabled if you're having trouble rendering the page. The script will auto-magically figure out the host name for your environment and run it there.

[Screenshot: permissions listing page]

  1. Owner - this is the owner of the catalog object that is showing the variance in permissions assignment.

  2. Name - this is the name of the object, as it is displayed in the catalog.

  3. Path - this is the path to the object in the web catalog

  4. ACL - these are the detailed permissions mappings based on each entity assigned to the particular object. There is an accompanying key to the left hand side to help you out.

  5. Permissions - detail level permissions mapped to each object by owner entity.


Summary


Having any trouble running the script? Please get in touch! I would also love to hear some feedback on how it might have helped you perform one task or the other. Feel free to use the comments section below for this or to report any issues.

Categories: BI & Warehousing

Node.js application writing to MongoDB – Kafka Streams findings read from Kafka Topic written to MongoDB from Node

Amis Blog - Mon, 2017-02-20 08:35

MongoDB is a popular, lightweight, highly scalable, very fast and easy-to-use NoSQL document database. Written in C++, working with JSON documents (stored in the binary format BSON), and processing JavaScript commands using the V8 engine, MongoDB easily ties in with many different languages and platforms, one of which is Node.JS. In this article, I describe first of all how a very simple interaction between Node.JS and MongoDB can be implemented.

 


Then I do something a little more challenging: the Node.JS application consumes messages from an Apache Kafka topic and writes these messages to a MongoDB database collection, to make the results available for many clients to read and query. Finally I will show a little analytical query against the MongoDB collection, to retrieve some information we would not have been able to get from the plain Kafka Topic (although with Kafka Streams it just may be possible as well).

You will see the Mongo DB driver for Node.JS in action, as well as the kafka-node driver for Apache Kafka from Node.JS. All resources are in the GitHub Repo: https://github.com/lucasjellema/kafka-streams-running-topN.

Prerequisites

Node.JS is installed, as is MongoDB.

Run the MongoDB server. On Windows, the command is mongod, optionally followed by the dbpath parameter to specify in which directory the data files are to be stored:

mongod --dbpath c:\node\nodetest1\data\

For the part where messages are consumed from a Kafka Topic, a running Apache Kafka Cluster is  available – as described in more detail in several previous articles such as https://technology.amis.nl/2017/02/13/kafka-streams-and-nodejs-consuming-and-periodically-reporting-in-node-js-on-the-results-from-a-kafka-streams-streaming-analytics-application/.

 

Getting Started

Start a new Node application, using npm init.

Into this application, install the npm packages kafka-node and mongodb:

npm install mongodb --save

npm install kafka-node --save

This installs the two Node modules with their dependencies and adds them to the package.json.

 

First Node Program – for Creating and Updating Two Static Documents

This simple Node.JS program uses the mongodb driver for Node, connects to a MongoDB server running locally and to a database called test. It then tries to update two documents in the top3 collection in the test database; if a document does not yet exist (based on the key, which is the continent property) it is created. When the application is done running, two documents exist (and have their lastModified property set if they were updated).

var MongoClient = require('mongodb').MongoClient;
var assert = require('assert');

// connect string for mongodb server running locally, connecting to a database called test
var url = 'mongodb://127.0.0.1:27017/test';

MongoClient.connect(url, function(err, db) {
  assert.equal(null, err);
  console.log("Connected correctly to server.");
   var doc = {
        "continent" : "Europe",
         "nrs" : [ {"name":"Belgium"}, {"name":"Luxemburg"}]
      };
   var doc2 = {
        "continent" : "Asia",
         "nrs" : [ {"name":"China"}, {"name":"India"}]
      };
  insertDocument(db,doc, function() {
    console.log("returned from processing doc "+doc.continent);  
    insertDocument(db,doc2, function() {
      console.log("returned from processing doc "+doc2.continent);          
      db.close();
      console.log("Connection to database is closed. Two documents should exist, either just created or updated. ");
      console.log("From the MongoDB shell: db.top3.find() should list the documents. ");
    });
  });
});

var insertDocument = function(db, doc, callback) {
   // first try to update; if a document could be updated, we're done 
   console.log("Processing doc for "+doc.continent);
   updateTop3ForContinent( db, doc, function (results) {      
       if (!results || results.result.n == 0) {
          // the document was not updated so presumably it does not exist; let's insert it  
          db.collection('top3').insertOne( 
                doc
              , function(err, result) {
                   assert.equal(err, null);
                   callback();
                }
              );   
       }//if
       else {
         callback();
       }
 }); //updateTop3ForContinent
}; //insertDocument

var updateTop3ForContinent = function(db, top3 , callback) {
   db.collection('top3').updateOne(
      { "continent" : top3.continent },
      {
        $set: { "nrs": top3.nrs },
        $currentDate: { "lastModified": true }
      }, function(err, results) {
      //console.log(results);
      callback(results);
   });
};

The console output from the Node application:

[screenshot]

The output on the MongoDB Shell:

[screenshot]

Note: I have used db.top3.find() three times: before running the Node application, after it has run once and after it has run a second time. Note that after the second time, the lastModified property was added.

Second Node Program – Consume messages from Kafka Topic and Update MongoDB accordingly

This application registers as Kafka Consumer on the Topic Top3CountrySizePerContinent. Each message that is produced to that topic is consumed by the Node application and handled by function handleCountryMessage. This function parses the JSON message received from Kafka, adds a property continent derived from the key of the Kafka message, and calls the insertDocument function. This function attempts to update a record in the MongoDB collection top3 that has the same continent property value as the document passed in as parameter. If the update succeeds, the handling of the Kafka message is complete and the MongoDB collection  contains the most recent standings produced by the Kafka Streams application. If the update fails, presumably that happens because there is no record yet for the current continent. In that case, a new document is inserted for the continent.


/*
This program connects to MongoDB (using the mongodb module )
This program consumes Kafka messages from topic Top3CountrySizePerContinent to which the Running Top3 (size of countries by continent) is produced.

This program records each latest update of the top 3 largest countries for a continent in MongoDB. If a document does not yet exist for a continent (based on the key which is the continent property) it is inserted.

The program ensures that the MongoDB /test/top3 collection contains the latest Top 3 for each continent at any point in time.

*/

var MongoClient = require('mongodb').MongoClient;
var assert = require('assert');

var kafka = require('kafka-node')
var Consumer = kafka.Consumer
var client = new kafka.Client("ubuntu:2181/")
var countriesTopic = "Top3CountrySizePerContinent";


// connect string for mongodb server running locally, connecting to a database called test
var url = 'mongodb://127.0.0.1:27017/test';
var mongodb;

MongoClient.connect(url, function(err, db) {
  assert.equal(null, err);
  console.log("Connected correctly to MongoDB server.");
  mongodb = db;
});

var insertDocument = function(db, doc, callback) {
   // first try to update; if a document could be updated, we're done 
   updateTop3ForContinent( db, doc, function (results) {      
       if (!results || results.result.n == 0) {
          // the document was not updated so presumably it does not exist; let's insert it  
          db.collection('top3').insertOne( 
                doc
              , function(err, result) {
                   assert.equal(err, null);
                   console.log("Inserted doc for "+doc.continent);
                   callback();
                }
              );   
       }//if
       else {
         console.log("Updated doc for "+doc.continent);
         callback();
       }
 }); //updateTop3ForContinent
}; //insertDocument

var updateTop3ForContinent = function(db, top3 , callback) {
   db.collection('top3').updateOne(
      { "continent" : top3.continent },
      {
        $set: { "nrs": top3.nrs },
        $currentDate: { "lastModified": true }
      }, function(err, results) {
      //console.log(results);
      callback(results);
   });
};

// Configure Kafka Consumer for Kafka Top3 Topic and handle Kafka message (by calling updateSseClients)
var consumer = new Consumer(
  client,
  [],
  {fromOffset: true}
);

consumer.on('message', function (message) {
  handleCountryMessage(message);
});

consumer.addTopics([
  { topic: countriesTopic, partition: 0, offset: 0}
], () => console.log("topic "+countriesTopic+" added to consumer for listening"));

function handleCountryMessage(countryMessage) {
    var top3 = JSON.parse(countryMessage.value);
    var continent = new Buffer(countryMessage.key).toString('ascii');
    top3.continent = continent;
    // insert or update the top3 in the MongoDB server
    insertDocument(mongodb,top3, function() {
      console.log("Top3 recorded in MongoDB for "+top3.continent);  
    });

}// handleCountryMessage

Running the application produces the following output.

Producing Countries:

[screenshot]

Producing Streaming Analysis – Running Top 3 per Continent:

[screenshot]

Processing Kafka Messages:

[screenshot]

Resulting MongoDB collection:

[screenshot]

And after a little while, here is the latest situation for Europe and Asia in the MongoDB collection :

[screenshot]

Resulting from processing the latest Kafka Stream result messages:

[screenshot]

 

 

Querying the MongoDB Collection

The current set of top3 documents – one for each continent – stored in MongoDB can be queried, using MongoDB find and aggregation facilities.
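For example, a plain find() already returns the latest top 3 document for a single continent:

db.top3.find({ "continent": "Europe" }).pretty()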

One query we can perform is to retrieve the top 5 largest countries in the world. Here is the query that gives us that insight. First it creates a single record per country (using unwind to join the nrs collection in each top3 document). The countries are then sorted by the size of each country (descending) and the first 5 of the sort result are retained. These five are then projected into a nicer looking output document that only contains continent, country and area fields.


db.top3.aggregate([ 
   {$project: {nrs:1}}
  ,{$unwind:'$nrs'}
  , {$sort: {"nrs.size":-1}}
  , {$limit:5}
  , {$project: {continent:'$nrs.continent', country:'$nrs.name', area:'$nrs.size' }}
])

[Screenshot: aggregation query output]

(And because no continent has its number 3 country in the top 4 of this list, we can be sure that this top 5 is the actual top 5 of the world)

 

Resources

A very good read – although a little out of date – is this tutorial on 1st and 2nd steps with Node and Mongodb: http://cwbuecheler.com/web/tutorials/2013/node-express-mongo/ 

MongoDB Driver for Node.js in the official MongoDB documentation: https://docs.mongodb.com/getting-started/node/client/ 

Kafka Connect for MongoDB – YouTube intro – https://www.youtube.com/watch?v=AF9WyW4npwY 

Combining MongoDB and Apache Kafka – with a Java application talking and listening to both: https://www.mongodb.com/blog/post/mongodb-and-data-streaming-implementing-a-mongodb-kafka-consumer 

Tutorials Point MongoDB tutorials – https://www.tutorialspoint.com/mongodb/mongodb_sort_record.htm 

Data Aggregation with Node.JS driver for MongoDB – https://docs.mongodb.com/getting-started/node/aggregation/

The post Node.js application writing to MongoDB – Kafka Streams findings read from Kafka Topic written to MongoDB from Node appeared first on AMIS Oracle and Java Blog.

Why Focused Learning Is Better For Your Career

Complete IT Professional - Mon, 2017-02-20 05:00
In this article, I’ll explain what I mean by “focused learning” and why it’s a good idea for your career. What is Focused Learning? When we want to learn something, we tend to research on it. We look for websites, articles, videos on the topic. Or we enrol in a university course, if it’s career […]
Categories: Development

12cR1 RAC Posts -- 6 :Running the Cluster Verification Utility

Hemant K Chitale - Mon, 2017-02-20 03:45
With a successful RAC Cluster, running the Cluster Verification Utility

[oracle@collabn1 ~]$ cluvfy stage -post crsinst -n collabn1,collabn2

Performing post-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "collabn1"


Checking user equivalence...
User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity using interfaces on subnet "192.168.78.0"
Node connectivity passed for subnet "192.168.78.0" with node(s) collabn2,collabn1
TCP connectivity check passed for subnet "192.168.78.0"


Check: Node connectivity using interfaces on subnet "172.16.100.0"
Node connectivity passed for subnet "172.16.100.0" with node(s) collabn1,collabn2
TCP connectivity check passed for subnet "172.16.100.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.78.0".
Subnet mask consistency check passed for subnet "172.16.100.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "172.16.100.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "172.16.100.0" for multicast communication with multicast group "224.0.0.251" passed.

Check of multicast communication passed.

Checking whether the ASM filter driver is active and consistent on all nodes
ASM filter driver library is not installed on any of the cluster nodes.
ASM filter driver configuration was found consistent across all the cluster nodes.
Time zone consistency check passed

Checking Cluster manager integrity...


Checking CSS daemon...
Oracle Cluster Synchronization Services appear to be online.

Cluster manager integrity check passed


UDev attributes check for OCR locations started...
UDev attributes check passed for OCR locations


UDev attributes check for Voting Disk locations started...
UDev attributes check passed for Voting Disk locations

Default user file creation mask check passed

Checking cluster integrity...


Cluster integrity check passed


Checking OCR integrity...

Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations


Checking daemon liveness...
Liveness check passed for "CRS daemon"

Checking OCR config file "/etc/oracle/ocr.loc"...

OCR config file "/etc/oracle/ocr.loc" check successful


Disk group for ocr location "+OCRVOTE/collabn-cluster/OCRFILE/registry.255.934671619" is available on all the nodes


Checking OCR backup location "/u01/app/12.1.0/grid/cdata/collabn-cluster"

OCR backup location "/u01/app/12.1.0/grid/cdata/collabn-cluster" check passed

NOTE:
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.

OCR integrity check passed

Checking CRS integrity...

Clusterware version consistency passed.

CRS integrity check passed

Checking node application existence...

Checking existence of VIP node application (required)
VIP node application check passed

Checking existence of NETWORK node application (required)
NETWORK node application check passed

Checking existence of ONS node application (optional)
ONS node application check passed


Checking Single Client Access Name (SCAN)...

Checking TCP connectivity to SCAN listeners...
TCP connectivity to SCAN listeners exists on all cluster nodes

Checking name resolution setup for "collabn-cluster-scan.racattack"...

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed


Checking SCAN IP addresses...
Check of SCAN IP addresses passed

Verification of SCAN VIP and listener setup passed

Checking OLR integrity...
Check of existence of OLR configuration file "/etc/oracle/olr.loc" passed
Check of attributes of OLR configuration file "/etc/oracle/olr.loc" passed

WARNING:
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.

OLR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed

User "oracle" is not part of "root" group. Check passed
Oracle Clusterware is installed on all nodes.
CTSS resource check passed
Query of CTSS for time offset passed

CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Check of clock time offsets passed


Oracle Cluster Time Synchronization Services check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Post-check for cluster services setup was successful.
[oracle@collabn1 ~]$
[oracle@collabn1 ~]$ su
Password:
[root@collabn1 oracle]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1688
Available space (kbytes) : 407880
ID : 827167720
Device/File Name : +OCRVOTE
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@collabn1 oracle]#
[root@collabn1 oracle]# ocrcheck -local
Status of Oracle Local Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1028
Available space (kbytes) : 408540
ID : 1014277103
Device/File Name : /u01/app/12.1.0/grid/cdata/collabn1.olr
Device/File integrity check succeeded

Local registry integrity check succeeded

Logical corruption check succeeded

[root@collabn1 oracle]#



[root@collabn2 oracle]# ocrcheck -local
Status of Oracle Local Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 796
Available space (kbytes) : 408772
ID : 1896774486
Device/File Name : /u01/app/12.1.0/grid/cdata/collabn2.olr
Device/File integrity check succeeded

Local registry integrity check succeeded

Logical corruption check succeeded

[root@collabn2 oracle]#


The "cluvfy stage -post crsinst" command as the Grid Infrastructure user checks the status of all components after the installation and configuration of the Cluster.  Note : This does not and cannot check any database (RAC or non-RAC) that is created.

The "ocrcheck" commands by root are to check the integrity of the Cluster Registry and Local Registry.  (You would run "ocrcheck -local" on each node of the Cluster).

.
.
.

Categories: DBA Blogs

Links for 2017-02-19 [del.icio.us]

Categories: DBA Blogs

MobaXterm 10.0

Tim Hall - Mon, 2017-02-20 01:39

See Updates!

MobaXterm 10.0 was released a couple of days ago. As well as the usual bug fixes there are a number of enhancements and a new flatter look and feel.

Downloads and changelog in the usual places.

Happy upgrading!

Cheers

Tim…

Update: McAfee is listing this as containing the “Artemis!10A4D2BC47D8” trojan. I’ve backed out to 9.4 and contacted Mobatek, who tell me it’s a false positive. I’ve spoken to others who say their AV lists it as good. I’m going to try and contact McAfee to get them to sort their listing.

Update 2: My company have filed a potential false-positive with McAfee.

Update 3: Mobatek have pulled this version until they can work out this false positive with McAfee, then reissue it later.

MobaXterm 10.0 was first posted on February 20, 2017 at 8:39 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

RMAN unused block compression against TABLE drop vs TABLE with lob column compression.

Tom Kyte - Sun, 2017-02-19 19:06
Hi Tom, I am just trying to understand RMAN unused block compression against TABLE with LOB column compression vs TABLE drop. Consider LOB data for column 'DATA' in a table 'EMPLOYEES' under SCOTT schema. - lob segment SELECT SEGMENT_NAME FR...
Categories: DBA Blogs

Index column order and Partitioned table

Tom Kyte - Sun, 2017-02-19 19:06
We have a table with a few hundred million rows. We perform INSERT, DELETE and SEARCH operations on this table. Column ID is unique and column CODE has very few distinct values (let's say 100 distinct values). So we partition the table by LIST on the CODE column...
Categories: DBA Blogs

How to speed up slow unicode migration of a table with xmltype columns

XTended Oracle SQL - Sun, 2017-02-19 18:46

Recently I had an issue with a slow Unicode migration of a database upgraded from 10g to 12.1.0.2. The main problem was a table with an XMLTYPE column: we spent about 4 hours on this table (~17GB) during the test migration, though all other tables (~190GB) migrated in just about 20 minutes.
We used DMU (Database Migration Assistant for Unicode), and the root cause of the problem was the update statement generated by DMU for this table:

update  /*+ PARALLEL(A,16)*/ "RRR"."T_XMLDATA" A  set A."SYS_NC00011$" = SYS_OP_CSCONV(A."SYS_NC00011$", 'AL16UTF16')

“SYS_NC00011$” is the internal hidden CLOB column used to store the XMLTYPE. As you can see, DMU added a PARALLEL hint, but although Oracle can use parallel DML for XMLType since 12.1.0.1, we can’t use it here because of its limitations:

Changes in Oracle Database 12c Release 1 (12.1.0.1) for Oracle XML DB

Parallel DML Support for XMLType
Support for parallel DML has been improved for XMLType storage model binary XML using SecureFiles LOBs. The performance and scalability have been improved for both CREATE TABLE AS SELECT and INSERT AS SELECT.

Restrictions on Parallel DML

Parallel DML can be done on tables with LOB columns provided the table is partitioned. However, intra-partition parallelism is not supported.

For non-partitioned tables with LOB columns, parallel INSERT operations are supported provided that the LOB columns are declared as SecureFiles LOBs. Parallel UPDATE, DELETE, and MERGE operations on such tables are not supported.

By the way, Oracle didn’t support parallel DML for XMLType in previous releases either:

No Parallel DML for XMLType – DML operations on XMLType data are always performed in serial. Parallel DML is not supported for XMLType. (Parallel query and DDL are supported for XMLType.)

So I had to use manual parallelization:
1. Monitor the “Convert application tables” step through “View Table Conversion progress” and press the “Stop” button during the conversion of this table.
2. Create a table with the ROWIDs of this table and split them into 16 groups:

create table tmp_rids as 
select rowid rid, ntile(16)over(order by rowid) grp 
from t_xmldata;

3. Execute

ALTER SYSTEM SET EVENTS '22838 TRACE NAME CONTEXT LEVEL 1,FOREVER'; 

to avoid “ORA-22839: Direct updates on SYS_NC columns are disallowed”
4. Start 16 sessions; each of them has to update its own part:

update t_xmldata A 
set A."SYS_NC00011$" = SYS_OP_CSCONV(A."SYS_NC00011$", 'AL16UTF16') 
where rowid in (select rid from tmp_rids where grp=&grp);
commit;
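To avoid starting those 16 sessions by hand, they could also be launched from a small shell loop, for example (the script name and credentials are placeholders; update_grp.sql would contain the update and commit above, with &grp replaced by the &1 positional parameter):

#!/bin/bash
# fire off 16 sqlplus sessions in the background, one per rowid group
for grp in $(seq 1 16); do
  sqlplus -s rrr/password @update_grp.sql "$grp" &
done
wait    # wait for all 16 updates to finish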

5. Disable event 22838:

ALTER SYSTEM SET EVENTS '22838 TRACE NAME CONTEXT OFF'; 

6. Open the “View Table Conversion progress” window, click on this table and change the “Retry” option to “Skip” for the update step.

This simple method made the Unicode migration about 16 times faster.

Categories: Development

Pages

Subscribe to Oracle FAQ aggregator