Feed aggregator

How to run OpenTSDB with Google Bigtable

Pythian Group - Mon, 2016-03-14 12:49

In a previous post (OpenTSDB and Google Cloud Bigtable) we discussed OpenTSDB, an open source distributed database specifically designed for storing timeseries data. We also explained how OpenTSDB relies on Apache HBase for a reliable and scalable data backend. However, deployment and administration of an HBase cluster is not a trivial task, as it requires a full Hadoop setup. This means that it takes a big data engineer (or better, a team of them) to plan the cluster sizing, provision the machines, set up the Hadoop nodes, configure all services and tune them for optimal performance. As if this were not enough, operations teams have to constantly monitor the cluster, deal with hardware and service failures, perform upgrades, back up regularly, and handle a ton of other tasks that make maintenance of a Hadoop cluster and OpenTSDB a challenge for most organizations.

With the release of Google Bigtable as a cloud service and its support for the HBase API, it was obvious that if we managed to integrate OpenTSDB with Google Bigtable, we would enable more teams to have access to the powerful functionality of OpenTSDB by removing the burden of maintaining an HBase cluster.

Nevertheless, integrating OpenTSDB with Bigtable was not as seamless as dropping a few jars into its release directory. This is because the OpenTSDB developers went above and beyond the standard HBase libraries by implementing their very own asynchbase library. Asynchbase is a fully asynchronous, non-blocking, thread-safe, high-performance HBase API. And no one can put it better than the asynchbase developers themselves, who state that ‘This HBase client differs significantly from HBase’s client. Switching to it is not easy as it requires one to rewrite all the code that was interacting with any HBase API.’

This meant that integration with Google Bigtable required OpenTSDB to switch back to the standard HBase API. We saw the value of such an effort here at Pythian and set about developing this solution.

The asyncbigtable library

Today, we are very happy to announce the release of the asyncbigtable library. The asyncbigtable library is a 100% compatible implementation of the great asynchbase library that can be used as a drop-in replacement, enabling OpenTSDB to use Google Bigtable as a storage backend.

Thanks to support from the OpenTSDB team, the asyncbigtable code is hosted in the OpenTSDB GitHub repository.

Challenges

To create asyncbigtable we had to overcome two major challenges. The first was that OpenTSDB assumes that the underlying library (until now asynchbase) performs asynchronous and non-blocking operations, whereas the standard HBase API only supports synchronous and blocking calls. As a workaround, we used the BufferedMutator implementation, which collects all Mutation operations in a buffer and performs them in batches, allowing mutations to complete with extremely low latency.
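
The idea is easy to picture: callers hand off a mutation and return immediately, while the library groups queued mutations and writes each group in one round trip. The sketch below is illustrative Python only (the real asyncbigtable code is Java and builds on the HBase client's BufferedMutator); send_batch stands in for whatever actually writes a batch to the backend.

class BufferedMutatorSketch(object):
    """Toy illustration of the buffer-and-flush idea described above."""

    def __init__(self, send_batch, max_buffered=1000):
        self.send_batch = send_batch      # callable that persists a list of mutations
        self.max_buffered = max_buffered  # flush threshold
        self._buffer = []

    def mutate(self, mutation):
        # The caller does not wait for the write to reach the backend.
        self._buffer.append(mutation)
        if len(self._buffer) >= self.max_buffered:
            self.flush()

    def flush(self):
        # Persist everything queued so far in a single batch.
        if self._buffer:
            self.send_batch(self._buffer)
            self._buffer = []

Reaching the threshold (or an explicit flush) is what turns many small writes into a few large ones, which is where the low latency of individual mutate() calls comes from.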

The second challenge stemmed from the fact that the OpenTSDB project has a very limited set of jar dependencies, which are explicitly defined in Makefiles. Contrary to this spartan approach, the HBase and Bigtable client libraries have a significant number of transitive dependencies. Since adding those dependencies one by one to the OpenTSDB build process would complicate its dependency management, we decided to package all asyncbigtable dependencies in an uber-jar using the Maven assembly plugin. Therefore, building OpenTSDB with asyncbigtable support is now as simple as downloading a single beefy jar.

Build steps

Before you start

Before you build OpenTSDB with Google Bigtable support, you must complete the following required steps:

  1. Create a Google Bigtable cluster (https://cloud.google.com/bigtable/docs/creating-cluster)
  2. Install HBase shell with access to the Google Bigtable cluster (https://cloud.google.com/bigtable/docs/installing-hbase-shell)
  3. Download and install the required tools for compiling OpenTSDB from source (http://opentsdb.net/docs/build/html/installation.html#compiling-from-source)
Build and run OpenTSDB
  1. Clone and build the modified source code from the Pythian github repository:

git clone -b bigtable git@github.com:pythian/opentsdb.git
cd opentsdb
sh build-bigtable.sh

  2. Create OpenTSDB tables

OpenTSDB provides a script that uses the HBase shell to create its tables. To create the tables, run the following command:
env COMPRESSION=NONE HBASE_HOME=/path/to/hbase-1.1.2 \
./src/create_table.sh

  3. Run OpenTSDB

export HBASE_CONF=/path/to/hbase-1.1.2/conf
mkdir -p <tmp_dir>
./build/tsdb tsd --port=4242 --staticroot=build/staticroot \
--cachedir=<tmp_dir>
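
Once the TSD is up, a quick way to confirm that writes reach the Bigtable backend is to push a single test data point through OpenTSDB's HTTP API. Below is a minimal sketch in Python; the metric and tag names are invented, and it assumes the TSD started above is listening on localhost:4242 and that metric auto-creation (tsd.core.auto_create_metrics) is enabled, or that the metric has already been registered with ./tsdb mkmetric.

import json
import time
import urllib.request

# Hypothetical metric and tag, used only for this smoke test.
datapoint = {
    "metric": "test.asyncbigtable.smoke",
    "timestamp": int(time.time()),
    "value": 1,
    "tags": {"host": "localhost"},
}

req = urllib.request.Request(
    "http://localhost:4242/api/put?details",
    data=json.dumps(datapoint).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # With ?details the TSD returns a small JSON summary of accepted/failed points.
    print(resp.status, resp.read().decode())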

Future work

Our work on asyncbigtable certainly does not stop here. We are putting great effort into improving the library so that it reaches the high quality standards of the rest of the OpenTSDB code. Our first priority is to test the library against real-world scenarios and achieve the highest quality. In the future, we plan to benchmark the performance of OpenTSDB with Bigtable and compare it against HBase.

We are also working on building a true asynchronous implementation of the asyncbigtable library by integrating deeper with the Google Bigtable API.

Acknowledgements

We would like to thank the OpenTSDB developers (Benoît Sigoure and Chris Larsen) for their brilliant work in building such great software and for embracing the asyncbigtable library. Their insights and code contributions helped us deal with some serious issues. Also, we would like to thank the Google Cloud Bigtable team because they expressed genuine interest in this project and they were very generous in providing us with cloud infrastructure and excellent support.

Categories: DBA Blogs

#EMd360 … OEM health checks made easy

DBASolved - Mon, 2016-03-14 12:01

Oracle Enterprise Manager 12c is a great tool! Now that 13c is out, it is getting even better. This post, however, is not really about Oracle Enterprise Manager, but rather about a quick and simple health check tool that I’ve put together. With the help of some really cool co-workers (Carlos Sierra and Mauro Pagano), I’ve put together a small diagnostic tool called EMd360.

EMd360 stands for Enterprise Manager d360. The concept behind this tool is just like the other tools that have been released with the 360 concept (edb360 and sqld360): to provide a quick and easy approach to checking an environment. As with edb360 and sqld360, EMd360 is a completely free tool for anyone to use.

So, why is there a need for EMd360? It is quite simple: there are so many things that go into OEM, and you get so much out of OEM, that it is overwhelming. As a consultant, I’ve been asked to review a lot of OEM architectures and the associated performance. A lot of this information is in the OMR, and often I’m using other tools like REPVFY and OMSVFY, plus a handful of scripts. I’ve decided to make my life (and hopefully yours) a bit easier by building EMd360.

The first (base) release of EMd360 is now live on GitHub (https://github.com/dbasolved/EMd360.git). Go and get it! Test it out!

Download

If you are interested in trying out EMd360, you can download it from GitHub.

Instructions

Download EMd360 from GitHub as a zip file
Unzip EMd360-master.zip on the OMR server and navigate to the directory where you unzipped it
Connect to the OMR using SQL*Plus and execute @emd360.sql

Options

The @emd360.sql script takes two variables. You will be prompted for them if they are not passed on the SQL*Plus command line.

Variable 1 – Server name of the Oracle Management Service (without domain names)
Variable 2 – Oracle Management Repository name (database SID)

Example:

$ sqlplus / as sysdba
SQL> @emd360 pebble oemrep

Let me know your thoughts and if there is something you would like to see in it. Every environment is different and there may be something you are looking for that is not provided. Let me know via email or blog comment and I’ll try to get it added in the next release.

Enjoy!!!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs


OGh DBA Day – Call for Papers!

Marco Gralike - Mon, 2016-03-14 11:41
While part of this packed event is already underway (SQL Celebration Day bit), the DBA…

Changing SOA properties via WLST

Marc Kelderman - Mon, 2016-03-14 09:30


Here is a script to change some properties of SOA Suite. These are generic settings such as:
  • soa-infra
    • AuditLevel
    • GlobalTxMaxRetry
    • DisableCompositeSensors
    • DisableSpringSESensors
  • mediator
    • AuditLevel
  • bpel
    • AuditLevel
    • SyncMaxWaitTime
    • Recovery Schedule Config
# WLST (Jython) script: adjust SOA Suite properties via the Domain Runtime MBean server.
# Java classes are imported with Jython syntax so they can be referenced by their short names below.
from java.io import IOException
from java.net import MalformedURLException
import java.util
from javax.management import Attribute, MBeanServerConnection, ObjectName, Query, QueryExp
from javax.management.openmbean import CompositeDataSupport, OpenDataException
from javax.management.remote import JMXConnectorFactory, JMXServiceURL

connect('weblogic', 'Welcome1', 't3://myhost:7001')
domainRuntime()

#
# soa-infra
#
SOAInfraConfigobj = ObjectName('oracle.as.soainfra.config:Location=MS1,name=soa-infra,type=SoaInfraConfig,Application=soa-infra')

# Off, Production and Development
SOAattribute = Attribute('AuditLevel', 'Production')
mbs.setAttribute(SOAInfraConfigobj, SOAattribute)

print '*** soa-infra: set AuditLevel', mbs.getAttribute(SOAInfraConfigobj, 'AuditLevel')

SOAattribute = Attribute('GlobalTxMaxRetry', 0)
mbs.setAttribute(SOAInfraConfigobj, SOAattribute)
print '*** soa-infra: set GlobalTxMaxRetry', mbs.getAttribute(SOAInfraConfigobj, 'GlobalTxMaxRetry')

SOAattribute = Attribute('DisableCompositeSensors', true)
mbs.setAttribute(SOAInfraConfigobj, SOAattribute)
print '*** soa-infra: set DisableCompositeSensors', mbs.getAttribute(SOAInfraConfigobj, 'DisableCompositeSensors')

SOAattribute = Attribute('DisableSpringSESensors', true)
mbs.setAttribute(SOAInfraConfigobj, SOAattribute)
print '*** soa-infra: set DisableSpringSESensors', mbs.getAttribute(SOAInfraConfigobj, 'DisableSpringSESensors')

#
# Mediator
#
SOAInfraConfigobj = ObjectName('oracle.as.soainfra.config:Location=MS1,name=mediator,type=MediatorConfig,Application=soa-infra')

SOAattribute = Attribute('AuditLevel', 'Inherit')
mbs.setAttribute(SOAInfraConfigobj, SOAattribute)
print '*** mediator: set AuditLevel', mbs.getAttribute(SOAInfraConfigobj, 'AuditLevel')

#
# BPEL
#

SOAInfraConfigobj = ObjectName('oracle.as.soainfra.config:Location=MS1,name=bpel,type=BPELConfig,Application=soa-infra')

SOAattribute = Attribute('SyncMaxWaitTime', 120)
mbs.setAttribute(SOAInfraConfigobj, SOAattribute)

print '*** bpel: set SyncMaxWaitTime', mbs.getAttribute(SOAInfraConfigobj, 'SyncMaxWaitTime')

# AuditLevel
#   off: 0
#   inherit: 1
#   minimal: 2
#   production: 3
#   development: 4
#   onerror: 5

SOAattribute = Attribute('AuditLevel', 'production')
mbs.setAttribute(SOAInfraConfigobj, SOAattribute)
print '*** bpel: set AuditLevel', mbs.getAttribute(SOAInfraConfigobj, 'AuditLevel')

#javax.management.ObjectName
SOAInfraConfigobj = ObjectName('oracle.as.soainfra.config:Location=MS1,name=bpel,type=BPELConfig,Application=soa-infra')

#javax.management.openmbean.CompositeDataSupport
rec_config_obj  = mbs.getAttribute(SOAInfraConfigobj, 'RecoveryConfig')

rec_keySet = rec_config_obj.getCompositeType().keySet()
rec_keys = rec_keySet.toArray()
rec_keyitems = [ rec_key for rec_key in rec_keys ]

#javax.management.openmbean.CompositeDataSupport
rec_cluster_obj = rec_config_obj.get('ClusterConfig')
rec_recurrr_obj = rec_config_obj.get('RecurringScheduleConfig')
rec_startup_obj = rec_config_obj.get('StartupScheduleConfig')

#
# StartupScheduleConfig
#
cnt = 0

# java.util.Collections.UnmodifiableSet
keySet = rec_startup_obj.getCompositeType().keySet()

# array
keys = keySet.toArray()

# list
keyitems = [ key for key in keys ]

# array
values = rec_startup_obj.getAll(keyitems)

for key in keys:
  if key == 'maxMessageRaiseSize':
    values[cnt] = 0
    print '*** bpel: set RecurringScheduleConfig:maxMessageRaiseSize ' + key + ' to value ' + str(values[cnt])
  cnt = cnt + 1

#javax.management.openmbean.CompositeDataSupport
new_rec_startup_obj = CompositeDataSupport(rec_startup_obj.getCompositeType(), keyitems, values)

#
# RecurringScheduleConfig
#
cnt = 0

keySet = rec_recurrr_obj.getCompositeType().keySet()
keys = keySet.toArray()
keyitems = [ key for key in keys ]
values = rec_recurrr_obj.getAll(keyitems)

for key in keys:
  if key == 'maxMessageRaiseSize':
    values[cnt] = 0
    print '*** bpel: set RecurringScheduleConfig:maxMessageRaiseSize ' + key + ' to value ' + str(values[cnt])
  if key == 'startWindowTime':
    values[cnt] = "00:00"
    print '*** bpel: set RecurringScheduleConfig:startWindowTime ' + key + ' to value ' + str(values[cnt])
  if key == 'stopWindowTime':
    values[cnt] = "00:00"
    print '*** bpel: set RecurringScheduleConfig:stopWindowTime ' + key + ' to value ' + str(values[cnt])
  cnt = cnt + 1

#javax.management.openmbean.CompositeDataSupport
new_rec_recurrr_obj = CompositeDataSupport(rec_recurrr_obj.getCompositeType(), keyitems, values)

pyMap = { "ClusterConfig":rec_cluster_obj, "RecurringScheduleConfig":new_rec_recurrr_obj, "StartupScheduleConfig":new_rec_startup_obj }
javaMap = java.util.HashMap()
for k in pyMap.keys():
  javaMap[k] = pyMap[k]

new_rec_config_obj = CompositeDataSupport(rec_config_obj.getCompositeType(), javaMap)

#javax.management.Attribute
SOAattribute = Attribute('RecoveryConfig', new_rec_config_obj)

mbs.setAttribute(SOAInfraConfigobj, SOAattribute)
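
Save the block above as a .py file and run it with WLST (wlst.sh, typically found under oracle_common/common/bin in the middleware home), adjusting the credentials, the t3 URL and the Location=MS1 part of the ObjectNames to match your own managed server.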

Bug in Ointment: ORA-600 in Online Datafile Move

Pythian Group - Mon, 2016-03-14 09:02

Instead of using ‘fly in ointment’, I have used ‘Bug in Ointment’ because in this prolonged Australian summer, my backyard is full of bugs (to the sheer delight of my bug-loving son, at the same time causing much anxiety among the rest of us). When your backyard is full of bugs and you get bugs in a database, it’s only natural to customize the idioms.

Oracle 12c has been warming up the hearts of database aficionados in various ways with its features. One of the celebrated features is the online datafile moving and renaming. Lots has been written about it and suffice to say that we don’t need any down time in order to move, rename, or copy the data files anymore. It’s an online operation with zero down time incurring a slight performance overhead.

I was playing with this feature on my test system with Oracle 12.1 on OEL 6, and when moving a datafile in a pluggable database I got this error:

ORA-600 [kpdbGetOperLock-incompatible] from ALTER PLUGGABLE DATABASE .. DATAFILE ALL ONLINE

Well, I tried searching for this error using ORA-600 look up tool, but it didn’t turn up anything and simply informed me:

An Error document for ORA-600 [kpdbgetoperlock-incompatible] is not registered with the tool.

Digging more in My Oracle Support pulled out following associated bug:

Bug 19329654 – ORA-600 [kpdbGetOperLock-incompatible] from ALTER PLUGGABLE DATABASE .. DATAFILE ALL ONLINE (Doc ID 19329654.8)

The good news was that the bug was fixed in the 12.1.0.2.1 (Oct 2014) Database Patch Set Update. And it’s true, after applying this PSU, everything was hunky-dory.

Categories: DBA Blogs

Monitoring Oracle Database with Zabbix

Gerger Consulting - Mon, 2016-03-14 08:14

Attend our free webinar and learn how you can use Zabbix, the open source monitoring solution, to monitor your Oracle Database instances. The webinar is presented by Oracle ACE and Certified Master Ronald Rood.


About the Webinar:

Enterprise IT is moving to the Cloud. With tens, hundreds even thousands of servers in the Cloud, monitoring the uptime, performance and quality of the Cloud infrastructure becomes a challenge that traditional monitoring tools struggle to solve. Enter Zabbix. Zabbix is a low footprint, low impact, open source monitoring tool that provides various notification types and integrates easily with your ticketing system. During the webinar, we'll cover the following topics:

  • Installation and configuration of Zabbix in the Cloud
  • Monitoring Oracle databases using Zabbix
  • How to use Zabbix templates to increase the quality and efficiency of your monitoring setup
  • How to setup Zabbix for large and remote networks
  • How to trigger events in Zabbix
  • Graphing with Zabbix
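
A common way to wire Zabbix to an Oracle database (one of many, and not necessarily the approach the webinar will show) is a UserParameter in zabbix_agentd.conf that calls a small script returning a single value. Here is a rough sketch using the cx_Oracle driver; the connect string, query and file path are purely placeholders.

import cx_Oracle

# Hypothetical item script for a Zabbix UserParameter: print one number and exit.
# The monitoring account needs SELECT privilege on v$session.
def session_count(connect_string="zabbix/secret@dbhost:1521/ORCL"):
    conn = cx_Oracle.connect(connect_string)
    try:
        cur = conn.cursor()
        cur.execute("select count(*) from v$session")
        return cur.fetchone()[0]
    finally:
        conn.close()

if __name__ == "__main__":
    print(session_count())

On the agent side this could be referenced by a line such as UserParameter=oracle.sessions,/usr/bin/python3 /etc/zabbix/scripts/oracle_sessions.py (key name and path are, again, just examples).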
Categories: Development

    ORDS and PL/SQL

    Kris Rice - Mon, 2016-03-14 07:56
    Seems I've never posted about PL/SQL-based REST endpoints other than using the OWA toolkit. Doing the htp.p calls manually gives you control over every aspect of the results; however, there is an easier way. With PL/SQL-based source types, the ins and outs can be used directly without any additional programming. Here's a simple example of an anonymous block doing about as little as possible but

    Oracle Mobile Cloud Service Update (v1.2): New Features and Enhancements

    Oracle Mobile Cloud Service (MCS) provides the services you need to develop a comprehensive strategy for mobile app development and delivery. It provides everything you need to establish an...

    We share our skills to maximize your revenue!
    Categories: DBA Blogs

    KeePass 2.32

    Tim Hall - Mon, 2016-03-14 06:33

    KeePass 2.32 has been released. You can download it from here.

    You can read about how I use KeePass and KeePassX2 on my Mac, Windows and Android devices here.

    Cheers

    Tim…


    PeopleSoft on the Oracle Cloud – what does it mean?

    Duncan Davies - Mon, 2016-03-14 06:00

    There have been a few announcements over the last couple of weeks about the Oracle Public Cloud. But what does it actually mean for the PeopleSoft community?

    What is Oracle Public Cloud?

    The Oracle Public Cloud is Oracle’s competitor to the Infrastructure as a Service (IaaS) providers that have swiftly risen to create a whole industry that didn’t exist 10 years ago. Because Amazon is the market leader (by far), everyone automatically thinks of them first; however, Microsoft Azure, Google Compute and Rackspace are also players in the market.

    As PeopleSoft adopts more SaaS-like features (new UI, incremental updates etc) companies have started to move their infrastructure from their own data-centres to the cloud. For many companies this makes good business sense, however rather than have customers going to a 3rd party provider Oracle would rather provide the cloud service themselves. Obviously this is better for Oracle, however the customer benefits too (retaining a single vendor, and Oracle can potentially optimise their applications for their own cloud better than they can for Cloud infrastructure belonging to other vendors). There may also be cost savings for the customer, however I haven’t looked at pricing yet.

    Doesn’t Oracle already do Hosting?

    Yes, Oracle has long had a service that will host infrastructure on your behalf – Oracle On Demand. This is more of an older-style ASP (Application Service Provider). You’re more likely to be on physical hardware without much in the way of flexibility/scalability and tied into a long-term hosting contract, so the Oracle Public Cloud is a major step forwards in a number of ways.

    How will Oracle Public Cloud be better?

    I attended a couple of workshops on this last week and it looks very promising. It has all the attributes required for it to be properly classed as ‘Cloud’:

    • subscription pricing,
    • elasticity of resources (so you can scale instances according to demand),
    • resilience of data centres (so, if you’re based in the UK you might be looking at the Slough data centre, however there are two ‘availability zones’ within Slough so if one gets hit by an outage you’ll still be able to connect to the other one)

    Interestingly, it also includes several ‘Database as a Service’ offerings, each offering increasing levels of performance. With this model you don’t need to worry about the virtual machine, operating system etc. that your database runs on; you receive access to a database and leave the maintenance to others. You would still need to have your other tiers on the IaaS offerings.

    This opens up the possibility of multiple tiers of Cloud service:

    1. Just the Infrastructure (client does all the database and application admin)
    2. DBaaS (client has other tiers on IaaS, but does not do DB admin)
    3. Full Cloud solution (uses Oracle Cloud and a partner to do all administration)
    How can I best take advantage?

    The best time to move is probably at the same time as an upgrade. Upgrades normally come with a change in some of the hardware (due to the supported platforms changing) so moving to the cloud allows the hardware to change without any up-front costs.

    PeopleSoft 9.2 and the more recent PeopleTools versions have a lot of features that were built for the Cloud, so by running it on-premises you’re not realising the full capabilities of your investment.

    We’d recommend you try using the Cloud for your Dev and Test instances first, before leaping in with Production at a later date. Oracle have tools to help you migrate on-premises instances to their Cloud. (At this point – Mar 2016 – we have not tested these tools.)

    What will the challenges be?

    The first challenge is “how do I try it?”. This is pretty straightforward, in that you can get a partner to demonstrate it to you, or get yourself an Oracle Public Cloud account and then provision a PeopleSoft instance using one of the PUM images as a demo. This would work fine for looking at new functionality, or as a conference room pilot.

    One of the biggest challenges is likely to be security – not the security of Oracle’s cloud, but securing your PeopleSoft instances which previously might have been only available within your corporate LAN. If you need assistance with this speak to a partner with experience using Oracle Public Cloud.


    Oracle Midlands : Event #14

    Tim Hall - Mon, 2016-03-14 05:33

    Tomorrow is Oracle Midlands Event #14.


    Please show your support and come along. It’s free thanks to the sponsorship by RedStackTech.

    Cheers

    Tim…


    ASO Slice Clears – How Many Members?

    Rittman Mead Consulting - Mon, 2016-03-14 05:00

    Essbase developers have had the ability to (comparatively) easily clear portions of our ASO cubes since version 11.1.1, getting away from fiddly methods involving manually contra-ing existing data via reports and rules files, making incremental loads substantially easier.

    Along with the official documentation in the TechRef and DBAG, there are a number of excellent posts already out there that explain this process and how to effect “slice clears” in detail (here and here are just two I’ve come across that I think are clear and helpful). However, I had a requirement recently where the incremental load was a bit more complex than this. I am sure people must have fulfilled the same or a very similar requirement, but I could not find any documentation or articles relating to it, so I thought it might be worth recording.

    For the most part, the requirements I’ve had in this area have been relatively straightforward—(mostly) financial systems where the volatile/incremental slice is typically a month’s worth (or quarter’s worth) of data. The load script will follow this sort of sequence:

    • [prepare source data, if required]
    • Perform a logical clear
    • Load data to buffer(s)
    • Load buffer(s) to new database slice(s)
    • [Merge slices]

    With the last stage being run here if processing time allows (this operation precludes access to the cube) or in a separate routine “out of hours” if not.

    The “logical clear” element of the script will comprise a line like (note: the lack of a “clear mode” argument means a logical clear; only a physical clear needs to be specified explicitly):

    alter database 'Appname'.'DBName' clear data in region '{[Jan16]}'

    or more probably

    alter database 'Appname'.'DBName' clear data in region '{[&CurrMonth]}'

    i.e., using a variable to get away from actually hard coding the member values to clear. For separate year/period dimensions, the slice would need to be referenced with a CrossJoin:

    alter database 'Appname'.'DBName' clear data in region 'CrossJoin({[Jan]},{[FY16]})'
    alter database '${Appname}'.'${DBName}' clear data in region 'CrossJoin({[&CurrMonth]},{[&CurrYear]})'

    which would, of course, fully nullify all data in that slice prior to the load. Most load scripts will already be formatted so that variables would be used to represent the current period that will potentially be used to scope the source data (or in a BSO context, provide a FIX for post-load calculations), so using the same to control the clear is an easy addition.

    Taking this forward a step, I’ve had other systems whereby the load could comprise any number of (monthly) periods from the current year. A little bit more fiddly, but achievable: as part of the prepare source data stage above, it is relatively straightforward to run a select distinct period query on the source data, spool the results to a file, and then use this file to construct that portion of the clear command (or, for a relatively small number, prepare a sequence of clear commands).
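
    To make that concrete, here is a rough sketch of the “build the clear from a spooled list” step. It is Python purely for illustration (the original would more likely be shell or MaxL scripting), and the file name, application, database and dimension names are all invented:

    # Hypothetical helper: read the spooled list of periods present in the source
    # data (one member per line) and emit the corresponding logical-clear MaxL.
    def build_clear_statement(spool_file="load_periods.txt",
                              app="Appname", db="DBName", year="&CurrYear"):
        with open(spool_file) as f:
            periods = [line.strip() for line in f if line.strip()]
        period_set = ",".join("[%s]" % p for p in periods)
        return ("alter database '%s'.'%s' clear data in region "
                "'CrossJoin({%s},{[%s]})';" % (app, db, period_set, year))

    # e.g. a spool file containing Jan, Feb and Mar would produce:
    # alter database 'Appname'.'DBName' clear data in region 'CrossJoin({[Jan],[Feb],[Mar]},{[&CurrYear]})';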

    The requirement I had recently falls into the latter category in that the volatile dimension (where “Period” would be the volatile dimension in the examples above) was a “product” dimension of sorts, and contained a lot of changed values each load. Several thousand, in fact. Far too many to loop around and build a single command, and far too many to run as individual commands—whilst on test, the “clears” themselves ran satisfyingly quickly, it obviously generated an undesirably large number of slices.

    So the problem was this: how to identify and clear data associated with several thousand members of a volatile dimension, the values of which could change totally from load to load.

    In short, the answer I arrived at is with a UDA.

    The TechRef does not explicitly say or give examples, but because the Uda function can be used within a CrossJoin reference, it can be used to effect a clear: assume the Product dimension had an UDA of CLEAR against certain members…

    alter database 'Appname'.'DBName' clear data in region 'CrossJoin({Uda([Product], "CLEAR")})'

    …would then clear all data for all of those members. If data for, say, just the ACTUAL scenario is to be cleared, this can be added to the CrossJoin:

    alter database 'Appname'.'DBName' clear data in region 'CrossJoin({Uda([Product], "CLEAR")}, {[ACTUAL]})'

    But we first need to set this UDA in order to take advantage of it. In the load script steps above, the first step is prepare source data, if required. At this point, a SQLplus call was inserted to a new procedure that

    1. examines the source load table for distinct occurrences of the “volatile” dimension
    2. populates a table (after initially truncating it) with a list of these members (and parents), and a third column containing the text “CLEAR”:

    [screenshot: staging table with member, parent and CLEAR columns]
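
    As a sketch only (the post drives this from a SQL*Plus call to a stored procedure, and every object name below is invented), the same staging step could look like this in Python with cx_Oracle:

    import cx_Oracle

    def refresh_uda_staging(conn):
        # Truncate the staging table, then repopulate it with the distinct
        # volatile-dimension members found in the source load table, tagging
        # each with the text 'CLEAR' and leaving the clear-out column empty.
        cur = conn.cursor()
        cur.execute("truncate table stg_uda_clear")
        cur.execute("""
            insert into stg_uda_clear (member_name, parent_name, uda_value, uda_clear)
            select distinct product, product_parent, 'CLEAR', null
            from   stg_aso_load""")
        conn.commit()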

    A “rules” file then needs to be built to load the attribute. Because the outline has already been maintained, this is simply a case of loading the UDA itself:

    [screenshot: rules file loading the UDA]

    In the “Essbase Client” portion of the load script, prior to running the “clear” command, the temporary UDA table needs to be loaded using the rules file to populate the UDA for those members of the volatile dimension to be cleared:

    import database 'AppName'.'DBName' dimensions connect as 'SQLUsername' identified by 'SQLPassword' using server rules_file 'PrSetUDA' on error write to 'LogPath/ASOCurrDataLoad_SetAttr.err';

    [screenshot]

    With the relevant slices cleared, the load can proceed as normal.

    After the actual data load has run, the UDA settings need to be cleared. Note that the prepared table above also contains an empty column, UDACLEAR. A second rules file, PrClrUDA, was prepared that loads this (4th) column as the UDA value—loading a blank value to a UDA has the same effect as clearing it.

    The broad steps of the load script therefore become these:

    • [prepare source data, if required]
    • ascertain members of volatile dimension to clear from load source
    • update table containing current load members / CLEAR attribute
    • Load CLEAR attribute table
    • Perform a logical clear
    • Load data to buffers
    • Load buffer(s) to new database slice(s)
    • [Merge slices]
    • Remove CLEAR attributes

    So not without limitations—if the data was volatile over two dimensions (e.g., Product A for Period 1, Product B for Period 2, etc.) the approach would not work (at least, not exactly as described, although in this instance you could possibly iterate around the smaller Period dimension)—but overall, I think it’s a reasonable and flexible solution.

    Clear / Load Order

    While not strictly part of this solution, another little wrinkle to bear in mind here is the resource taken up by the logical clear. When initializing the buffer prior to loading data into it, you have the ability to determine how much of the total available resource is used for that particular buffer—from a total of 1.0, you can allocate (e.g.) 0.25 to each of 4 buffers that can then be used for a parallel load operation, each loaded buffer subsequently writing to a new database slice. Importing a loaded buffer to the database then clears the “share” of the utilization afforded to that buffer.

    Although not a “buffer initialization” activity per se, a (slice-generating) logical clear seems to occupy all of this resource—if you have any uncommitted buffers created, even with the lowest possible resource utilization of 0.01 assigned, the logical clear will fail:

    [screenshot: error message]

    The Essbase Technical Reference states at “Loading Data Using Buffers“:

    While the data load buffer exists in memory, you cannot build aggregations or merge slices, as these operations are resource-intensive.

    It could perhaps be argued that as we are creating a “clear slice,” not merging slices (nor building an aggregation), that the logical clear falls outside of this definition, but a similar restriction certainly appears to apply here too.

    This is significant as, arguably, the ideally optimum incremental load would be along the lines of

    • Initialize buffer(s)
    • Load buffer(s) with data
    • Effect partial logical clear (to new database slice)
    • Load buffers to new database slices
    • Merge slices into database

    As this would both minimize the time that the cube was inaccessible (during the merge), and also not present the cube with zeroes in the current load area. However, as noted above, this does not seem to be possible—there does not seem to be a way to change the resource usage (RNUM) of the “clear,” meaning that this sequence has to be followed:

    • Effect partial logical clear (to new database slice)
    • Initialize buffer(s)
    • Load buffer(s) with data
    • Load buffers to new database slices
    • Merge slices into database

    I.e., the ‘clear’ has to be fully effected before the initialization of the buffers. This works as you would expect, but there is a brief period—after the completion of the “clear” but before the load buffer(s) have been committed to new slices—where the cube is accessible and the load slice will show as “0” in the cube.

    The post ASO Slice Clears – How Many Members? appeared first on Rittman Mead Consulting.

    Categories: BI & Warehousing

    MORE Content to Ready You for Oracle Cloud Applications R11

    Linda Fishman Hoyle - Sun, 2016-03-13 20:39

    A Guest Post by Senior Director Louvaine Thomson (pictured left), Product Management, Oracle Cloud Applications

    The previous announcement of Release 11 preview material included:

    • Spotlight Videos: Hosted by senior development staff, these webcast-delivered presentations highlight top level messages and product themes, and are reinforced with a product demo.
    • Release Content Documents (RCDs): This content includes a summary level description of each new feature and product.

    We are now pleased to announce the next and final wave of readiness content. Specifically, the following content types are now available on the Release 11 Readiness page.

    • What's New: Learn about what's new in the upcoming release by reviewing expanded discussions of each new feature and product, including capability overviews, business benefits, setup considerations, usage tips, and more.

    • Release Training: Created by product management, these self-paced, interactive training sessions are deep dives into key new enhancements and products. Also referred to as Transfers of Information (TOIs).

    • Product Documentation: Oracle's online documentation includes detailed product guides and training tutorials to ensure your successful implementation and use of the Oracle Applications Cloud.


    Access is simple: from the Cloud Site, click on Support > Release Readiness.


    Rename all exported files to their original names after exporting from Oracle database using Oracle SQL Developer’s Shopping Cart

    Ittichai Chammavanijakul - Sun, 2016-03-13 15:08

    If you’re searching for “export Oracle BLOB”, the article by Jeff Smith titled “Exporting Multiple BLOBs with Oracle SQL Developer” is usually at the top of the search results. SQL Developer’s Shopping Cart feature lets you export BLOBs out of the database without writing scripts. I don’t want to go into detail as Jeff already explained well in his post what it is and how to use it. One main issue with this approach is that sometimes you want the actual file names instead of the exported names. This can be overcome easily using a post-run script. I wrote this simple script in Python as it suits name manipulation well. (I’m not a Python expert, but it is one of the programming languages that is very easy to learn.)

    The script simply reads the FND_LOBS_DATA_TABLE.ldr file, which contains the original filename and the new exported filename (in the format FND_LOBS_DATA_TABLExxxxx) for each file.

    # Sample data
     1889399|"CF.xlsx"|"application/octet-stream"|FND_LOBS_DATA_TABLE694b44cc-0150-1000-800d-0a03f42223fd.ldr|2014-05-20 12:11:41||"FNDATTCH"||"US"|"WE8MSWIN1252"|"binary"|{EOL} 1889403|"PriceList_quotation (20 May 2014) cust.xls"|"application/vnd.ms-excel"|FND_LOBS_DATA_TABLE694b4587-0150-1000-800e-0a03f42223fd.ldr|2014-05-20 12:18:02||"FNDATTCH"||"US"|"WE8MSWIN1252"|"binary"|{EOL} 1889807|"MS GROUP NORTH AMERICA INC1.pdf"|"application/pdf"|FND_LOBS_DATA_TABLE694b4613-0150-1000-800f-0a03f42223fd.ldr|||||"US"|"AL32UTF8"|"binary"|{EOL}

    # 1st = File ID (Media ID)
    # 2nd = Original File Name
    # 4th = Exported File Name
    # The remaining information is not relevant.

    The script separates the information, which is stored in a single line, into multiple records by splitting on the string {EOL}. It then splits each record into columns by position; the information we’re interested in is in the 1st, 2nd and 4th positions. Finally, it asks the operating system to rename each file.

    The content of the script rename.py as follows:

    
    
    from sys import argv
    import os

    # Script to rename exported BLOB files from the Oracle SQL Developer tool
    #
    # Pre-requisite: Python 3.x https://www.python.org/downloads/
    #
    # Execution:
    # (1) Copy the script to the folder containing the mapping file "FND_LOBS_DATA_TABLE.ldr" and all exported files.
    # (2) Execute the script as follows
    #      C:\> cd deploy
    #      C:\> rename.py FND_LOBS_DATA_TABLE.ldr

    # Take parameters
    script, filename = argv

    # Open file in read-only mode
    file = open(filename, 'r', encoding="utf8")

    # Sample data - everything is stored in one line.
    # 1889399|"EPR - CF.xlsx"|"application/octet-stream"|FND_LOBS_DATA_TABLE694b44cc-0150-1000-800d-0a03f42223fd.ldr|2014-05-20 12:11:41||"FNDATTCH"||"US"|"WE8MSWIN1252"|"binary"|{EOL} 1889403|"PriceList_quotation_murata (20 May 2014) cust.xls"|"application/vnd.ms-excel"|FND_LOBS_DATA_TABLE694b4587-0150-1000-800e-0a03f42223fd.ldr|2014-05-20 12:18:02||"FNDATTCH"||"US"|"WE8MSWIN1252"|"binary"|{EOL} 1889807|"MGS GROUP NORTH AMERICA INC1.pdf"|"application/pdf"|FND_LOBS_DATA_TABLE694b4613-0150-1000-800f-0a03f42223fd.ldr|||||"US"|"AL32UTF8"|"binary"|{EOL}
    # 1st = File ID (Media ID)
    # 2nd = Actual/Original File Name
    # 3rd = File Type
    # 4th = Exported File Name
    # The remaining = Not relevant

    # First, split the single line into records on the string {EOL}
    splitted_line = file.read().split('{EOL}')

    # For each record, split into fields separated by |
    for s in splitted_line:
        splitted_word = s.split('|')

        # The last record contains only [''], so exit the loop there.
        if len(splitted_word) == 1:
            break

        # The original file name is the 2nd field (list position #1).
        # Strip out double quotes and leading & trailing spaces, if any.
        orig_name = splitted_word[1].strip('"').strip()

        # The exported file name is the 4th field (list position #3).
        exported_name = splitted_word[3].strip()

        # Prefix each file with its unique FILE_ID (1st field) to avoid
        # name collisions if two or more files share the same name.
        file_id = splitted_word[0].strip()

        # Rename the file; adjust the new file name according to your needs.
        os.rename(exported_name, file_id + '_' + orig_name)
    
    

    After unzipping the deploy.zip, which is the default exported file from SQL Developer, copy the rename.py into this unzipped folder.

    C:\> cd deploy
    C:\> dir
    02/23/2016 07:57 PM 2,347 rename.py
    02/23/2016 07:57 PM 34,553 export.sql
    02/23/2016 07:52 PM 1,817 FND_LOBS.sql
    02/23/2016 07:57 PM 276 FND_LOBS_CTX.sql
    02/23/2016 07:57 PM 614 FND_LOBS_DATA_TABLE.ctl
    02/23/2016 07:52 PM 88,193 FND_LOBS_DATA_TABLE.ldr
    02/23/2016 07:57 PM 78,178 FND_LOBS_DATA_TABLE10fa4165-0153-1000-8001-0a2a783f1605.ldr
    02/23/2016 07:57 PM 27,498 FND_LOBS_DATA_TABLE10fa4339-0153-1000-8002-0a2a783f1605.ldr
    02/23/2016 07:57 PM 17,363 FND_LOBS_DATA_TABLE10fa43c5-0153-1000-8003-0a2a783f1605.ldr
    02/23/2016 07:57 PM 173,568 FND_LOBS_DATA_TABLE10ff189d-0153-1000-8219-0a2a783f1605.ldr
    :
    :
    
    
    C:\> rename.py FND_LOBS_DATA_TABLE.ldr
    
    
    C:\> dir
    02/23/2016 07:57 PM 2,347 rename.py
    02/23/2016 07:57 PM 34,553 export.sql
    02/23/2016 07:52 PM 1,817 FND_LOBS.sql
    02/23/2016 07:57 PM 276 FND_LOBS_CTX.sql
    02/23/2016 07:57 PM 614 FND_LOBS_DATA_TABLE.ctl
    02/23/2016 07:52 PM 88,193 FND_LOBS_DATA_TABLE.ldr
    02/23/2016 07:57 PM 78,178 689427_DATACOM SOUTH ISLAND LTD.htm
    02/23/2016 07:57 PM 27,498 698623_lincraft.htm
    02/23/2016 07:57 PM 17,363 772140_275131.htm
    02/23/2016 07:57 PM 173,568 3685533_RE 新办公室地址.MSG
    :
    :
    
    
    Categories: DBA Blogs

    Compression -- 3 : Index (Key) Compression

    Hemant K Chitale - Sun, 2016-03-13 04:34
    Unlike Table Compression that uses deduplication of column values, Index Compression is based on the keys.  Key Compression is also called Prefix Compression.

    This relies on repeated leading key values being eliminated. Thus, for example, if the leading column of a composite index has frequently repeated values, then, because an index is always an organised (sorted) structure, the repeated values appear "sequentially", and Key Compression can eliminate them.

    Thus, it becomes obvious that Index Key Compression is usable for
    a.  A Composite Index of 2 or more columns
    b.  Repeated appearances of values in the *leading* key columns
    c.  Compression defined for a maximum of n-1 columns  (where n is the number of columns in the index).  That is, the last column cannot be compressed.
    Note that a Non-Unique Index automatically has the ROWID appended to it, so Key Compression can be applied to all the columns defined.

    Let's look at a few examples.

    Starting with creating a fairly large table (that is a multiplied copy of DBA_OBJECTS)

    PDB1@ORCL> create table target_data as select * from source_data where 1=2;

    Table created.

    PDB1@ORCL> insert /*+ APPEND */ into target_data select * from source_data;

    364496 rows created.

    PDB1@ORCL> commit;

    Commit complete.

    PDB1@ORCL> insert /*+ APPEND */ into target_data select * from source_data;

    364496 rows created.

    PDB1@ORCL> commit;

    Commit complete.

    PDB1@ORCL> insert /*+ APPEND */ into target_data select * from source_data;

    364496 rows created.

    PDB1@ORCL> commit;

    Commit complete.

    PDB1@ORCL>
    PDB1@ORCL> desc target_data
    Name Null? Type
    ----------------------------------------- -------- ----------------------------
    OWNER VARCHAR2(128)
    OBJECT_NAME VARCHAR2(128)
    SUBOBJECT_NAME VARCHAR2(128)
    OBJECT_ID NUMBER
    DATA_OBJECT_ID NUMBER
    OBJECT_TYPE VARCHAR2(23)
    CREATED DATE
    LAST_DDL_TIME DATE
    TIMESTAMP VARCHAR2(19)
    STATUS VARCHAR2(7)
    TEMPORARY VARCHAR2(1)
    GENERATED VARCHAR2(1)
    SECONDARY VARCHAR2(1)
    NAMESPACE NUMBER
    EDITION_NAME VARCHAR2(128)
    SHARING VARCHAR2(13)
    EDITIONABLE VARCHAR2(1)
    ORACLE_MAINTAINED VARCHAR2(1)

    PDB1@ORCL>


    What composite index is a good candidate for Key Compression ?
    *Not* an Index that begins with OBJECT_ID as that is a Unique value.

    Let's compare two indexes (compressed and non-compressed) on (OWNER, OBJECT_TYPE, OBJECT_NAME).

    PDB1@ORCL> create index target_data_ndx_1_comp on
    2 target_data (owner, object_type, object_name) compress 2;

    Index created.

    PDB1@ORCL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_1_COMP');

    PL/SQL procedure successfully completed.

    PDB1@ORCL> select leaf_blocks
    2 from user_indexes
    3 where index_name = 'TARGET_DATA_NDX_1_COMP'
    4 /

    LEAF_BLOCKS
    -----------
    5629

    PDB1@ORCL>


    PDB1@ORCL> drop index target_data_ndx_1_comp
    2 /

    Index dropped.

    PDB1@ORCL> create index target_data_ndx_2_nocomp on
    2 target_data (owner, object_type, object_name) ;

    Index created.

    PDB1@ORCL> exec dbms_stats.gather_index_stats('','TARGET_DATA_NDX_2_NOCOMP');

    PL/SQL procedure successfully completed.

    PDB1@ORCL> select leaf_blocks
    2 from user_indexes
    3 where index_name = 'TARGET_DATA_NDX_2_NOCOMP'
    4 /

    LEAF_BLOCKS
    -----------
    7608

    PDB1@ORCL>


    Note the "compress 2" specification for the first index.  That is an instruction to compress based on the leading 2 columns.
    Thus, the compressed index is 5,629 blocks but the normal, non-compressed index is 7,608 blocks.  We make a gain of 26% in the index size.

    Why did I choose OWNER, OBJECT_TYPE as the leading columns ?  Because I expected a high level of repetition on these column names.


    Note : I have not explored Advanced Index Compression available in 12.1.0.2
    Advanced Index Compression tested in 12.1.0.2

    Categories: DBA Blogs

    UKOUG Application Server & Middleware SIG – Summary

    Tim Hall - Sat, 2016-03-12 08:08

    On Thursday I did a presentation at the UKOUG Application Server & Middleware SIG.

    As I mentioned in my previous post, I was not able to stay for the whole day. I arrived about 30 minutes before my session was scheduled to start. The previous session finished about 10 minutes early and the speaker following me cancelled, so my 45 minute session extended to about 70 minutes. :)

     

    There had already been speakers focussing on Oracle Cloud and Amazon Web Services (AWS), so I did a live demo of Azure, which included building an Oracle Linux VM and doing an install of WebLogic and ADF. There was also a more general presentation about running Oracle products on the cloud. I’m not a WebLogic or cloud specialist, so this presentation is based on me talking about my experiences of those two areas. Peter Berry from Clckwrk and Paul Bainbridge from Fujitsu corrected me on a couple of things, which was cool.

    After my session I hung around for a quick chat, but I had to rush back to work to do an upgrade, which went OK. :)

    Thanks to the organisers for inviting me and thanks to everyone that came along. It would have been good to see the other presentations, but unfortunately that was not possible for me this time!

    Cheers

    Tim…

    PS. Simon, the preinstall packages were installed in the Oracle Linux templates. :)

    # rpm -qa | grep preinstall
    oracle-rdbms-server-12cR1-preinstall-1.0-8.el6.x86_64
    oracle-rdbms-server-11gR2-preinstall-1.0-7.el6.x86_64
    #

    WINDOW NOSORT STOPKEY + RANK()

    XTended Oracle SQL - Fri, 2016-03-11 18:23

    Recently I found that WINDOW NOSORT STOPKEY with RANK()OVER() works very inefficiently: http://www.freelists.org/post/oracle-l/RANKWINDOW-NOSORT-STOPKEY-stopkey-doesnt-work
    The root cause of this behaviour is that Oracle optimizes WINDOW NOSORT STOPKEY with RANK the same way as with DENSE_RANK:


    create table test(n not null) as 
      with gen as (select level n from dual connect by level<=100)
      select g2.n as n
      from gen g1, gen g2
      where g1.n<=10
    /
    create index ix_test on test(n)
    /
    exec dbms_stats.gather_table_stats('','TEST');
    select/*+ gather_plan_statistics */ n
    from (select rank()over(order by n) rnk
                ,n
          from test)
    where rnk<=3
    /
    select * from table(dbms_xplan.display_cursor('','','allstats last'));
    drop table test purge;
    

    Output
             N
    ----------
             1
             1
             1
             1
             1
             1
             1
             1
             1
             1
    
    10 rows selected.
    
    PLAN_TABLE_OUTPUT
    -----------------------------------------------------------------------------------------------------------------------
    SQL_ID  8tbq95dpw0gw7, child number 0
    -------------------------------------
    select/*+ gather_plan_statistics */ n from (select rank()over(order by
    n) rnk             ,n       from test) where rnk<=3
    
    Plan hash value: 1892911073
    
    -----------------------------------------------------------------------------------------------------------------------
    | Id  | Operation              | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
    -----------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT       |         |      1 |        |     10 |00:00:00.01 |       3 |       |       |          |
    |*  1 |  VIEW                  |         |      1 |   1000 |     10 |00:00:00.01 |       3 |       |       |          |
    |*  2 |   WINDOW NOSORT STOPKEY|         |      1 |   1000 |     30 |00:00:00.01 |       3 | 73728 | 73728 |          |
    |   3 |    INDEX FULL SCAN     | IX_TEST |      1 |   1000 |     31 |00:00:00.01 |       3 |       |       |          |
    -----------------------------------------------------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
    
       1 - filter("RNK"<=3)
       2 - filter(RANK() OVER ( ORDER BY "N")<=3)
    


    As you can see, A-Rows in plan step 2 = 30 – i.e., the number of rows where DENSE_RANK<=3, but not where RANK<=3.

    It would be more efficient to stop after the first 10 rows, because the 11th row already has a RANK greater than 3!
    But we can create our own STOPKEY version with PL/SQL:

    PLSQL STOPKEY version
    create or replace type rowids_table is table of varchar2(18);
    /
    create or replace function get_rowids_by_rank(
          n          int
         ,max_rank   int
       ) 
       return rowids_table pipelined
    as
    begin
       for r in (
          select/*+ index_rs_asc(t (n))  */ rowidtochar(rowid) chr_rowid, rank()over(order by n) rnk
          from test t
          where t.n > get_rowids_by_rank.n
          order by n
       )
       loop
          if r.rnk <= max_rank then
             pipe row (r.chr_rowid);
          else
             exit;
          end if;
       end loop;
       return;
    end;
    /
    select/*+ leading(r t) use_nl(t) */
       t.*
    from table(get_rowids_by_rank(1, 3)) r
        ,test t
    where t.rowid = chartorowid(r.column_value)
    /
    

    In that case the fetch from the table will stop as soon as rnk becomes larger than max_rank.

    Categories: Development
