
Feed aggregator

MAF 2.1 - Debugger Improvements and Mobile REST Client

Andrejus Baranovski - Sat, 2015-01-31 11:43
I was blogging previously about Oracle Mobile Suite and REST service transformation from ADF BC SOAP service. Today I'm going to blog the next step in the same series - MAF client consuming REST service exposed from Oracle Mobile Suite ESB. I'm going to highlight improvements in debugging process for MAF, provided with the latest 2.1 release. I had couple of challenges implementing and mapping programmatic MAF client for REST service, all these were solved and I would like to share the solution with you.

Previous posts on the same topic:

- Oracle Mobile Suite Service Bus REST and ADF BC SOAP

- How To Add New Operation in Oracle Mobile Suite Service Bus REST Service

Here you can download the sample application implemented for today's post (it contains the ADF BC SOAP, Mobile Suite Service Bus and MAF client apps) - MobileServiceBusApp_v3.zip.

High-level view of the implemented use case - there are three parts. Service Bus acts as a proxy that transforms the ADF BC SOAP service into a lightweight REST service consumed by the mobile REST client (implemented in MAF):


I would not recommend using the auto-generated Data Control for the REST service - I didn't have a positive experience with it. You should invoke the REST service from Java, using MAF helper classes (read more here). There must be a REST connection defined and tested; later it can be accessed and used directly from Java. The REST connection endpoint must include the REST service URL (in my case, it points by default to the getEmployeesListAll operation):


It is enough to define the REST connection; now we can start using it.

The sample application implements a REST helper class. Here the actual REST call is made through the MAF RestServiceAdapter class - see the invokeRestService method. The public method invokeFind(requestURI) is meant to initiate the search request. The content type is set to JSON. The search parameter is always passed as part of the requestURI, therefore we pass an empty POST data value:
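The helper class itself is shown in the original post as a screenshot; below is a condensed sketch of the idea, assuming a REST connection named "RestConn" defined in connections.xml (the connection and class names are illustrative, not taken from the sample application):

import oracle.adfmf.dc.ws.rest.RestServiceAdapter;
import oracle.adfmf.framework.api.Model;

public class RestHelper {

    // Performs the actual REST call and returns the raw JSON response as a String
    public String invokeFind(String requestURI) throws Exception {
        RestServiceAdapter adapter = Model.createRestServiceAdapter();
        adapter.clearRequestProperties();
        adapter.setConnectionName("RestConn"); // name of the REST connection defined earlier
        adapter.setRequestType(RestServiceAdapter.REQUEST_TYPE_GET);
        adapter.addRequestProperty("Content-Type", "application/json");
        adapter.addRequestProperty("Accept", "application/json; charset=UTF-8");
        adapter.setRequestURI(requestURI);     // the search parameter travels in the URI
        adapter.setRetryLimit(0);
        return adapter.send("");               // empty POST data, as described above
    }
}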


The MAF application contains a Java class - ServiceHelper - which is used to implement the Data Control. Each time the search parameter is set, the applyFilter method is invoked and makes a REST request. The response is received and, with the help of a MAF utility (highlighted), is transformed from a String into an array of objects:
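A rough sketch of how such an applyFilter method could be wired up, using the RestHelper sketch above and the EmployeesList / EmployeeBO classes described further below; JSONBeanSerializationHelper is the MAF utility the screenshot highlights:

import oracle.adfmf.framework.api.JSONBeanSerializationHelper;

public class ServiceHelper {

    private EmployeeBO[] employees = new EmployeeBO[0];
    private final RestHelper restHelper = new RestHelper();

    public void applyFilter(String searchValue) {
        try {
            // the search parameter is appended to the request URI (URI pattern is illustrative)
            String response = restHelper.invokeFind("/getEmployeesList/" + searchValue);
            // transform the JSON String into an array of objects with the MAF utility
            EmployeesList list =
                (EmployeesList) JSONBeanSerializationHelper.fromJSON(EmployeesList.class, response);
            employees = list.getEmployees();
            // in the real application a provider change event would also be fired here
            // so that the MAF UI refreshes the collection
        } catch (Exception e) {
            throw new RuntimeException("REST call failed", e);
        }
    }

    public EmployeeBO[] getEmployees() {
        return employees;
    }
}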


The ServiceHelper class includes a set of variables required to perform a successful REST request. The data collection object - employees, of type EmployeeBO[] - is defined here and later exposed through the Data Control:


As you can see from the applyFilter method above, the response String is transformed into an array of objects. I'm using the EmployeesList class for the conversion. This class contains the array of objects to parse from the REST response. It is very important that the array variable has exactly the same name as the result set name in your REST response. In my example, this name is employees:
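A minimal sketch of such a wrapper class - the field must carry exactly the name of the result set element in the JSON payload, here employees:

public class EmployeesList {

    // must match the result set name returned by the REST service
    private EmployeeBO[] employees;

    public EmployeeBO[] getEmployees() {
        return employees;
    }

    public void setEmployees(EmployeeBO[] employees) {
        this.employees = employees;
    }
}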


Finally, there is a class to represent the response object - EmployeeBO. This class is a POJO with attribute definitions:
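A trimmed-down illustration of what such a POJO could contain (the actual attribute list depends on the payload returned by the service):

public class EmployeeBO {

    private Integer employeeId;
    private String firstName;
    private String lastName;

    public Integer getEmployeeId() { return employeeId; }
    public void setEmployeeId(Integer employeeId) { this.employeeId = employeeId; }

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
}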


The ServiceHelper class is exposed as a Data Control - the defined methods will be accessible from the MAF bindings layer:


That is all there is to the REST client implementation in MAF - quite straightforward, I would say, once you are on the right path.

Let's say a few words about the debugger in MAF 2.1. I was positively impressed with the debugger improvements. There is no need for any extra configuration steps anymore: simply right-click anywhere in the Java class you are currently working on and choose the Debug option - the debugger will start automatically and the MAF application will be loaded in the iOS simulator.

It works with all defaults; you can double-check the settings in the Run/Debug section of the MAF project - the Mobile Run Configuration window:


Right-click and choose Debug; don't forget to set a breakpoint:


Type any value to search in the sample MAF application's Employees form. You should see the breakpoint being hit in JDeveloper (as the applyFilter method is called):


The debugger displays the value we are searching for:


After the conversion from the response String to the array of objects, we can inspect the collection and check whether the conversion happened correctly:


The response from REST is successfully displayed in the MAF UI.

Endeca Information Discovery Integration with Oracle BI Apps 11.1.1.8.1

Rittman Mead Consulting - Sat, 2015-01-31 10:46

One of the new features that made its way into Oracle BI Apps 11.1.1.8.1, and that completely passed me by at the time, was integration with Oracle Endeca Information Discovery 3.1 via a new ODI11g plug-in. As this new integration brings the Endeca “faceted search” and semi-structured data analysis capabilities to the BI Apps this is actually a pretty big deal, so let’s take a quick look now at what this integration brings and how it works under-the-covers.

Oracle BI Apps 11.1.1.8.1 integration with Oracle Endeca Information Discovery 3.1 requires OEID to be installed either on the same host as BI Apps or on its own server, and uses a custom ODI11g KM called “IKM SQL to Endeca Server 3.1.0” that needs to be separately downloaded from Oracle’s Edelivery site. The KM download also comes with an ODI Technology XML import file that adds Endeca Server as a technology type in the ODI Topology Navigator, and its here that the connection to the OEID 3.1 server is set up for subsequent data loads from BI Apps 11g.


As the IKM has "SQL" specified as the loading technology, any source that can be extracted from using standard JDBC drivers can be used to load the Endeca Server, and the KM itself has options for creating the schema to go with the data upload, creating the Endeca Server record spec, specifying the collection name and so on. BI Apps 11.1.1.8.1 ships with a number of interfaces (mappings) and ODI load plans to populate Endeca Server domains for specific BI Apps fact groups, with each interface taking all of the relevant repository fact measures and dimension attributes and loading them into one big Endeca Server denormalized schema.


When you take a look at the physical (flow) tab for the interface, you can see that the data is actually extracted via the BI Apps RPD and the BI Server, then staged in the BI Apps DW database before being loaded into the Endeca Server domain. Column metadata in the BI Server repository is then used to define the datatypes, names, sizes and other details of the Endeca Server domain schema, an improvement over just pointing Endeca Information Discovery Integrator at the BI Apps database schema.


The 11.1.1.8.1 ODI repository also provides a pre-built Endeca Server load plan that allows you to turn-on and turn-off loads for particular fact groups, and sets up the data domain name and other variables needed for the Endeca Server data load.


Once you’ve loaded the BI Apps data into one or more Endeca Server data domains, there are a number of pre-built sample Endeca Studio applications you can use to get a feel for the capabilities of the Endeca Information Discovery Studio interface. For example, the Student Information Analytics Admissions and Recruiting sample application lets you filter on any attribute from the dataset on the left-hand side, and then instantly returns the filtered dataset, displayed in the form of a number of graphs, word clouds and other visualizations.


Another interesting sample application is around employee expense analysis. Using features such as word clouds, it's relatively easy to see which expense categories are being used the most, and you can use Endeca Server’s search and text parsing capabilities to extract keywords and other useful bits of information out of free-text areas, for example supporting text for expense claims.


Typically, you’d use Endeca Information Discovery as a “dig around the data, let’s look for something interesting” tool, with all of your data attributes listed in one place and automatic filtering and focusing on the filtered dataset using the Endeca Server in-memory search and aggregation engine. The sample applications that ship with 11.1.1.8.1 are just meant as a starting point though (and unlike the regular BI Apps dashboards and reports, aren’t themselves supported as an official part of the product), but as you can see from the Manufacturing Endeca Studio application below, there’s plenty of scope to create returns, production and warranty-type applications that let you get into the detail of the attribute and textual information held in the BI Apps data warehouse.


More details on the setup process are in the online docs, and if you’re new to Endeca Information Discovery take a look at our past articles on the blog on this product.

Categories: BI & Warehousing

12c Dataguard: Restore Data File From Service

Oracle in Action - Fri, 2015-01-30 23:15


Starting with Oracle Database 12c, in a Data Guard environment, you can restore data files on a primary (standby) database by connecting to a standby (primary) database over the network.

RMAN restores database files, over the network, from the physical standby (primary) database by using the FROM SERVICE clause of the RESTORE command. The FROM SERVICE clause provides the service name of the physical standby (primary) database from which the files must be restored. During the restore operation, RMAN creates backup sets, on the physical standby (primary) database, of the files that need to be restored and then transfers these backup sets to the target database over the network.

Optionally, you can use the SECTION SIZE clause to restore files from the source database as multisection backup sets. You can also compress the transferred files by specifying USING COMPRESSED BACKUPSET.
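For illustration, a restore combining both options could look like the following sketch (the service name and section size are placeholders; verify the exact clause ordering against the RESTORE syntax reference linked at the end of this post):

RMAN> restore datafile 12 from service 'london'
        using compressed backupset
        section size 500m;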

Prerequisites for restoring files from a remote host:

  • The password file on the source database and the target database must be the same.
  • The tnsnames.ora file in the target database must contain an entry that corresponds to the remote database.

In this post, I will demonstrate the restore of a data file on the primary from the standby using the SERVICE clause of the RMAN RESTORE command.

Current scenario:

  • Primary CDB : Boston
  • Physical Standby CDB : London
  • PDB : Dev1

– Create a new tablespace called sample in PDB dev1 on primary (boston)

BOSTON>alter session set container=dev1;
        create tablespace sample
        datafile       '/u01/app/oracle/oradata/boston/dev1/sample01.dbf'
         size 5m;

– Verify that the parameter standby_file_management = auto on the standby database (london)

LONDON>sho parameter standby_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      AUTO

– Verify that the datafile for tablespace sample has been created on the physical standby (london)

LONDON>select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/london/system01.dbf
/u01/app/oracle/oradata/london/sysaux01.dbf
/u01/app/oracle/oradata/london/undotbs01.dbf
/u01/app/oracle/oradata/london/pdbseed/system01.dbf
/u01/app/oracle/oradata/london/users01.dbf
/u01/app/oracle/oradata/london/pdbseed/sysaux01.dbf
/u01/app/oracle/oradata/london/dev1/system01.dbf
/u01/app/oracle/oradata/london/dev1/sysaux01.dbf
/u01/app/oracle/oradata/london/dev1/SAMPLE_SCHEMA_users01.dbf
/u01/app/oracle/oradata/london/dev1/example01.dbf
/u01/app/oracle/oradata/london/dev1/sample01.dbf

– Create table hr.employees2 in new tablespace sample on primary

BOSTON>sho con_name

CON_NAME
------------------------------
DEV1

BOSTON>create table hr.employees2 tablespace sample
       as select * from hr.employees;
      select count(*) from hr.employees2;

COUNT(*)
----------
107

– To simulate the loss of the datafile, rename sample01.dbf to sample01.sav on the primary host

BOSTON>!mv /u01/app/oracle/oradata/boston/dev1/sample01.dbf /u01/app/oracle/oradata/boston/dev1/sample01.sav

– Restart the primary – an error is raised on open as the datafile is missing

BOSTON>conn / as sysdba

       shu abort;
       startup
Database mounted.
ORA-01157: cannot identify/lock data file 12 - see DBWR trace file
 ORA-01110: data file 12: '/u01/app/oracle/oradata/boston/dev1/sample01.dbf'

– Take the missing datafile offline on the primary and then open the primary database

BOSTON>alter session set container=dev1;
       alter tablespace sample datafile offline;
       alter session set container=cdb$root;
       alter database open;

BOSTON>sho pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 DEV1                           MOUNTED

BOSTON>alter pluggable database dev1 open;
-- Connect to primary (boston)  using RMAN
[oracle@host01 ~]$ . oraenv
ORACLE_SID = [boston] ?

[oracle@host01 ~]$ rman target /

-- Restore datafile from physical standby database (london) over network

RMAN> restore tablespace dev1:sample from service 'london';

Starting restore at 23-JAN-15
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service london
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00012 to /u01/app/oracle/oradata/boston/dev1/sample01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 23-JAN-15

– Recover the restored tablespace using the archivelogs available locally on the primary database (boston)

RMAN> recover tablespace dev1:sample;

Starting recover at 23-JAN-15
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 23-JAN-15

– Bring the tablespace online

BOSTON>alter session set container=dev1;
       alter tablespace sample datafile online;
       select count(*) from hr.employees2;

COUNT(*)
----------
107

I hope this post was useful.
Your comments and suggestions are always welcome.

References:

https://docs.oracle.com/database/121/RCMRF/rcmsynta2008.htm#RCMRF149

http://docs.oracle.com/database/121/BRADV/rcmadvre.htm#BRADV681




The post 12c Dataguard: Restore Data File From Service appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Getting started with iOS development using Eclipse and Java

Shay Shmeltzer - Fri, 2015-01-30 16:05

Want to use Eclipse to build an on-device mobile application that runs on iOS devices (iPhones and iPads)?

No problem - here is a step by step demo on how to do this:

Oh, and by the way, the same app will also run on Android without any changes to the code :-)

This is an extract from an online seminar that I recorded for one of Oracle's Virtual Technology Summits - and I figured people who didn't sign up for that event might still benefit from having access to the demo part of the video.

In the demo I show how to build an on-device app that accesses local data as well as remote data through web services, and how easy it is to integrate device features too.

If you want to try this on your own, get a copy of the Oracle Enterprise Pack for Eclipse, and follow the setup steps in the tutorial here.

And then just follow the video steps.

The location of the web service I accessed is at: http://wsf.cdyne.com/WeatherWS/Weather.asmx?WSDL

And the Java classes I use to simulate local data are  here.



Categories: Development

What's new in PostgreSQL 9.4?

Chris Foot - Fri, 2015-01-30 15:02

Hi, welcome to RDX! The PostgreSQL Global Development Group recently unveiled PostgreSQL 9.4. The open source community maintains that this iteration reinforces the group’s three core values: flexibility, scalability and performance.

In previous versions, JSON data could only be stored in plain text format. PostgreSQL 9.4 adds JSONB, which stores JSON in a binary format. As a result, PostgreSQL can now serve as a relational and a non-relational data store simultaneously.

PostgreSQL’s Generalized Inverted Indexes are also three times faster. Speaking of speed, 9.4 comes with expedited parallel writing to the engine’s transaction log. In addition, users can rapidly reload the database cache on restart by using the pg_prewarm command.

Another notable feature is 9.4’s support for Linux Huge Pages for servers with large memory. This capability reduces overhead, and can be implemented by setting huge_pages to “on.”
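As a small illustration of the two features just mentioned (the table name is only an example):

# postgresql.conf - requires huge pages to be configured at the OS level
huge_pages = on

-- reload a table into the buffer cache after a restart
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('orders');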

Thanks for watching! Visit us next time for more PostgreSQL news!

The post What's new in PostgreSQL 9.4? appeared first on Remote DBA Experts.

Growth in machine-generated data

DBMS2 - Fri, 2015-01-30 13:31

In one of my favorite posts, namely When I am a VC Overlord, I wrote:

I will not fund any entrepreneur who mentions “market projections” in other than ironic terms. Nobody who talks of market projections with a straight face should be trusted.

Even so, I got talked today into putting on the record a prediction that machine-generated data will grow at more than 40% for a while.

My reasons for this opinion are little more than:

  • Moore’s Law suggests that the same expenditure will buy 40% or so more machine-generated data each year.
  • Budgets spent on producing machine-generated data seem to be going up.

I was referring to the creation of such data, but the growth rates of new creation and of persistent storage are likely, at least at this back-of-the-envelope level, to be similar.

Anecdotal evidence actually suggests 50-60%+ growth rates, so >40% seemed like a responsible claim.

Related links

Categories: Other

3 new things about sdsql

Kris Rice - Fri, 2015-01-30 12:30
New Name! The first new thing is a new name: in this EA it's named sqlcl, for SQL command line. However, the binary to start it up is simply sql. Nothing is easier when you need to run some SQL than typing 'sql' and hitting enter.

#./sql klrice/klrice@//localhost/orcl

SQLcl: Release 4.1.0 Beta on Fri Jan 30 12:53:05 2015

Copyright (c) 1982, 2015, Oracle. All rights reserved.

Connected to: Oracle

Oracle Audit Vault - Oracle Client Identifier and Last Login

Several standard features of the Oracle database should be kept in mind when considering what alerts and correlations are possible when combining Oracle database and application log and audit data.

Client Identifier

Default Oracle database auditing stores the database username but not the application username.  In order to pull the application username into the audit logs, the CLIENT IDENTIFIER attribute needs to be set for the application session which is connecting to the database.  The CLIENT_IDENTIFIER is a predefined attribute of the built-in application context namespace, USERENV, and can be used to capture the application user name for use with global application context, or it can be used independently. 

CLIENT IDENTIFIER is set using the DBMS_SESSION.SET_IDENTIFIER procedure to store the application username.  The CLIENT IDENTIFIER attribute is the same as V$SESSION.CLIENT_IDENTIFIER.  Once it is set, you can query V$SESSION or run select sys_context('userenv','client_identifier') from dual.
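For example, an application session can set and then read back the identifier as follows (the username value is illustrative):

-- set the application username for the current session
exec dbms_session.set_identifier('JSMITH');

-- read it back for the current session
select sys_context('userenv','client_identifier') from dual;

-- or check it for all sessions
select sid, username, client_identifier from v$session;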

The examples below show how CLIENT_IDENTIFIER is used by several Oracle applications.  For each example, for Level 3 alerts, consider how the value of CLIENT_IDENTIFIER could be used along with network usernames and enterprise application usernames, as well as security and electronic door system activity logs.

E-Business Suite - As of Release 12, the Oracle E-Business Suite automatically sets and updates CLIENT_IDENTIFIER to the FND_USER.USERNAME of the user logged on.  Prior to Release 12, follow Support Note: How to add DBMS_SESSION.SET_IDENTIFIER(FND_GLOBAL.USER_NAME) to FND_GLOBAL.APPS_INITIALIZE procedure (Doc ID 1130254.1).

PeopleSoft - Starting with PeopleTools 8.50, the PSOPRID is additionally set in the Oracle database CLIENT_IDENTIFIER attribute.

SAP - With SAP version 7.10 and above, the SAP user name is stored in the CLIENT_IDENTIFIER.

Oracle Business Intelligence Enterprise Edition (OBIEE) - When querying an Oracle database using OBIEE, the connection pool username is passed to the database.  To also pass the middle-tier username, set the user identifier on the session.  To do this in OBIEE, open the RPD, edit the connection pool settings and create a new connection script to run at connect time.  Add the following line to the connect script:

CALL DBMS_SESSION.SET_IDENTIFIER('VALUEOF(NQ_SESSION.USER)')


Last Login

Tracking when database users last logged in is a common compliance requirement.  This is required in order to reconcile users and cull stale users.  New with Oracle 12c, Oracle provides this information for database users.  The data dictionary view SYS.DBA_USERS has a column, LAST_LOGIN.

Example:

select username, account_status, common, last_login
from sys.dba_users
order by last_login asc;

USERNAME             ACCOUNT_STATUS    COMMON  LAST_LOGIN
-------------------- ----------------- ------- ------------------------------------------------
C##INTEGRIGY         OPEN              YES     05-AUG-14 12.46.52.000000000 PM AMERICA/NEW_YORK
C##INTEGRIGY_TEST_2  OPEN              YES     02-SEP-14 12.29.04.000000000 PM AMERICA/NEW_YORK
XS$NULL              EXPIRED & LOCKED  YES     02-SEP-14 12.35.56.000000000 PM AMERICA/NEW_YORK
SYSTEM               OPEN              YES     04-SEP-14 05.03.53.000000000 PM AMERICA/NEW_YORK


If you have questions, please contact us at info@integrigy.com.

Reference Tags: Auditing, Oracle Audit Vault, Oracle Database
Categories: APPS Blogs, Security Blogs

OTNYathra 2015

Oracle in Action - Fri, 2015-01-30 05:23


Oracle ACE Directors and Java Champions will be organizing an evangelist event called ‘OTNYathra 2015’ in February 2015, during which a series of 7 conferences will be held across 7 major cities of India over a period of 2 weeks. This event will bring the Oracle community together, spread knowledge and increase networking opportunities in the region. Detailed information about the event can be viewed at http://www.otnyathra.com.

I will be presenting a session on Adaptive Query Optimization on 13th Feb 2015 at FMDI, Sector 17B, IFFCO Chowk, Gurgaon.

Thanks to Sir Murali Vallath  and his team for organizing it and giving me an opportunity to present.

Hope to see you there!!

 




The post OTNYathra 2015 appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

SQL Server: Online index rebuild & minimally logged operations

Yann Neuhaus - Fri, 2015-01-30 01:37

A couple of days ago, I encountered an interesting case with a customer concerning an index rebuild operation in a data warehouse environment. Let me introduce the situation: a “usual DBA day” with an almost usual error message found in your dedicated mailbox: “The transaction log for database 'xxx' is full”. After checking the database concerned, I noticed that its transaction log had grown and filled the entire volume. At the same time, I also identified the root cause of our problem: an index rebuild operation performed during the night on a big index (approximately 20 GB in size) on a fact table. On top of that, the size of the transaction log before the error was raised was 60 GB.

As you know, in a data warehouse environment the database recovery model is usually set either to SIMPLE or to BULK_LOGGED to minimize the logging of bulk activity, and of course the database concerned meets this requirement. According to the Microsoft documentation we could expect minimally logged records for index rebuild operations (ALTER INDEX REBUILD) regardless of the offline / online mode used to rebuild the index. So why did the transaction log grow so heavily in this case?

To get an answer we first have to take a look at the index rebuild tool used by my customer: the Ola maintenance script with the INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE values for the FragmentationHigh parameter. Don't worry, the Ola scripts work perfectly and the truth is out there :-) In my customer's context, rebuilding indexes online was permitted because the edition of the SQL Server instance concerned was Enterprise, and this is precisely where we have to investigate.

Let me demonstrate with a pretty simple example. In my lab environment I have a SQL Server 2014 instance with Enterprise edition. This instance hosts the well-known AdventureWorks2012 database with a dbo.bigTransactionHistory_rs1 table (this table is derived from the original script provided by Adam Machanic).

Here is the current size of the AdventureWorks2012 database:

 

select name as logical_name,
       physical_name,
       size / 128 as size_mb,
       type_desc,
       cast(FILEPROPERTY(name, 'SpaceUsed') * 100. / size as decimal(5, 2)) as [space_used_%]
from sys.database_files

 


 

and here is the size of the dbo.bigTransactionHistory_rs1 table:

 

exec sp_spaceused @objname = N'dbo.bigTransactionHistory_rs1';
go

 


 

Total used size: 1.1 GB

Because we are in the SIMPLE recovery model, I will momentarily disable the checkpoint process, using trace flag 3505, in order to have time to look at the log records inside the transaction log:

 

dbcc traceon(3505, -1);


Case 1: ALTER INDEX REBUILD OFFLINE


alter index pk_bigTransactionHistory on dbo.bigTransactionHistory_rs1
rebuild with (online = off, maxdop = 1);
go

 

Let's check the size of the transaction log of the AdventureWorks2012 database:

 


 

Case 2: ALTER INDEX REBUILD ONLINE


-- initiate a tlog truncation before rebuilding the same index online
checkpoint;

alter index pk_bigTransactionHistory on dbo.bigTransactionHistory_rs1
rebuild with (online = on, maxdop = 1);
go

 

Let's check again the size of the transaction log of the AdventureWorks2012 database:

 


 

It is clear that there is an obvious difference in transaction log size between the two operations:

- offline: 4096 * 0.35% = 14 MB
- online: 4096 * 5.63% = 230 MB

Let's stay curious and have a deeper look at the records written to the transaction log in each mode, by using the undocumented function sys.fn_dblog() as follows:

 

select COUNT(*) as nb_records,
       SUM([Log Record Length]) / 1024 as kbytes
from sys.fn_dblog(NULL, NULL);
go

 


 

As expected, we notice many more records for the online index rebuild than for the offline rebuild (roughly 21 times more).

Let's continue looking at the operations performed by SQL Server during the index rebuild operation in both cases:

 

select Operation,
       COUNT(*) as nb_records,
       SUM([Log Record Length]) / 1024 as kbytes
from sys.fn_dblog(NULL, NULL)
group by Operation
order by kbytes desc
go

 


 

The above picture is very interesting because we again see an obvious difference between the two modes. For example, if we consider the operations performed in the second case (on the right), some of them do not concern bulk activity, such as LOP_MIGRATE_LOCKS, LOP_DELETE_ROWS, LOP_DELETE_SPLITS and LOP_MODIFY_COLUMNS against an unknown allocation unit, which probably concerns the new structure. At this point I can't confirm it (I don't show all the details of these operations here; I'll let you check for yourself). Furthermore, in the first case (on the left), the majority of operations concern only LOP_MODIFY_OPERATION on the PFS page (context).

Does this mean that the online mode doesn't use the minimal logging mechanism for the whole rebuild process? I found an interesting statement in this Microsoft KB which confirms my suspicion:

Online Index Rebuild is a fully logged operation on SQL Server 2008 and later versions, whereas it is minimally-logged in SQL Server 2005. The change was made to ensure data integrity and more robust restore capabilities.

However, I guess we don't have the same behavior as with the FULL recovery model here. Indeed, there is still a difference between the SIMPLE / BULK_LOGGED and FULL recovery models in terms of the amount of log records generated. Here is a picture of the transaction log size after rebuilding the big index online in the FULL recovery model in my case:

 


 

Ouch! 230 MB (SIMPLE / BULK_LOGGED) vs 7 GB (FULL). It is clear that using the FULL recovery model with online index rebuild operations has a huge impact on the transaction log compared to the SIMPLE / BULK_LOGGED recovery model. So the solution in my case consisted of switching to offline mode, or at least reducing the online rebuild operations for the index concerned.

Happy maintenance!

WebLogic Maven Plugin - Simplest Example

Steve Button - Fri, 2015-01-30 00:34
I've seen a question or two in recent days on how to configure the weblogic maven plugin.

The official documentation is extensive ... but could be considered TL;DR for a quick bootstrapping on how to use it.

As a late Friday afternoon exercise I just pushed out an example of a very simple project that uses the weblogic-maven-plugin to deploy a web module.  It's almost the simplest possible configuration of the plugin for performing deployment-related operations on a project/module:

https://github.com/buttso/weblogic-maven-plugin

This relies either on the presence of a local/corporate repository that contains the set of WebLogic artefacts and plugins, OR on configuring and using the Oracle Maven Repository instead.

Example pom.xml
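The pom.xml itself isn't reproduced in this excerpt, so here is a rough sketch of what the relevant plugin section could look like - the coordinates, URL and credentials below are typical values and should be checked against the GitHub project and the official documentation:

<build>
  <plugins>
    <plugin>
      <groupId>com.oracle.weblogic</groupId>
      <artifactId>weblogic-maven-plugin</artifactId>
      <version>12.1.3-0-0</version>
      <configuration>
        <!-- connection details for the target WebLogic Server (illustrative values) -->
        <adminurl>t3://localhost:7001</adminurl>
        <user>weblogic</user>
        <password>welcome1</password>
        <!-- the web module to deploy and the server to deploy it to -->
        <source>${project.build.directory}/${project.build.finalName}.war</source>
        <targets>AdminServer</targets>
        <verbose>true</verbose>
      </configuration>
    </plugin>
  </plugins>
</build>

With this in place, deployment can be run with something like mvn com.oracle.weblogic:weblogic-maven-plugin:deploy (or the shorter mvn weblogic:deploy, if the plugin group is registered in settings.xml).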

Log Buffer #408, A Carnival of the Vanities for DBAs

Pythian Group - Thu, 2015-01-29 21:45

This Log Buffer Edition covers innovative blog posts from the worlds of Oracle, MySQL and SQL Server. Enjoy!


Oracle:

A user reported an ORA-01114 and an ORA-27069 in a 3rd party application running against an Oracle 11.1 database.

Oracle SOA Suite 12c: Multithreaded instance purging with the Java API.

Oracle GoldenGate for Oracle Database has introduced several features in Release 12.1.2.1.0.

Upgrade to 12c and Plugin – one fast way to move into the world of Oracle Multitenant.

The Oracle Database Resource Manager (the Resource Manager) is an infrastructure that provides granular control of database resources allocated to users, applications, and services. The Oracle Database Resource Manager (RM) enables you to manage multiple workloads that are contending for system and database resources.

SQL Server:

Database ownership is an old topic for SQL Server pros.

Using T-SQL to Perform Z-Score Column Normalization in SQL Server.

The APPLY operator allows you to join a record set with a function, and apply the function to every qualifying row of the table (or view).

Dynamic Management Views (DMVs) are a significant and valuable addition to the DBA’s troubleshooting armory, laying bare previously unavailable information regarding the under-the-covers activity of your database sessions and transactions.

Grant Fritchey reviews Midnight DBA’s Minion Reindex, a highly customizable set of scripts that take on the task of rebuilding and reorganizing your indexes.

MySQL:

It’s A New Year – Take Advantage of What MySQL Has To Offer.

MySQL High Availability and Disaster Recovery.

MariaDB Galera Cluster 10.0.16 now available.

Multi-threaded replication with MySQL 5.6: Use GTIDs!

MySQL and the GHOST: glibc gethostbyname buffer overflow.

Categories: DBA Blogs

From 0 to Cassandra – An Exhaustive Approach to Installing Cassandra

Pythian Group - Thu, 2015-01-29 21:44

All around the Internet you can find lots of guides on how to install Cassandra on almost every Linux distro around. But normally all of this information is based on the packaged versions and omits some parts that are essential for proper Cassandra functioning.

Note: If you are adding a machine to an existing cluster, please approach this guide with caution and replace the configurations recommended here with the ones you already have on your cluster, especially the Cassandra configuration.

Without further ado, let's start!

Essentials

Start your machine and install the following:

  • ntp (Packages are normally ntp, ntpdate and ntp-doc)
  • wget (Unless you have your packages copied over via other means)
  • vim (Or your favorite text editor)

Retrieve the following packages: the Oracle Java (JDK) tarball, the JNA jar and the Apache Cassandra binary tarball

Installation
Set up NTP

This can be more or less dependent on your system, but the following commands should do it (you can also check this guide):

~$ chkconfig ntpd on
~$ ntpdate pool.ntp.org
~$ service ntpd start
Set up Java (Let’s assume we are doing this in /opt)

Extract Java and install it:

~$ tar xzf [java_file].tar.gz
~$ update-alternatives --install /usr/bin/java java /opt/java/bin/java 1 

Check that is installed:

~$ java -version
java version '1.7.0_75'
Java(TM) SE Runtime Environment (build 1.7.0_75-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.75-b04, mixed mode)

Let’s put JNA into place

~$ mv jna-VERSION.jar /opt/java/lib
Set up Cassandra (Let’s assume we are doing this in /opt)

 

Extract Cassandra:

~$ tar xzf [cassandra_file].tar.gz

Create Cassandra Directories:

~$ mkdir /var/lib/cassandra
~$ mkdir /var/lib/cassandra/commitlog
~$ mkdir /var/lib/cassandra/data
~$ mkdir /var/lib/cassandra/saved_caches
~$ mkdir /var/log/cassandra
Configuration

Linux configuration
~$ vim /etc/security/limits.conf

Add the following:

root soft memlock unlimited
root hard memlock unlimited
root - nofile 100000
root - nproc 32768
root - as unlimited

For CentOS, RHEL or OEL, also set the following in /etc/security/limits.d/90-nproc.conf:

* - nproc 32768

Add the following to the sysctl file:

~$ vim /etc/sysctl.conf
vm.max_map_count = 131072

Finally (Reboot also works):

~$ sysctl -p

Firewall: the following ports must be open:

# Internode Ports
7000    Cassandra inter-node cluster communication.
7001    Cassandra SSL inter-node cluster communication.
7199    Cassandra JMX monitoring port.
# Client Ports
9042    Cassandra client port (Native).
9160    Cassandra client port (Thrift).

Note: Some/most guides tell you to disable swap. I think of swap as an acrobat's safety net: it should never have to be put to use, but it is there in case of need. As such, I never disable it and I set a low swappiness (around 10). You can read more about it here and here.

 Cassandra configuration

Note: Cassandra has a LOT of settings; these are the ones you should always set if you are going live. Many of them depend on hardware and/or workload. Maybe I’ll write a post about them in the near future. In the meantime, you can read about them here.

~$ vim /opt/cassandra/conf/cassandra.yaml

Edit the following fields:

cluster_name: <Whatever you would like to call it>
data_file_directories: /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog

saved_caches_directory: /var/lib/cassandra/saved_caches

# Assuming this is your first node, this should be reachable by other nodes
seeds: "<IP>"

# This is where you listen for intra node communication
listen_address: <IP>

# This is where you listen to incoming client connections
rpc_address: <IP>

endpoint_snitch: GossipingPropertyFileSnitch

Edit the snitch property file:

~$ vim  /opt/cassandra/conf/cassandra-rackdc.properties:

Add the DC and the RACK the server is in. Ex:

dc=DC1
rack=RAC1

Finally make sure your logs go to /var/log/cassandra:

~$ vim /opt/cassandra/conf/logback.xml
Testing

Start Cassandra

~$ service cassandra start

You should see no errors here; wait a bit, then:

~$ grep  JNA /var/log/cassandra/system.log
INFO  HH:MM:SS JNA mlockall successful

Then check the status of the ring:

~$ nodetool status
Datacenter: DC1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Owns   Host ID                               Token                                    Rack
UN  185.10.49.136  140.59 KB  100.0%  5c3c697f-8bfd-4fb2-a081-7af1358b313f  0                                        RAC

Creating a keyspace, a table, and inserting some data:

~$ cqlsh xxx.yy.zz.ww

cqlsh> CREATE KEYSPACE sandbox WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'DC1' : 1};
Should give no errors
cqlsh> USE sandbox;
cqlsh:sandbox> CREATE TABLE data (id uuid, data text, PRIMARY KEY (id));
cqlsh:sandbox> INSERT INTO data (id, data) values (c37d661d-7e61-49ea-96a5-68c34e83db3a, 'testing');
cqlsh:sandbox> SELECT * FROM data;

id                                   | data
--------------------------------------+---------
c37d661d-7e61-49ea-96a5-68c34e83db3a | testing

(1 rows)

And we are done, you can start using your Cassandra node!

Categories: DBA Blogs

Updated WebLogic Server 12.1.3 Developer Zip Distribution

Steve Button - Thu, 2015-01-29 18:50
We've just pushed out an update to the WebLogic Server 12.1.3 Developer Zip distribution containing the bug fixes from a recent PSU (patch set update).

This is great for developers since it maintains the high quality of the developer zip distribution and the convenience it provides - it avoids having to revert to the generic installer in order to apply patch set updates.  For development use only.

Download it from OTN:

http://www.oracle.com/technetwork/middleware/weblogic/downloads/wls-for-dev-1703574.html

Check out the readme for the list of bug fixes:

http://download.oracle.com/otn/nt/middleware/12c/wls/1213/README_WIN_UP1.txt

Preparing architecture for APEX 5.0 upgrade

Dimitri Gielis - Thu, 2015-01-29 18:10
I considered setting the title of this post to "Running APEX 4.2 and 5.0 in the same Oracle instance", but decided against it - yet that is basically what I will do. Before going into the details, I'll share my architecture.

In December 2013 I wrote that it was time to update your APEX environment, and I gave a quick overview of the architecture we're using. I thought it was time to review that post, so below you'll find my preferred APEX architecture as it stands today and tomorrow (once APEX 5 is production).


I'm using Apache as a reverse proxy in front of Tomcat. I'm not going into too much detail about which version to take: Apache v2.2 vs 2.4 and Tomcat v7 vs v8. There are many threads on the internet about that, and I guess it depends on your environment and your personal preference. I've used both versions, but currently I'm on Apache v2.2 because it comes as the default with RHEL / OEL6 and SELinux is configured out of the box to protect the web server. For Tomcat I'm using v8, as that's the basis for future versions of Tomcat (v9) and, when you want to use WebSockets for example, v8 implements a more recent spec.
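For reference, a minimal mod_proxy configuration for fronting Tomcat/ORDS with Apache could look like the snippet below (the port and context path are illustrative and depend on your Tomcat and ORDS setup):

# Apache httpd.conf / vhost: forward /ords to the Tomcat instance where ords.war is deployed
ProxyRequests Off
ProxyPreserveHost On
ProxyPass /ords http://localhost:8080/ords
ProxyPassReverse /ords http://localhost:8080/ords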

A few years ago we had the discussion about mod_plsql vs the APEX Listener (now Oracle REST Data Services - ORDS). I think today it's clear that ORDS is the way to go, as it gives you many more features and is proven technology.

For APEX I move to the latest version as fast as I can, as every new release brings great improvements and fixes.

And finally, for the Oracle Database I'm on 12cR1, because I like the pluggable database concept and the other features it brings. I guess most people will move to 12c very soon, as 11.2 Premier Support ends this month. You can read more about what is supported until when in this doc.

But in this post I want to show you how easy it is to prepare your environment for APEX 5 and to test the upgrade with an architecture like the one above. I basically want to run APEX 4.2 and 5.0 next to each other. I'll clone my PDB and apply the APEX 5 installation on the new PDB. Next I'll configure ORDS so it knows which database it needs to point to, depending on the URL I call.

Step 1. Clone the PDB 

sqlplus / as sysdba

create pluggable database APEX50_PDB from APEX42_PDB file_name_convert=('/u01/app/oracle/oradata/cdb/APEX42_PDB','/u01/app/oracle/oradata/cdb/APEX50_PDB');


Step 2. Open the PDB and install APEX 5.0

alter pluggable database apex50_pdb open;

So now we have a new database, which is a copy of our existing database, open and ready to be used. Next we need to install APEX 5.0. At the time of writing APEX 5.0 is not available yet, but once it is, connect to the new PDB and run @apexins... (follow the APEX 5.0 installation guide once it is available!)


Step 3. Configure ORDS

With SQL Developer you can configure ORDS and add the connection to the new PDB.
In SQL Developer 4.1, first set up a connection to your ORDS (Tools > Manage REST Data Services Connections > Add Connection). Next open the ORDS Administration window (View > REST Data Services > Administration). Right-click on REST Data Services and choose Connect to ORDS:


You'll see the current configuration.

In order to connect to the new database, we need to add a Database. Right-click in Database Settings and add a new database. Before writing it back (the icon with the green up arrow), click the Test Settings button first (the icon with a v) to make sure everything is fine.

The final step is to let ORDS know that if we include /apex50 in our URL, we want to connect to the new database. You can do that by adding an entry in URL Mapping:


That's it...

Note: sometimes I have issues adding the database and URL mapping in SQL Developer, but it's just as fast to do it from the command line. The docs have a great example of which commands to run: https://docs.oracle.com/cd/E37099_01/doc.20/e25066/config.htm#AELIG7191


Step 4. Test

When you navigate to your normal url e.g. http://localhost/ords/f?p=ABC you will see your APEX 4.2 instance, but if you navigate to http://localhost/ords/apex50/f?p=ABC you'll see the APEX 5.0 instance.

You can play a bit more with making nicer URLs or adding some redirects in Apache, but I hope you get the idea of how to start testing APEX 5.0 while still running APEX 4.2.

Categories: Development

Oracle Priority Support Infogram for 29-JAN-2015

Oracle Infogram - Thu, 2015-01-29 17:25

RDBMS
Oracle Database 12c Patching: DBMS_QOPATCH, OPATCH_XML_INV, and datapatch, from Pythian.
Oracle Technology
Top 10 OTN ArchBeat YouTube Videos - Dec 29, 2014 – Jan 27, 2015, from ArchBeat.
GoldenGate
Oracle GoldenGate 12c for Oracle Database - Integrated Capture sharing capture session, from Data Integration.
MySQL
MySQL Enterprise Monitor 3.0.19 has been released, from MySQL Enterprise Tools Blog.
ZFS
Data Encryption ... Be Safe or Be Sorry, from The Wonders of ZFS Storage.
Solaris
Solaris Studio 12.4: The new release comes with sharper tools, from the EMEA Midsize Blog.
SOA
SOA Magazine III published, from WebLogic Partner Community EMEA
Desktop application for purging SOA instances, from Arda Eralp's Weblog.
Oracle SOA Suite 12c: The Coherence Adapter, from the AMIS Technology Blog.
BI
Fact Table Partitioning with Oracle BI Applications, from the Oracle BI applications blog.
MAF
Updating MAF Connection Targets At Run Time, from the Oracle A-Team Chronicles.
Identity Management
From the Identity Management blog: Putting the dots together: How to provide compliance and individual accountability with Oracle Privileged Account Manager.
EBS
From Oracle E-Business Suite Technology:
Database 12.1.0.1 Certified with EBS 12.2 on Itanium and AIX
EBS Error Correction Policy Updated for EBS 12.2
From Oracle E-Business Suite Support Blog:
See the New and Improved Buyer Work Center (BWC) Assistant
Personalize Your Mobile Apps!
Introducing the New Proactive Dynamic Events Calendar
Stay Current: EBS 12.2 Upgrade, Features & Functionality & How to find EBS Patches!
Obtaining Bonus Depreciation Methods for Oracle Fixed Assets
…And Finally
This Chart Shows How Long It Takes to Get from an Airport to the City
Yes, Please: An Algorithm for Fact Checking the Internet, from MOTHERBOARD.
One potentially massive preliminary discovery impacting astrophysics, one no doubt about it huge move in technology, 3-D printing with metal:
Speed of light not so constant after all, from ScienceNews.

First Of Its Kind In Israel: 3D Metal Printer Gets To Work At Technion, from nocamel.

Advantages of using REST-based Integrations in PeopleSoft

Javier Delgado - Thu, 2015-01-29 16:46
Support for REST-based services was introduced in PeopleTools 8.52, although you could also build your own REST services using IScripts in previous releases (*). With PeopleTools 8.52, Integration Broker includes support for REST services, enabling PeopleSoft to act as both a consumer and a provider.

What is REST?
There is plenty of documentation in the Web about REST, its characteristics and benefits. I personally find the tutorial published by Dr. Elkstein (http://rest.elkstein.org) particularly illustrating.

In a nutshell, REST can be seen as a lightweight alternative to other traditional Web Services mechanisms such as RPC or SOAP. A REST integration has considerably less overhead than the two previously mentioned methods, and as a result is more efficient for many types of integrations.

Today, REST is the dominant standard for mobile applications (many of which use REST integrations to interact with the backend) and Rich Internet Applications using AJAX.

PeopleSoft Support
As I mentioned before, PeopleSoft support was included in PeopleTools 8.52. This included the ability to use the Provide Web Service wizard for REST services on top of the already supported SOAP services. Also, the Send Master and Handler Tester utilities were updated so they could be used with REST.

PeopleTools 8.53 delivered support for one of the most interesting features of REST GET integrations: caching. Using this feature, PeopleSoft can, as a service provider, indicate that the response should be cached (using the SetRESTCache method of the Message object). In this way, the next time a consumer asks for the service, the response will be retrieved from the cache instead of executing the service again. This is particularly useful when the returned information does not change very often (ie.: list of countries, languages, etc.), and can lead to performance gains over a similar SOAP integration.

PeopleTools 8.54 brought, as in many other areas, significant improvements to the PeopleSoft support. In the first place, the security of inbound services (in which PeopleSoft acts as the provider) was enhanced to require that services are consumed using SSL, basic HTTP authentication, basic HTTP authentication and SSL, or none of these.

On top of that, Query Access Services (QAS) were also made accessible through REST, so the creation of new provider services can be as easy as creating a new query and exposing it to REST.

Finally, the new Mobile Application Platform (an alternative way to FLUID to mobilise PeopleSoft contents) also uses REST as a cornerstone.

Conclusions
Although REST support is relatively new compared to SOAP web services, it has been supported by PeopleSoft for a while now. Its efficiency and performance (remember GET service caching) make it an ideal choice for multiple integration scenarios. I'm currently building a mobile platform that interacts with PeopleSoft using REST services. This is keeping me busy, and you may have noticed that I'm not posting so regularly on this blog, but hopefully some time from now I will be able to share with you some lessons learned from a large-scale REST implementation.


(*) Although it's possible to build REST services using IScripts, the Integration Broker solution introduced in PeopleTools 8.52 is considerably easier to implement and maintain. So, if you are on PeopleTools 8.52 or higher, Integration Broker is the preferred approach. If you are on an earlier release, a PeopleTools upgrade would actually be the preferred approach, but I understand there might be other constraints. :)

2 Cybersecurity considerations Obama made in his address [VIDEO]

Chris Foot - Thu, 2015-01-29 15:42

Hi, welcome to RDX! If you didn’t catch President Obama’s State of the Union address, cybersecurity was a serious topic of discussion. To improve the United States’ ability to combat cybercriminals, Obama made two recommendations.

First, Obama maintained hackers should be charged with penalties associated with the Racketeer Influenced and Corrupt Organizations Act, or RICO, according to Dark Reading. This measure would make it easier for prosecutors and investigators to gather evidence on suspects and identify whether a larger conspiracy is at play.

In addition, Obama also advocated for the expansion of the Computer Fraud and Abuse Act. Specifically, the president wants the law to apply to people who access machines for unauthorized reasons.

If you have any questions as to how these proposed mandates would apply to your business, contact a team of security experts to give you a breakdown. Thanks for watching!

The post 2 Cybersecurity considerations Obama made in his address [VIDEO] appeared first on Remote DBA Experts.

Is purchasing security technology enough? [VIDEO]

Chris Foot - Thu, 2015-01-29 15:40

Transcript

Hi, welcome to RDX! In the wake of recent data breaches, it’s likely that you’ve considered purchasing a list of cybersecurity assets. But shouldn’t a portion of your budget be used to acquire services?

While malware detection programs and network protection devices are components of a larger data security strategy, if in-house staff can’t dedicate the time needed to fully utilize these technologies, vulnerabilities will continue to exist.

A survey of small, midsize and large enterprises conducted by Osterman Research discovered that 30 percent of all new security investments were either underutilized or neglected entirely.

Before purchasing new technology to defend your critical systems, it’s best to consult a team of experts that can inform you of which assets will add value to your arsenal.

Thanks for watching! Be sure to check in next time for more security tips!

The post Is purchasing security technology enough? [VIDEO] appeared first on Remote DBA Experts.

Birst: an attractive "all-in-one" Business Intelligence solution

Yann Neuhaus - Thu, 2015-01-29 09:02

The review of the Gartner Magic Quadrant for Business Intelligence and Analytics Platforms has revealed a new challenger that could become one of the leaders in this market: Birst - an interesting "all-in-one" BI solution.

 


 

Birst is an American company based in Silicon Valley near San Francisco. Its SaaS (Software as a Service) BI solution mixes simplicity and power.

Just a little reminder before going forward: a complete Business Intelligence solution has two kinds of tools:

 



  • Back-end BI tools: these tools are used to load and transform the data before using it for reporting
  • Front-end BI tools: these tools are used by end user to do the reporting (creating reports, dashboards, drill down reports …)

So what are the assets of Birst’s BI Solution?


Regarding the ergonomics of the solution

All in one: All the leading BI solutions on the market use more than one application for their back-end and front-end tools. Birst is the only one to provide all the tools in one interface.

Online product: Birst is an online solution. No installation on a device is needed. You can save your data, reports, and dashboards in your own "space" (in the example above, the name of the space is "SupremEats DEV") located in the cloud.

 

Regarding the back-end tool (ETL, semantic layer, …)

Cloud connectivity. As a cloud-based solution, Birst can load data from the cloud using special connectors. So you can very easily mix your own business data with cloud data.

Task automation. The Birst admin application (used to design the data model) automates a lot of tasks, especially the generation of the star schema or business model:

  • Automatic generation of the joins between the different fact and dimension tables
  • Automatic generation of the time dimension (year, quarter, month, ...)
  • Automatic generation of the measures with context (e.g. sum of revenue, max revenue, min revenue, ...)

Reduced development time. Data modelling tasks are very time consuming in a Business Intelligence project, and automating them can greatly decrease development time. Of course, Birst also offers the ability to create or transform your own schema like a traditional ETL tool.

 

Regarding the front-end tool (report, dashboard, navigation)

Intuitive interface. For the end-user, the simplicity of the report / dashboard creation interface is another advantage of this solution. All the creation steps are guided by an assistant. If you compare it with other products like QlikView or Webi from SAP Business Objects, the interface is easy to learn:




Powerful reporting tool. However, it remains a high-performance product with the ability to create professional and complex reports or dashboards:

 


Multiple saving format options: Dashboards and reports can be saved in different formats (PDF, XLS export, Microsoft PowerPoint presentation, ...).

Scheduling options for the end user: The end user can publish his reports on a schedule.


Conclusion

Birst could become a future leader in BI solutions. Combining simplicity and power, Birst can appeal to many mid-sized enterprises, or business units within large enterprises, with small or medium-sized BI budgets.