
Feed aggregator

Database Flashback -- 1

Hemant K Chitale - Sun, 2015-02-01 09:25
A first post on Database Flashback.

Enabling Database Flashback in 11.2 non-RAC

SQL*Plus: Release Production on Sun Feb 1 23:13:17 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select version, status, database_status
2 from v$instance;

VERSION           STATUS       DATABASE_STATUS
----------------- ------------ -----------------
                  OPEN         ACTIVE

SYS>select flashback_on, database_role
2 from v$database;

FLASHBACK_ON       DATABASE_ROLE
------------------ ----------------
NO

SYS>show parameter db_recovery_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string /home/oracle/app/oracle/flash_
db_recovery_file_dest_size big integer 3852M
SYS>select * from v$flash_recovery_area_usage;

FILE_TYPE            PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
-------------------- ------------------ ------------------------- ---------------
REDO LOG                              0                         0               0
ARCHIVED LOG                        .95                       .94               5
BACKUP PIECE                      28.88                       .12               5

7 rows selected.


So, the above output shows that the database is OPEN but Flashback is not enabled.
Let me enable Flashback now.
SYS>alter database flashback on;

Database altered.

SYS>select flashback_on from v$database;

FLASHBACK_ON
------------------
YES

SYS>select * from v$flash_recovery_area_usage;

FILE_TYPE            PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
-------------------- ------------------ ------------------------- ---------------
REDO LOG                              0                         0               0
ARCHIVED LOG                        .95                       .94               5
BACKUP PIECE                      28.88                       .12               5

7 rows selected.


Immediately after enabling Flashback, Oracle shows usage of the FRA for Flashback Logs.

Note: Although 11.2 allows you to enable Flashback in an OPEN database, I would suggest doing so when the database is not active.
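Once Flashback is enabled, the current flashback window and the space the flashback logs are expected to need can be checked with a query like this (a generic check, not part of the session above):

```sql
-- How far back the database can currently be flashed back,
-- and the estimated size of the flashback logs in the FRA
select oldest_flashback_scn,
       oldest_flashback_time,
       retention_target,
       estimated_flashback_size
from   v$flashback_database_log;
```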

Categories: DBA Blogs

Moving My Beers From Couchbase to MongoDB

Tugdual Grall - Sun, 2015-02-01 09:01
See it on my new blog: here. A few days ago I posted a joke on Twitter: Moving my Java from Couchbase to MongoDB — Tugdual Grall (@tgrall) January 26, 2015. So I decided to move it from a simple picture to a real project. Let's look at the two phases of this so-called project: moving the data from Couchbase to MongoDB, and updating the application code to use … Tugdual Grall

The Fretboard - a multi-project build status monitor

Paul Gallagher - Sun, 2015-02-01 07:23
(blogarhythm ~ Diablo - Don't Fret)

The Fretboard is a pretty simple Arduino project that visualizes the build status of up to 24 projects with an addressable LED array. The latest incarnation of the project is housed in an old classical guitar … hence the name ;-)

All the code and design details for The Fretboard are open-source and available at Feel free to fork or borrow any ideas for your own build. If you build anything similar, I'd love to hear about it.

Test Drive Oracle APEX 5.0 Early Adopter 3!

Patrick Wolf - Sun, 2015-02-01 05:49
It’s here! Oracle Application Express 5.0 Early Adopter 3 is available at and we are really looking forward to get your feedback (via the Feedback link in the Builder)! A list of all new features of EA3 and a … Continue reading →
Categories: Development

Information technology for personal safety

DBMS2 - Sun, 2015-02-01 05:34

There are numerous ways that technology, now or in the future, can significantly improve personal safety. Three of the biggest areas of application are or will be:

  • Crime prevention.
  • Vehicle accident prevention.
  • Medical emergency prevention and response.

Implications will be dramatic for numerous industries and government activities, including but not limited to law enforcement, automotive manufacturing, infrastructure/construction, health care and insurance. Further, these technologies create a near-certainty that individuals’ movements and status will be electronically monitored in fine detail. Hence their development and eventual deployment constitutes a ticking clock toward a deadline for society deciding what to do about personal privacy.

Theoretically, humans aren’t the only potential kind of tyrant. Science fiction author Jack Williamson postulated a depressing nanny-technology in With Folded Hands, the idea for which was later borrowed by the humorous Star Trek episode I, Mudd.

Of these three areas, crime prevention is the furthest along; in particular, sidewalk cameras, license plate cameras and internet snooping are widely deployed around the world. So let’s consider the other two.

Vehicle accident prevention

Suppose every automobile on the road “knew” where all nearby vehicles were, and their speed and direction as well. Then it could also “know” the safest and fastest ways to move you along. You might actively drive, while it advised and warned you; it might be the default “driver”, with you around to override. In-between possibilities exist as well.

Frankly, I don’t know how expensive a suitably powerful and rugged transponder for such purposes would be. I also don’t know to what extent the most efficient solutions would involve substantial investment in complementary, stationary equipment. But I imagine the total cost would be relatively small compared to that of automobiles or auto insurance.

Universal deployment of such technology could be straightforward. If the government can issue you license plates, it can issue transponders as well, or compel you to get your own. It would have several strong motivations to do so, including:

  • Electronic toll collection — this is already happening in a significant fraction of automobiles around the world.
  • Snooping for the purpose of law enforcement.
  • Accident prevention.
  • (The biggest of all.) Easing the transition to autonomous vehicles.

Insurance companies have their own motivations to support safety-related technology. And the automotive industry has long been aggressive in incorporating microprocessor technology. Putting that all together, I am confident in the prediction: Smart cars are going to happen.

The story goes further yet. Despite improvements in safety technology, accidents will still happen. And the same location-tracking technology used for real-time accident avoidance should provide a massive boost to post-accident forensics, for use in:

  • Insurance adjudication (obviously and often),
  • Criminal justice (when the accident has criminal implications), and
  • Predictive modeling.

The predictive modeling, in turn, could influence (among other areas):

  • General automobile design (if a lot of accidents have a common cause, re-engineer to address it).
  • Maintenance of specific automobiles (if the car’s motion is abnormal, have it checked out).
  • Individual drivers’ insurance rates.

Transportation is going to change a lot.

Medical emergency prevention and response

I both expect and welcome the rise of technology that allows people who can’t reliably take care of themselves (babies, the elderly) to be continually monitored. My father and aunt might each have lived longer if such technology had been available sooner. But while the life-saving emergency response uses will be important enough, emergency avoidance may be an even bigger deal. Much as in my discussion of cars above, the technology could also be used to analyze when an old person is at increasing risk of falls or other incidents. In a world where families live apart but nursing homes are terrible places, this could all be a very important set of developments.

Another area where the monitoring/response/analysis/early-warning cycle could work is cardio-vascular incidents. I imagine we’ll soon have wearable devices that help detect the development or likelihood of various kinds of blockages, and hence forestall cardiovascular emergencies, such as those that often befall seemingly-healthy middle-aged people. Over time, I think those devices will become pretty effective. The large market opportunity should be obvious.

Once life-and-death benefits lead the way, I expect less emergency-focused kinds of fitness monitoring to find receptive consumers as well. (E.g. in the intestinal/nutrition domain.) And so I have another prediction (with an apology to Socrates): The unexamined life will seem too dangerous to continue living.

Trivia challenge: Where was the wordplay in that last paragraph?

Related links

  • My overview of innovation opportunities ended by saying there was great opportunity in devices. It also offered notes on predictive modeling and so on.
  • My survey of technologies around machine-generated data ended by focusing on predictive modeling for problem and anomaly detection and diagnosis, for machines and bodies alike.
  • The topics of this post are part of why I’m bullish on machine-generated data growth.
  • I think soft robots that also provide practical assistance could become a big part of health-related monitoring.
Categories: Other

The Year 2014 in Review

Duncan Davies - Sat, 2015-01-31 17:15

Once each year completes I try to do a run down of which posts have garnered the most attention during the last 12 months, both on this blog and on the PeopleSoft Weekly newsletter. Here are the most viewed stories from 2014.

PeopleSoft Weekly Newsletter

During 2014 the subscriber base grew from 602 to 914 (a net increase of just over 300), and this was despite being asked to remove over 50 subscribers when I left my previous employer. The open rate (i.e. the proportion of recipients who open each email) continues to hover around the 45-50% mark – against an industry average of 17%.

There were nearly 800 links in the newsletters over the year. Here are the 30 that received the most clicks:

  1. Tools to increase productivity for PeopleSoft Developers and Users (124 unique visits)
  2. PeopleSoft 9.3 – A Clarification (120)
  3. How to know Record/Field names without opening Application Designer (120)
  4. PeopleSoft 8.53/9.2 Prototyping Tool (118)
  5. This is why you never end up hiring good developers (115)
  6. Compiling PeopleCode (108)
  7. It’s official, there is no PeopleSoft 9.3 (104)
  8. I’m Jim Marion and this is how I work (104)
  9. An Introduction to PeopleTools 8.54 (part 1) (101)
  10. PeopleTools 8.54 Sandbox (99)
  11. An Introduction to PeopleTools 8.54 (part 2) (93)
  12. How to Effectively Write PeopleSoft Queries (91)
  13. 17 Famous Logos That Have A Hidden Message (91)
  14. The PeopleSoft Roadshow 2014 – What’s coming next for PeopleSoft? (90)
  15. Oracle bests Rimini Street in latest lawsuit ruling (90)
  16. Google releases set of beautiful, freely usable icons (89)
  17. 10 Tricks to Appear Smart During Meetings (87)
  18. Disabling the back button in PeopleSoft (87)
  19. PeopleSoft Process Flow basics (86)
  20. 23 Evergreen Developer Skills to Keep you Employed Forever (85)
  21. I’m Simon Wilson and this is how I work (83)
  22. PeopleCode Coding Discipline (83)
  23. PeopleSoft Application Designer themeing (82)
  24. Make a Field Required using PeopleCode (82)
  25. I’m Anton de Weger and this is how I work (81)
  26. Unlimited Session Timeout (79)
  27. Upgrading PeopleTools with Zero Downtime (78)
  28. Creating a Launch Pad Experience with the PeopleSoft Interaction Hub (77)
  29. California payroll: Another failure waiting to happen? (76)
  30. PeopleCode Record Snapshot (75)

One of the most pleasing things about the above list is that it features a wide variety of blogs from many different bloggers and companies, which shows that the PeopleSoft eco-system is as active as ever. It is also a good thing that not too many of the less serious ‘and one more thing’ links managed to get into the top 30! I’m also rather pleased that all 3 of the 2014 ‘How I Work’ posts made the top 30 … it goes to show that people are as important as software.

PeopleSoft Tipster Blog

In 2014 this blog had 144,203 views, coming from 80,833 visitors. The most visited posts are all above so I won’t repost another list, but to give you an idea of numbers the top one (about PeopleSoft 9.3) got more than 5,000 views in the year and the PeopleSoft Roadshow one more than 3,000.

Most of the visitors came from the US (79,200 views or 63%) followed by India (16%), Canada (6%) and the UK (5%).

I’m keen to do a few more ‘How I Work’ posts. If there’s someone that merits inclusion that hasn’t appeared so far please let me know. (My email is in the Get In Contact pane in the top right.)

APEX 5.0 EA3 released with Universal Theme

Dimitri Gielis - Sat, 2015-01-31 16:30
Today Oracle released the latest Early Adopter release of APEX 5.0.

The release is packed with new features, but one of the biggest improvements you can find in the UI.

For me the Universal Theme with Theme Roller, Theme Styles and Template Options is the absolute killer feature of APEX 5.0. I can't describe Universal Theme better than Oracle itself does:

Universal Theme is an all-new user interface for your applications that has been built from the ground up for Application Express 5. It is a simpler, yet more capable theme that does away with excessive templates and enable effortless customization with Theme Roller, Template Options, and Theme Styles. Universal Theme is the culmination of all of our UI enhancements and aims to empower developers to build modern, responsive, sophisticated applications without requiring expert knowledge of HTML, CSS, or JavaScript.

The issue with APEX 4.x was that the built-in themes had become outdated, so many of us looked at integrating Bootstrap, Foundation or other frameworks. But APEX 5.0 has sent us at warp speed to a new UI: suddenly, out of the box, your applications are modern, responsive, flat, ... everything you can expect from a UI today. Many new components are now part of the UI: the new Carousel and Cards regions are some of my favourites.

If you are interested in knowing everything about the new UI, Roel and I have teamed up for a one-day APEX 5.0 UI Training day.

Happy playing with APEX 5.0 EA3, and don't forget to give feedback to the development team; this is the last time you can do so before APEX 5.0 hits production.
Categories: Development

MAF 2.1 - Debugger Improvements and Mobile REST Client

Andrejus Baranovski - Sat, 2015-01-31 11:43
I was blogging previously about Oracle Mobile Suite and REST service transformation from an ADF BC SOAP service. Today I'm going to blog the next step in the same series - a MAF client consuming the REST service exposed from Oracle Mobile Suite ESB. I'm also going to highlight improvements in the debugging process for MAF provided with the latest 2.1 release. I had a couple of challenges implementing and mapping the programmatic MAF client for the REST service; all of these were solved, and I would like to share the solution with you.

Previous posts on the same topic:

- Oracle Mobile Suite Service Bus REST and ADF BC SOAP

- How To Add New Operation in Oracle Mobile Suite Service Bus REST Service

Here you can download the sample application implemented for today's post (it contains the ADF BC SOAP, Mobile Suite Service Bus and MAF client apps) -

High level view of the implemented use case - there are three parts. Service Bus is acting as a proxy to transform ADF BC SOAP service to light REST consumed by mobile REST client (implemented in MAF):

I would not recommend using the auto-generated Data Control for a REST service - I didn't have a positive experience with it. You should invoke the REST service from Java, using MAF helper classes (read - here). There must be a REST connection defined and tested; later it can be accessed and used directly from Java. The REST connection endpoint must include the REST service URL (in my case, it by default points to the getEmployeesListAll operation):

It is enough to define REST connection, now we can start using it.

The sample application implements a REST helper class. Here the actual REST call is made through the MAF RestServiceAdapter class - see the invokeRestService method. The public method invokeFind(requestURI) is supposed to be used to initiate a search request. The content type is set to JSON. The search parameter is always passed as part of the requestURI, therefore we pass an empty post data value:

The MAF application contains a Java class - ServiceHelper - which is used to implement the Data Control. Each time the search parameter is set, the applyFilter method is invoked and it makes a REST request. The response is received and, with the help of a MAF utility (highlighted), is transformed from a String to an array of objects:

ServiceHelper class includes a set of variables required to perform successful REST request. Data collection object - employees of type EmployeeBO[] is defined here and later exposed through Data Control:

As you can see from the applyFilter example above, the response String is transformed into an array of objects. I'm using the EmployeesList class for the conversion. This class contains the array of objects to parse from the REST response. It is very important that the array variable has exactly the same name as your REST response result set. In my example, this name is employees:

Finally there is class to represent response object - EmployeeBO. This class is a POJO with attribute definitions:

ServiceHelper class is exposed to Data Control - defined methods will be accessible from MAF bindings layer:

This is all about the REST client implementation in MAF - I would say quite straightforward, once you are on the right path.

Let's have a few words about the debugger in MAF 2.1. I would say I was positively impressed with the debugger improvements. There is no need to do any extra configuration steps anymore: simply right-click anywhere in the Java class you are currently working on and choose the Debug option - the debugger will start automatically and the MAF application will be loaded in the iOS simulator.

It works with all defaults; you could go and double-check the settings in the Run/Debug section of the MAF project - Mobile Run Configuration window:

Right-click and choose Debug; don't forget to set a breakpoint:

Type any value to search for in the sample MAF application's Employees form. You should see the breakpoint activated in JDEV (as it calls the applyFilter method):

Debugger displays value we are searching for:

After conversion from response String to array of objects happens, we could inspect the collection and check if conversion happened correctly:

Response from REST is successfully displayed in MAF UI:

Endeca Information Discovery Integration with Oracle BI Apps

Rittman Mead Consulting - Sat, 2015-01-31 10:46

One of the new features that made its way into Oracle BI Apps, and that completely passed me by at the time, was integration with Oracle Endeca Information Discovery 3.1 via a new ODI11g plug-in. As this new integration brings the Endeca “faceted search” and semi-structured data analysis capabilities to the BI Apps this is actually a pretty big deal, so let’s take a quick look now at what this integration brings and how it works under-the-covers.

Oracle BI Apps integration with Oracle Endeca Information Discovery 3.1 requires OEID to be installed either on the same host as BI Apps or on its own server, and uses a custom ODI11g KM called “IKM SQL to Endeca Server 3.1.0” that needs to be separately downloaded from Oracle’s Edelivery site. The KM download also comes with an ODI Technology XML import file that adds Endeca Server as a technology type in the ODI Topology Navigator, and its here that the connection to the OEID 3.1 server is set up for subsequent data loads from BI Apps 11g.


As the IKM has “SQL” as the loading technology specified this means any source that can be extracted from using standard JDBC drivers can be used to load the Endeca Server, and the KM itself has options for creating the schema to go with the data upload, creating the Endeca Server record spec, specifying the collection name and so on. BI Apps ships with a number of interfaces (mappings) and ODI load plans to populate Endeca Server domains for specific BI Apps fact groups, with each interface taking all of the relevant repository fact measures and dimension attributes and loading them into one big Endeca Server denormalized schema.


When you take a look at the physical (flow) tab for the interface, you can see that the data is actually extracted via the BI Apps RPD and the BI Server, then staged in the BI Apps DW database before being loaded into the Endeca Server domain. Column metadata in the BI Server repository is then used to define the datatypes, names, sizes and other details of the Endeca Server domain schema, an improvement over just pointing Endeca Information Discovery Integrator at the BI Apps database schema.


The ODI repository also provides a pre-built Endeca Server load plan that allows you to turn-on and turn-off loads for particular fact groups, and sets up the data domain name and other variables needed for the Endeca Server data load.


Once you’ve loaded the BI Apps data into one or more Endeca Server data domains, there are a number of pre-built sample Endeca Studio applications you can use to get a feel for the capabilities of the Endeca Information Discovery Studio interface. For example, the Student Information Analytics Admissions and Recruiting sample application lets you filter on any attribute from the dataset on the left-hand side, and then instantly returns the filtered dataset displayed in the form of a number of graphs, word clouds and other visualizations.


Another interesting sample application is around employee expense analysis. Using features such as word clouds, it’s relatively easy to see which expense categories are being used the most, and you can use Endeca Server’s search and text parsing capabilities to extract keywords and other useful bits of information out of free-text areas, for example supporting text for expense claims.


Typically, you’d use Endeca Information Discovery as a “dig around the data, let’s look for something interesting” tool, with all of your data attributes listed in one place and automatic filtering and focusing on the filtered dataset using the Endeca Server in-memory search and aggregation engine. The sample applications that ship with it are just meant as a starting point though (and unlike the regular BI Apps dashboards and reports, aren’t themselves supported as an official part of the product), but as you can see from the Manufacturing Endeca Studio application below, there’s plenty of scope to create returns, production and warranty-type applications that let you get into the detail of the attribute and textual information held in the BI Apps data warehouse.


More details on the setup process are in the online docs, and if you’re new to Endeca Information Discovery take a look at our past articles on the blog on this product.

Categories: BI & Warehousing

Getting started with iOS development using Eclipse and Java

Shay Shmeltzer - Fri, 2015-01-30 16:05

Want to use Eclipse to build an on-device mobile application that runs on iOS devices (iPhones and iPads)?

No problem - here is a step by step demo on how to do this:

Oh, and by the way the same app will function also on Android without any changes to the code :-)  

This is an extract from an online seminar that I recorded for one of Oracle's Virtual Technology Summits - and I figured people who didn't sign up for that event might still benefit from having access to the demo part of the video.

In the demo I show how to build an on-device app that accesses local data as well as remote data through web services, and how easy it is to integrate device features too.

If you want to try this on your own, get a copy of the Oracle Enterprise Pack for Eclipse, and follow the setup steps in the tutorial here.

And then just follow the video steps.

The location of the web service I accessed is at:

And the Java classes I use to simulate local data are  here.

Categories: Development

What's new in PostgreSQL 9.4?

Chris Foot - Fri, 2015-01-30 15:02

Hi, welcome to RDX! The PostgreSQL Global Development Group recently unveiled PostgreSQL 9.4. The open source community maintains that this iteration reinforces the group’s three core values: flexibility, scalability and performance.

In previous versions, the JSON data type stored documents as plain text. PostgreSQL 9.4 adds JSONB, which stores JSON in a decomposed binary format, so a PostgreSQL database can now serve as a relational and a non-relational data store simultaneously.
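A minimal sketch of what this looks like in practice (the table and values are invented for illustration):

```sql
-- jsonb stores documents in a decomposed binary format,
-- so the same database can hold relational and document data
create table events (id serial primary key, payload jsonb);

insert into events (payload)
values ('{"type": "click", "page": "/home"}');

-- the ->> operator extracts a field as text,
-- and @> tests JSON containment
select payload->>'page'
from events
where payload @> '{"type": "click"}';
```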

PostgreSQL’s Generalized Inverted Indexes (GIN) are also three times faster. Speaking of speed, 9.4 comes with expedited parallel writing to the engine’s transaction log. In addition, users can rapidly reload the database cache on restart by using the pg_prewarm extension.

Another notable feature is 9.4’s support for Linux Huge Pages for servers with large memory. This capability reduces overhead, and can be implemented by setting huge_pages to “on.”
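Both features can be tried roughly as follows; note that ALTER SYSTEM is itself new in 9.4, huge_pages only takes effect after a restart (with huge pages configured at the OS level), and the table name here is a placeholder:

```sql
alter system set huge_pages = 'on';   -- requires a server restart

-- pg_prewarm ships as a contrib extension
create extension pg_prewarm;
select pg_prewarm('my_table');        -- loads the table into the buffer cache
```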

Thanks for watching! Visit us next time for more PostgreSQL news!

The post What's new in PostgreSQL 9.4? appeared first on Remote DBA Experts.

Growth in machine-generated data

DBMS2 - Fri, 2015-01-30 13:31

In one of my favorite posts, namely When I am a VC Overlord, I wrote:

I will not fund any entrepreneur who mentions “market projections” in other than ironic terms. Nobody who talks of market projections with a straight face should be trusted.

Even so, I got talked today into putting on the record a prediction that machine-generated data will grow at more than 40% for a while.

My reasons for this opinion are little more than:

  • Moore’s Law suggests that the same expenditure will buy 40% or so more machine-generated data each year.
  • Budgets spent on producing machine-generated data seem to be going up.

I was referring to the creation of such data, but the growth rates of new creation and of persistent storage are likely, at least at this back-of-the-envelope level, to be similar.

Anecdotal evidence actually suggests 50-60%+ growth rates, so >40% seemed like a responsible claim.


Categories: Other

3 new things about sdsql

Kris Rice - Fri, 2015-01-30 12:30
New Name! The first is a new name: in this EA it's named SQLcl, for SQL command line. However, the binary to start it up is simply sql. Nothing is easier when you need to run some SQL than typing 'sql' and hitting enter.

#./sql klrice/klrice@//localhost/orcl

SQLcl: Release 4.1.0 Beta on Fri Jan 30 12:53:05 2015

Copyright (c) 1982, 2015, Oracle. All rights reserved.

Connected to:
Oracle

Oracle Audit Vault - Oracle Client Identifier and Last Login

Several standard features of the Oracle database should be kept in mind when considering what alerts and correlations are possible when combining Oracle database and application log and audit data.

Client Identifier

Default Oracle database auditing stores the database username but not the application username.  In order to pull the application username into the audit logs, the CLIENT_IDENTIFIER attribute needs to be set for the application session that is connecting to the database.  The CLIENT_IDENTIFIER is a predefined attribute of the built-in application context namespace USERENV, and can be used to capture the application user name for use with global application context, or it can be used independently.

CLIENT_IDENTIFIER is set using the DBMS_SESSION.SET_IDENTIFIER procedure to store the application username.  The CLIENT_IDENTIFIER attribute is the same as V$SESSION.CLIENT_IDENTIFIER.  Once set, you can query V$SESSION or select sys_context('userenv', 'client_identifier') from dual.
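A minimal sketch (the application username is a made-up example):

```sql
-- store the application user for the current database session
begin
  dbms_session.set_identifier('JSMITH');
end;
/

-- read it back; the same value is visible in V$SESSION.CLIENT_IDENTIFIER
select sys_context('userenv', 'client_identifier') from dual;
```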

The table below offers several examples of how CLIENT_IDENTIFIER is used.  For each example, for Level 3 alerts, consider how the value of CLIENT_IDENTIFIER could be used along with network usernames and enterprise application usernames, as well as security and electronic door system activity logs.

E-Business Suite

As of Release 12, the Oracle E-Business Suite automatically sets and updates client_identifier to the FND_USER.USERNAME of the user logged on.  Prior to Release 12, follow Support Note How to add DBMS_SESSION.SET_IDENTIFIER(FND_GLOBAL.USER_NAME) to FND_GLOBAL.APPS_INITIALIZE procedure (Doc ID 1130254.1)

PeopleSoft

Starting with PeopleTools 8.50, the PSOPRID is now additionally set in the Oracle database CLIENT_IDENTIFIER attribute.

SAP

With SAP version 7.10 and above, the SAP user name is stored in the CLIENT_IDENTIFIER.

Oracle Business Intelligence Enterprise Edition (OBIEE)

When querying an Oracle database using OBIEE, the connection pool username is passed to the database.  To also pass the middle-tier username, set the user identifier on the session.  To do this in OBIEE, open the RPD, edit the connection pool settings and create a new connection script to run at connect time.  Add the following line to the connect script:
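A typical connect script line for this purpose, assuming the standard OBIEE session variable NQ_SESSION.USER, would be:

```sql
-- VALUEOF(NQ_SESSION.USER) is expanded by the BI Server at connect time;
-- shown as an illustrative example of such a connect script line
call dbms_session.set_identifier('VALUEOF(NQ_SESSION.USER)')
```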




Last Login

Tracking when database users last logged in is a common compliance requirement, needed in order to reconcile users and cull stale accounts.  New with Oracle 12c, Oracle provides this information for database users: the system view SYS.DBA_USERS has a LAST_LOGIN column.


select username, account_status, common, last_login
from sys.dba_users
order by last_login asc;
If you have questions, please contact us at

Reference Tags: Auditing, Oracle Audit Vault, Oracle Database
Categories: APPS Blogs, Security Blogs

OTNYathra 2015

Oracle in Action - Fri, 2015-01-30 05:23


The Oracle ACE Directors and Java Champions will be organizing an evangelist event called ‘OTNYathra 2015’ during February 2015, during which a series of 7 conferences will be held across 7 major cities of India over a period of 2 weeks.  This event will bring the Oracle community together, spread knowledge and increase networking opportunities in the region. The detailed information about the event can be viewed at

I will be presenting a session on Adaptive Query Optimization on 13th Feb 2015 at FMDI, Sector 17B, IFFCO Chowk , Gurgaon.

Thanks to Sir Murali Vallath  and his team for organizing it and giving me an opportunity to present.

Hope to see you there!!




The post OTNYathra 2015 appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

SQL Server: Online index rebuild & minimally logged operations

Yann Neuhaus - Fri, 2015-01-30 01:37

A couple of days ago, I encountered an interesting case with a customer concerning a rebuild index operation in a datawarehouse environment. Let me introduce the situation: a “usual DBA day” with an almost usual error message found in my dedicated mailbox: “The transaction log for database 'xxx' is full”. After checking the database concerned, I noticed that its transaction log had grown and filled the entire volume. At the same time, I also identified the root cause of the problem: an index rebuild operation performed the previous night on a big index (approximately 20 GB in size) on a fact table. On top of that, the size of the transaction log before the error was raised was 60 GB.

As you know, in a datawarehouse environment the database recovery model is usually configured either to SIMPLE or BULK_LOGGED to minimize the logging of bulk operations, and of course the database concerned meets this requirement. According to the Microsoft documentation we could expect index rebuild operations (ALTER INDEX REBUILD) to be minimally logged, regardless of the offline / online mode used to rebuild the index. So why did the transaction log grow so heavily in this case?

To get an answer, we first have to take a look at the rebuild index tool used by my customer: Ola Hallengren's maintenance scripts, with the INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE values for the FragmentationHigh parameter. Don't worry, Ola's scripts work perfectly and the truth is out there :-) In the context of my customer, rebuilding indexes online was permitted because the edition of the concerned SQL Server instance was Enterprise, and this is precisely where we have to investigate.
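The kind of call involved looks roughly like this (the parameter names follow Ola Hallengren's maintenance solution; the threshold value is illustrative):

```sql
execute dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel2 = 30;   -- indexes above 30% fragmentation get rebuilt
```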

Let me demonstrate with a pretty simple example. On my lab environment I have a SQL Server 2014 instance with Enterprise edition. This instance hosts the well-known AdventureWorks2012 database with a dbo.bigTransactionHistory_rs1 table (this table is derived from the original script provided by Adam Machanic).

Here is the current size of the AdventureWorks2012 database:


select name as logical_name,
       physical_name,
       size / 128 as size_mb,
       type_desc,
       cast(FILEPROPERTY(name, 'SpaceUsed') * 100. / size as decimal(5, 2)) as [space_used_%]
from sys.database_files;




and here is the size of the dbo.bigTransactionHistory_rs1 table:


exec sp_spaceused @objname = N'dbo.bigTransactionHistory_rs1';
go




Total used size: 1.1 GB

Because we are in SIMPLE recovery model, I will momentarily disable the automatic checkpoint process, using trace flag 3505, in order to have time to look at the log records inside the transaction log.


dbcc traceon(3505, -1);


alter index pk_bigTransactionHistory on dbo.bigTransactionHistory_rs1
rebuild with (online = off, maxdop = 1);
go


Let's check the size of the transaction log of the AdventureWorks2012 database:





-- initiate a tlog truncation before rebuilding the same index, online this time
checkpoint;

alter index pk_bigTransactionHistory on dbo.bigTransactionHistory_rs1
rebuild with (online = on, maxdop = 1);
go


Let's check again the size of the transaction log of the AdventureWorks2012 database:




It is clear that there is an obvious difference in transaction log size between the two operations:

- offline: 4096 MB * 0.35% ≈ 14 MB
- online: 4096 MB * 5.63% ≈ 230 MB
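As a quick sanity check, the space_used_% figures can be converted to megabytes from the 4096 MB log size (plain arithmetic, nothing SQL Server specific):

```shell
# Convert the reported space_used_% values into MB for a 4096 MB transaction log.
log_size_mb=4096
for pct in 0.35 5.63; do
  awk -v size="$log_size_mb" -v p="$pct" \
    'BEGIN { printf "%.2f%% of %d MB = %.1f MB\n", p, size, size * p / 100 }'
done
# 0.35% of 4096 MB = 14.3 MB
# 5.63% of 4096 MB = 230.6 MB
```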

Stay curious and let's take a deeper look at the records written to the transaction log in each mode, using the undocumented function sys.fn_dblog() as follows:


select COUNT(*) as nb_records,
       SUM([Log Record Length]) / 1024 as kbytes
from sys.fn_dblog(NULL, NULL);
go


[Images: transaction log record totals, offline mode vs. online mode]


As expected, we notice many more records for the online index rebuild than for the offline one (roughly 21x).

Let's continue by looking at the operations performed by SQL Server during the index rebuild in both cases:


select Operation,
       COUNT(*) as nb_records,
       SUM([Log Record Length]) / 1024 as kbytes
from sys.fn_dblog(NULL, NULL)
group by Operation
order by kbytes desc;
go


[Images: per-operation log record breakdown, offline mode vs. online mode]


The above picture is very interesting because we again see an obvious difference between the two modes. For example, if we consider the operations performed in the online case (on the right), some of them do not concern bulk activity: LOP_MIGRATE_LOCKS, LOP_DELETE_ROWS, LOP_DELETE_SPLITS and LOP_MODIFY_COLUMNS against an unknown allocation unit, which probably belongs to the new index structure. At this point I can't confirm it (I don't show all the details of these operations here; I let you see for yourself). Furthermore, in the offline case (on the left), the majority of operations concern only LOP_MODIFY_OPERATION on the PFS page (context).

Does this mean that the online mode doesn't use the minimal logging mechanism for the whole rebuild process? I found an interesting answer in this Microsoft KB article, which confirms my suspicion:

Online Index Rebuild is a fully logged operation on SQL Server 2008 and later versions, whereas it is minimally-logged in SQL Server 2005. The change was made to ensure data integrity and more robust restore capabilities.

However, I guess we don't get the same behavior as the FULL recovery model here. Indeed, there is still a difference between the SIMPLE / BULK_LOGGED and FULL recovery models in terms of the amount of log records generated. Here is a picture of the transaction log size after rebuilding the big index online in FULL recovery model in my case:




Ouch! 230 MB (SIMPLE / BULK_LOGGED) vs. 7 GB (FULL). It is clear that using the FULL recovery model with online index rebuild operations will have a huge impact on the transaction log compared to the SIMPLE / BULK_LOGGED recovery models. So the solution in my case was to switch to offline mode, or at least to limit online rebuilds for the index concerned.

Happy maintenance!

WebLogic Maven Plugin - Simplest Example

Steve Button - Fri, 2015-01-30 00:34
I've seen a question or two in recent days on how to configure the weblogic maven plugin.

The official documentation is extensive ... but could be considered TL;DR for a quick bootstrapping on how to use it.

As a late Friday afternoon exercise I just pushed out an example of a very simple project that uses the weblogic-maven-plugin to deploy a web module. It's almost the simplest possible configuration of the plugin for performing deployment-related operations on a project/module:

This relies on the presence of either a local/corporate repository that contains the set of weblogic artefacts and plugins - OR - you configure and use the Oracle Maven Repository instead.  

Example pom.xml
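Since the embedded gist doesn't survive the feed, here is a hedged sketch of what such a pom.xml might contain. The plugin coordinates and parameter names below are my assumptions based on the 12.1.3 release; check them against the official plugin documentation before relying on them:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>simple-webapp</artifactId>
  <version>1.0</version>
  <packaging>war</packaging>

  <build>
    <plugins>
      <!-- Assumed coordinates for the 12.1.3 WebLogic Maven plugin -->
      <plugin>
        <groupId>com.oracle.weblogic</groupId>
        <artifactId>weblogic-maven-plugin</artifactId>
        <version>12.1.3-0-0</version>
        <configuration>
          <!-- Illustrative values: point these at your own server and credentials -->
          <adminurl>t3://localhost:7001</adminurl>
          <user>weblogic</user>
          <password>welcome1</password>
          <source>${project.build.directory}/${project.build.finalName}.war</source>
          <targets>AdminServer</targets>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
```

With the plugin group configured in settings.xml, something like mvn weblogic:deploy would then push the packaged war to the running server.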

Log Buffer #408, A Carnival of the Vanities for DBAs

Pythian Group - Thu, 2015-01-29 21:45

This Log Buffer Edition covers innovative blog posts from various fields of Oracle, MySQL and SQL Server. Enjoy!


Oracle:
A user reported an ORA-01114 and an ORA-27069 in a 3rd party application running against an Oracle 11.1 database.

Oracle SOA Suite 12c: Multithreaded instance purging with the Java API.

Oracle GoldenGate for Oracle Database has introduced several features in Release

Upgrade to 12c and Plugin – one fast way to move into the world of Oracle Multitenant.

The Oracle Database Resource Manager is an infrastructure that provides granular control of database resources allocated to users, applications, and services, enabling you to manage multiple workloads that are contending for system and database resources.

SQL Server:

Database ownership is an old topic for SQL Server pros.

Using T-SQL to Perform Z-Score Column Normalization in SQL Server.

The APPLY operator allows you to join a record set with a function, and apply the function to every qualifying row of the table (or view).

Dynamic Management Views (DMVs) are a significant and valuable addition to the DBA’s troubleshooting armory, laying bare previously unavailable information regarding the under-the-covers activity of your database sessions and transactions.

Grant Fritchey reviews Midnight DBA’s Minion Reindex, a highly customizable set of scripts that take on the task of rebuilding and reorganizing your indexes.


MySQL:
It’s A New Year – Take Advantage of What MySQL Has To Offer.

MySQL High Availability and Disaster Recovery.

MariaDB Galera Cluster 10.0.16 now available.

Multi-threaded replication with MySQL 5.6: Use GTIDs!

MySQL and the GHOST: glibc gethostbyname buffer overflow.

Categories: DBA Blogs

From 0 to Cassandra – An Exhaustive Approach to Installing Cassandra

Pythian Group - Thu, 2015-01-29 21:44

All around the Internet you can find lots of guides on how to install Cassandra on almost every Linux distro around. But normally all of this information is based on the packaged versions, and omits some parts that are essential for proper Cassandra functioning.

Note: If you are adding a machine to an existing cluster, please approach this guide with caution, and replace the configurations recommended here with the ones you already have on your cluster, especially the Cassandra configuration.

Without further ado, let's start!


Start your machine and install the following:

  • ntp (Packages are normally ntp, ntpdate and ntp-doc)
  • wget (Unless you have your packages copied over via other means)
  • vim (Or your favorite text editor)

Retrieve the following packages: the Java (JDK) tarball, the JNA jar and the Cassandra binary tarball (referenced below as [java_file].tar.gz, jna-VERSION.jar and [cassandra_file].tar.gz).

Set up NTP

This can be more or less dependent on your system, but the following commands should do it (you can also check this guide):

~$ chkconfig ntpd on
~$ ntpdate pool.ntp.org    # or your preferred NTP server
~$ service ntpd start
Set up Java (Let’s assume we are doing this in /opt)

Extract Java and install it:

~$ tar xzf [java_file].tar.gz
~$ update-alternatives --install /usr/bin/java java /opt/java/bin/java 1 

Check that is installed:

~$ java -version
java version "1.7.0_75"
Java(TM) SE Runtime Environment (build 1.7.0_75-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.75-b04, mixed mode)
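In addition to update-alternatives, Cassandra's startup scripts will pick up Java via JAVA_HOME if it is set. Assuming (as above) that the JDK ended up in /opt/java, a sketch for your shell profile:

```shell
# Assumption: the JDK tarball was extracted (or symlinked) to /opt/java.
export JAVA_HOME=/opt/java
export PATH="$JAVA_HOME/bin:$PATH"
echo "JAVA_HOME is $JAVA_HOME"
```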

Let’s put JNA into place

~$ mv jna-VERSION.jar /opt/java/lib
Set up Cassandra (Let’s assume we are doing this in /opt)


Extract Cassandra:

~$ tar xzf [cassandra_file].tar.gz

Create Cassandra Directories:

~$ mkdir /var/lib/cassandra
~$ mkdir /var/lib/cassandra/commitlog
~$ mkdir /var/lib/cassandra/data
~$ mkdir /var/lib/cassandra/saved_caches
~$ mkdir /var/log/cassandra
Configuration

Linux configuration
~$ vim /etc/security/limits.conf

Add the following:

root soft memlock unlimited
root hard memlock unlimited
root - nofile 100000
root - nproc 32768
root - as unlimited

On CentOS, RHEL and OEL, also set the following in /etc/security/limits.d/90-nproc.conf:

* - nproc 32768

Add the following to the sysctl file:

~$ vim /etc/sysctl.conf
vm.max_map_count = 131072

Finally (Reboot also works):

~$ sysctl -p

Firewall: the following ports must be open:

# Internode Ports
7000    Cassandra inter-node cluster communication.
7001    Cassandra SSL inter-node cluster communication.
7199    Cassandra JMX monitoring port.
# Client Ports
9042    Cassandra client port (Native).
9160    Cassandra client port (Thrift).
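As a sketch of how to open these on a firewalld-based distro, the loop below prints the rules first so you can review them before piping to sh (this assumes firewalld; adapt for iptables if that is what you run):

```shell
# Print (then optionally pipe to sh) the firewalld rules for Cassandra's ports.
for port in 7000 7001 7199 9042 9160; do
  echo "firewall-cmd --permanent --add-port=${port}/tcp"
done
echo "firewall-cmd --reload"
```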

Note: Some/most guides tell you to disable swap. I think of swap as an acrobat's safety net: it should never have to be put to use, but it is there in case of need. As such, I never disable it, and instead set a low swappiness (around 10). You can read more about it here and here.
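For reference, here is how that low swappiness can be made persistent; the value of 10 is my own preference, not a packaged default (config fragment for /etc/sysctl.conf, applied with sysctl -p):

```
# /etc/sysctl.conf -- keep swap available, but make the kernel reluctant to use it
vm.swappiness = 10
```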

 Cassandra configuration

Note: Cassandra has a LOT of settings, these are the ones you should always set if you are going live. Lots of them depend on hardware and/or workload. Maybe I’ll write a post about them in the near future. In the meantime, you can read about them here.

~$ vim /opt/cassandra/conf/cassandra.yaml

Edit the following fields:

cluster_name: <Whatever you would like to call it>
data_file_directories: /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog

saved_caches_directory: /var/lib/cassandra/saved_caches

# Assuming this is your first node, this should be reachable by other nodes
seeds: "<IP>"

# This is where you listen for intra node communication
listen_address: <IP>

# This is where you listen to incoming client connections
rpc_address: <IP>

endpoint_snitch: GossipingPropertyFileSnitch

Edit the snitch property file:

~$ vim  /opt/cassandra/conf/

Add the DC and the RACK the server is in. Ex:


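For GossipingPropertyFileSnitch, the DC and rack assignment lives in conf/cassandra-rackdc.properties; the values below are illustrative, chosen to match the DC1 datacenter shown by nodetool later in this guide:

```
# conf/cassandra-rackdc.properties
dc=DC1
rack=RAC1
```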
Finally make sure your logs go to /var/log/cassandra:

~$ vim /opt/cassandra/conf/logback.xml

Start Cassandra

~$ service cassandra start

You should see no errors here; wait a bit, then:

~$ grep  JNA /var/log/cassandra/system.log
INFO  HH:MM:SS JNA mlockall successful

Then check the status of the ring:

~$ nodetool status
Datacenter: DC1
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Owns   Host ID                               Token                                    Rack
UN  140.59 KB  100.0%  5c3c697f-8bfd-4fb2-a081-7af1358b313f  0                                        RAC

Creating a keyspace a table and inserting some data:

~$ cqlsh xxx.yy.zz.ww

cqlsh> CREATE KEYSPACE sandbox WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'DC1' : 1};
Should give no errors
cqlsh> USE sandbox;
cqlsh:sandbox> CREATE TABLE data (id uuid, data text, PRIMARY KEY (id));
cqlsh:sandbox> INSERT INTO data (id, data) VALUES (c37d661d-7e61-49ea-96a5-68c34e83db3a, 'testing');
cqlsh:sandbox> SELECT * FROM data;

 id                                   | data
--------------------------------------+---------
 c37d661d-7e61-49ea-96a5-68c34e83db3a | testing

(1 rows)

And we are done, you can start using your Cassandra node!

Categories: DBA Blogs

Updated WebLogic Server 12.1.3 Developer Zip Distribution

Steve Button - Thu, 2015-01-29 18:50
We've just pushed out an update to the WebLogic Server 12.1.3 Developer Zip distribution containing the bug fixes from a recent PSU (patch set update).

This is great for developers, since it maintains the high quality of the developer zip distribution and the convenience it provides: it avoids reverting to the generic installer just to be able to apply patch set updates. For development use only.

Download it from OTN:

Check out the readme for the list of bug fixes: