
Feed aggregator

EM12c Upgrade Tasks

Arun Bavera - Mon, 2015-09-14 13:51
1. Upgrade Primary OMR and OMS using the installer - 2 hours
2. Upgrade Primary Agent - 6 minutes
3. Clean up Agent
4. Clean up OMS
5. Upgrade Secondary OMS - 30 minutes
6. Clean up Agent
7. Clean up OMS
8. No monthly Agent/OMS patches available yet as of Jul-14-2015; expected Jul-30-2015
9. Install latest JDK 1.6 (Note 1944044.1)
10. Install latest WebLogic PSU (Note 1470197.1)
11. Verify Load Balancer
12. OMS Sizing

Categories: Development

Forcing Garbage Collection in JDK manually using JVisualVM

Arun Bavera - Mon, 2015-09-14 13:43
You might have seen, many times, the heap crossing its limit while the GC algorithm fails to work properly and keeps old objects around for a long time.
Even though forcing a major GC manually is not advised, if you come across such a situation you can use the following method to clear the heap.
Note: if the heap is huge (more than 6 GB), a major GC may cause the application to pause for a couple of seconds. Also, make sure you have enough system memory (RAM) to invoke the JVisualVM tool.
This is a typical method in many corporations where X-Windows is not installed on the *NIX machines and the app account is locked down against direct login.
1) Log in as yourself on the Linux/Unix machine and make sure your laptop/desktop is running an X-emulator such as Xming.
2) Note down the authorized X-keys:    xauth list
3) Log in as the app owner:     sudo su - oracle
4) Add the X-keys to the oracle (app owner) session:
xauth add <full string from xauth list in the previous session>

5) Run ps -ef | grep java, note down the JDK directory, and go directly to the JDK bin directory (/opt/app/oracle/jdk1.7.0_55/bin in this case; we are using JDK 7).
6) Invoke  ./jvisualvm &
7) Choose the WebLogic PID, make sure in the Overview tab that the server name is the one you are interested in, and perform a manual GC.
Note: from JDK 7 onwards, if your heap size is more than 6 GB, the G1GC algorithm works best.
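For convenience, the whole flow can be scripted; here is a minimal shell sketch under the same assumptions as above (Xming running locally, app owner oracle, JDK 7 under /opt/app/oracle/jdk1.7.0_55):

# as your own user, before switching accounts
xauth list                                  # note the key for your DISPLAY
sudo su - oracle
xauth add <key copied from xauth list>      # re-authorize X forwarding for the oracle session
ps -ef | grep java                          # find the WebLogic PID and its JDK path
cd /opt/app/oracle/jdk1.7.0_55/bin          # example path; use the one from the ps output
./jvisualvm &

If a GUI is not an option at all, JDK 7's jcmd utility can trigger a full GC from the command line: jcmd <pid> GC.run.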

Categories: Development

AZORA – Arizona Oracle User Group new location

Bobby Durrett's DBA Blog - Mon, 2015-09-14 13:35

The Arizona Oracle User Group has moved tomorrow’s meeting to Oracle’s offices on Camelback road:

Meetup link with meeting details


Categories: DBA Blogs

College Scorecard: An example from UMUC on fundamental flaw in the data

Michael Feldstein - Mon, 2015-09-14 13:33

By Phil HillMore Posts (365)

Russ Poulin at WCET has a handy summary of the new College Scorecard produced by the Education Department (ED) and the White House. This is a “first read” given the scorecard’s Friday release, but it is quite valuable since Russ participated on an ED Data Panel related to the now-abandoned Ratings System, the precursor to the Scorecard. Russ describes the good, the “not so good”, and the “are you kidding me?” elements. One area in particular highlighted by Russ is the usage of the “dreaded first-time, full-time completion rates”:

I knew this would be the case, but it really irks me. Under current data collected by the Department’s IPEDS surveys, they define the group on which they base their “Graduation Rate” as: “Data are collected on the number of students entering the institution as full-time, first-time, degree/certificate-seeking undergraduate students in a particular year (cohort), by race/ethnicity and gender; the number completing their program within 150 percent of normal time to completion; the number that transfer to other institutions if transfer is part of the institution’s mission.”

This rate has long been a massive disservice to institutions focused on serving adults and to community colleges. Here are some example rates: Empire State: 28%, Western Governors University: 26%, University of Maryland University College: 4%, Charter Oak Colleges: no data, and Excelsior College: no data. The problem is that these numbers are based on incredibly small samples for these schools and do not reflect the progress of the bulk of the student body.

I won’t quote data for community colleges because they are all negatively impacted. They often serve a large number of students who are not “first-time” or define “success” in other ways.

I know that they are working on a fix to this problem in the future. Meanwhile, who atones for the damage this causes to these institutions’ reputations? This data display rewards colleges that shy away from non-traditional or disadvantaged students. Is this what we want?

Russ is not the only one noting this problem. Consider this analysis from Friday [emphasis added]:

The most commonly referenced completion rates are those reported to IPEDS and are included on the College Scorecard (measuring completion within 150 percent, or six years, for predominantly four-year colleges; and within four years for predominantly two- or less-than-two-year schools). However, they rely on a school’s population of full-time students who are enrolled in college for the first time. This is increasingly divergent from the profile of the typical college student, particularly at many two-year institutions and some four-year schools. For instance, Marylhurst University in Oregon, a four-year institution that has been recognized for serving adult students, reportedly had a 23 percent, six-year completion rate – namely because a very small subset of its students (just one percent) fall in the first-time, full-time cohort used to calculate completion rates. As with many schools that serve students who already have some college experience, this rate is, therefore, hardly representative of the school’s student body.

Who wrote this critical analysis, you ask? The Education Department in their own Policy Paper on the College Scorecard (p 17). Further down the page:

The Department has previously announced plans to work with colleges and universities to improve the graduation rates measured by the IPEDS system. Beginning in 2016, colleges will begin reporting completion rates for the other subsets of their students: first-time, part-time students; non-first-time, full-time students; and non-first-time, part-time students. In the meantime, by using data on federal financial aid recipients that the Department maintains in the National Student Loan Data System (NSLDS) for the purposes of distributing federal grants and loans, we constructed completion rates of all students receiving Title IV aid at each institution. For many institutions, Title IV completion rates are likely more representative of the student body than IPEDS completion rates – about 70 percent of all graduating postsecondary students receive federal Pell Grants and/or federal loans.

Given concerns about the quality of historical data, these NSLDS completion rates are provided on the technical page, rather than on the College Scorecard itself.

In other words, ED is fully aware of the problems of using IPEDS first-time full-time completion data, and they have plans to help improve the data, yet they chose to make fundamentally-flawed data a centerpiece of the College Scorecard.

Furthermore, the Policy Paper also addressed the need to understand transfer rates and not just graduation rates (p 18) [emphasis in original]:

The Administration also believes it is important that the College Scorecard address students who transfer to a higher degree program. Many students receive great value in attending a two-year institution first, and eventually transferring to a four-year college to obtain their bachelor’s degrees. In many cases, the transfer students do not formally complete the two-year program and so do not receive an associate degree prior to transferring. When done well, with articulation agreements that allow students to transfer their credits, this pathway can be an affordable and important way for students to receive four-year degrees. In particular, according to a recent report from the National Center of Education Statistics (NCES), students were best able to transfer credits when they moved from two-year to four-year institutions, compared with horizontal and reverse transfers.

To address this important issue, ED put the transfer data they have not on the consumer website but in the technical and data site (massive spreadsheets, data dictionaries, crosswalks all found here). Why did they not make this data easier to find? The answer is in a footnote:

We hope to be able to produce those figures for consumers after correcting for the same reporting limitations as exist for the completion rates.

To their credit, ED does address these limitations thoroughly in the Policy Paper and the Technical Paper, but very few people will read them. The end result is a consumer website that is quite misleading. Knowing all the problems of the data, this is what you see for UMUC.


Consider what prospective students will think on seeing this page: “UMUC sucks; I’m likely to never graduate.”

UMUC points out in this document that less than 2% of their student body are first-time full-time, and that the real results paint a different picture.

UMUC report

UMUC report grad

Consider the harm done to prospective UMUC students by seeing the flawed, over-simplified ED College Scorecard data, and consider the harm done to UMUC as they have to play defense and explain why prospects should see a different situation. Given the estimate that non-traditional students – those who would not be covered at all in IPEDS graduation rates – comprise more than 70% of all students, you can see how UMUC is not alone. Community colleges face an even bigger problem with the lack of transfer rate reporting.

And this is how the ED is going to help consumers make informed choices?

Count me as in agreement with Russ in his conclusions:

The site is a good beginning at addressing the needs of the traditional student leaving high school and seeking a college. It leaves much to be desired for the non-traditional students who now comprise a very large portion of the college-seeking population.

I applaud the consumer-focused vision and hope that feedback continues to improve the site. I actually think this could be a fantastic service. I just worry that in the haste to get it out that we did not wait until we had the data to do it correctly.

The post College Scorecard: An example from UMUC on fundamental flaw in the data appeared first on e-Literate.

Join Oracle Service Cloud at OpenWorld 2015 to Talk Trends, Best Practices, Product Strategy, and Gain Business Value

Linda Fishman Hoyle - Mon, 2015-09-14 13:21

A Guest Post by Director Christine Skalkotos, Product Strategy Programs, Oracle Service Cloud (pictured left)

Oracle Service Cloud @ CX Central is returning to San Francisco, October 25-29, 2015!

Oracle Service Cloud is once again excited to join the OpenWorld 2015 customer experience (CX) activities and conversations happening in Moscone West on the second floor. The team has an engaging lineup of more than 20 sessions and demonstrations available for service professionals. You will have the opportunity to discuss pressing industry trends, examine solution best practices, and gain insights into upcoming product strategy to help drive continual business value. You also will get to hear from leading service innovators such as HQ Air Reserve Personnel Center, LinkedIn, and SiriusXM.

Visit Service―CX Central @ OpenWorld

All Oracle Service Cloud sessions will be hosted in Rooms 2006 and 2016 in Moscone West on the second floor. Explore the Oracle Service Cloud demo zone which is also located on the second floor. Details, including all session dates, times, and room numbers, are published at  Service―CX Central @ OpenWorld for your convenience.

What’s New and Different?

  • Sessions that explain the roadmaps for Oracle Service Cloud
  • Sessions showing how Oracle Service Cloud integrates with existing applications
  • Sessions led by partners sharing the latest insights on recent implementations

Guest Customers and Partner Appearances include:

Academy Sports+Outdoors, Dish Network, HQ Air Reserve Personnel Center, Kohls, KP OnCall, LinkedIn, Mazda, Overhead Door Corporation, Pella, Riverbed Technology, SiriusXM, SoftClouds LLC, TCS, and more!

Start the Experience with the Service Cloud General Session!

Oracle’s CIO, Mark Sunday, joins David Vap, GVP Product Development for Oracle Service Cloud, to ignite the CX-Service track at 1:00 – 2:15 p.m. on Monday, October 26 in Room 2006. Walk through Oracle’s product strategy and market trends impacting service professionals, while hearing best practices from innovative brands, like Academy Sports+Outdoors, Mazda, and our Oracle Service Cloud Partner Sponsor TCS, that are meeting today’s customer experience challenges. [GEN9837]

Roadmap and Product Conference Sessions

Oracle Service Cloud @ CX Central will be hosting more than 20 conference sessions. These sessions begin at 2:45 p.m. on Monday, October 26. These sessions are led by Oracle Service Cloud product management team members and highlight customer and partner case studies. Many sessions are aligned with our strategic engagements model called “Roadmap to Modern Customer Service." Here is a listing of sessions:

  • Modern Service for a Changing World: a customer panel featuring HQ Air Reserve Personnel Center, Kohls, LinkedIn, and SiriusXM [CON10020]
  • The Future of Customer Service. Are You Ready? [CON9884]
  • Get Ahead: Strategic Roadmap to Modern Customer Service [CON10325]
  • Tailoring the Agent Experience [HOL10509]
  • Getting the Most Out of Web Self-Service [HOL10510]
  • “Get Going” Sessions: Leading with Connected Customers
    • Get Going: Tear Down This Wall: How Web Self-Service and Communities Are Combining [CON9839]
    • Get Going: Improving Service Engagements with Chat and Cobrowse [CON9891]
    • Get Going: Proven Techniques for Right Channeling within an Online Service [CON9892]
    • Get Going: Make Your Customer Service a Differentiator with Policy Automation (featuring KP OnCall) [CON9896]
  • “Get Better” Sessions: Recognized for Service Quality & Innovation
    • Get Better: Oracle Service Cloud Customer Engagement Center Overview and Roadmap [CON9924]
    • Get Better: Cut Through the Complexity: Delivering Customer Service Excellence in the Engagement Center (featuring Pella and SiriusXM) [CON9925]
    • Get Better: Oracle Service Cloud Knowledge Management Overview and Roadmap [CON9836]
    • Get Better: Knowledge at the Heart of Service Makes Customer Service Hum (featuring Mazda and SoftClouds LLC) [CON9922]
    • Get Better: Raise the Bar: Empower Agents and Enable Change with Knowledge Centered Support (featuring Riverbed Technology) [CON9923]
    • Get Better: Accelerating Oracle Service Cloud and Siebel Integration [CON9887]
    • Get Better: Accelerating Oracle Service Cloud and Oracle E-Business Suite Integration (featuring Overhead Door Corporation) [CON9889]
  • “Get Ahead” Sessions: Differentiated and Leading with Personalized Service
    • Get Ahead: Field Service in the Age of Uber: Vision and Roadmap [CON9899]
    • Get Ahead: Transforming Field Operations: DISH Network and Oracle Field Service Cloud (featuring Dish Network) [CON9898]
    • Get Ahead: Oracle Service Cloud Integration Strategy: The Spectrum of Integrations [CON9843]
    • Get Ahead: Step into the Engine Room: Discover the Platform That Powers Modern Service  [CON9897]

Service Demo Zone

Benefit by engaging with Oracle Service Cloud product demonstrations led by members of the Oracle Service Cloud product management and sales consulting teams.

  • Get Going with Digital Service
  • Get Going with Policy Automation
  • Get Better with Contact Center
  • Get Better with Knowledge Management
  • Get Better with Siebel & EBS Integration
  • Get Ahead with Field Service Cloud
  • Get Ahead with Personalized Service
  • Get Ahead with Oracle Cloud Platform

Customer Events

Finally, a preview of Oracle Service Cloud at OpenWorld would not be complete without a mention of customer appreciation events:

  • Monday, October 26: Oracle Service Cloud Customer Appreciation Reception at Oracle OpenWorld, by invitation only―a chance to network with Oracle Service Cloud product management and peers
  • Tuesday, October 27: CX Central customer appreciation event; planning is in progress!
  • Wednesday, October 28: Oracle Appreciation Event at Treasure Island!

At a Glance

Visit Oracle OpenWorld for full details on speakers, conference sessions, exhibits, and entertainment!

How 1and1 failed me

Sean Hull - Mon, 2015-09-14 11:13
I manage this blog myself. Not just the content, but also the technology it runs on. The systems & servers are from a hosting company called 1and1, and recently I had some serious problems. Join 31,000 others and follow Sean Hull on twitter @hullsean. The publishing platform, WordPress, was a few versions out of date. … Continue reading How 1and1 failed me →

Report Carousel in APEX 5 UT

Dimitri Gielis - Mon, 2015-09-14 09:45
The Universal Theme in APEX 5.0 is full of nice things.
Did you already see the Carousel template for regions? When you add a region with a couple of sub-regions to your page and give the parent region the "Carousel Container" template, it turns the regions into a carousel, so you can flip between them.
I was asked to provide the same functionality, but on dynamic content. So I decided to build a report template that is shown as a carousel. Here's the result:

You can see it in action at
I really like carousels :)
Here's how you can have this report template in your app:

1) Create a new Report Template:

Make sure to select Named Column for the Template Type:

Add the following HTML into the template at the given points:

That's it for the template.

Now you can create a new report on your page and give it the template you just created.
Here's the SQL Statement I used:

select PRODUCT_ID          as id,
       PRODUCT_NAME        as title,
       PRODUCT_DESCRIPTION as description,
       dbms_lob.getlength(PRODUCT_IMAGE) as image,
       'no-icon'           as icon,
       null                as link_url
  from demo_product_info -- table name is an assumption (APEX sample dataset); the FROM clause was lost from the original post

Note 1: you have to use the same column aliases as you defined in the template.
Note 2: make sure you keep the real ID of your image in the query too, as otherwise you'll get an error (no data found).

To make the carousel a bit nicer I added the following CSS to the page, but you could add it to your own CSS file or to the custom CSS section of Theme Roller.

Note: the carousel can work with an icon or an image. If you want to show an icon you can use, for example, "fa-edit fa-4x". When using an image, define the icon as no-icon.
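For instance, to show an icon instead of an image, only the icon column of the query needs to change; a sketch (table name assumed, as in the query above):

select PRODUCT_ID          as id,
       PRODUCT_NAME        as title,
       PRODUCT_DESCRIPTION as description,
       null                as image,
       'fa-edit fa-4x'     as icon,
       null                as link_url
  from demo_product_info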

Eager for more Universal Theme tips and tricks? Check out our APEX 5.0 UI training in Birmingham on December 10th. :)

For easier copy/paste into your template, you'll find the source below:

*** Before Rows ***
<div class="t-Region t-Region--carousel t-Region--showCarouselControls t-Region--hiddenOverflow" id="R1" role="group" aria-labelledby="R1_heading">
<div class="t-Region-bodyWrap">
<div class="t-Region-body">
<div class="t-Region-carouselRegions">

*** Column Template ***
<div data-label="#TITLE#" id="SR_R#ID#">
<a href="#LINK_URL#">
<div class="t-HeroRegion " id="R#ID#">
<div class="t-HeroRegion-wrap">
<div class="t-HeroRegion-col t-HeroRegion-col--left">
<span class="t-HeroRegion-icon t-Icon #ICON#"></span>
</div>
<div class="t-HeroRegion-col t-HeroRegion-col--content">
<h2 class="t-HeroRegion-title">#TITLE#</h2>
</div>
<div class="t-HeroRegion-col t-HeroRegion-col--right"><div class="t-HeroRegion-form"></div><div class="t-HeroRegion-buttons"></div></div>
</div>
</div>
</a>
</div>

*** After Rows ***
</div>
</div>
</div>
</div>
*** Inline CSS ***
.t-HeroRegion-col.t-HeroRegion-col--left {
  /* … */
}
.t-HeroRegion {
  border-bottom: 0px solid #CCC;
}
.t-Region--carousel {
  border: 1px solid #d6dfe6 !important;
}
.t-HeroRegion-col--left img {
  max-height: 90px;
  max-width: 130px;
}
.no-icon {
  /* … */
}
Categories: Development

Discovery and Monitor Oracle Database Appliance (#ODA) using #EM12C

DBASolved - Mon, 2015-09-14 09:40

A few months ago, I heard that Oracle was releasing a plug-in for the Oracle Database Appliance (ODA, pronounced "Oh Dah"). At first I couldn't find anything on this plug-in; then I found it in the Self-Update for Plug-ins (Extensibility -> Self Update -> Plug-ins).

After finding the plug-in, it needed to be deployed to the Oracle Management Server (OMS). Once deployed, it can be used to monitor the ODA; however, this plug-in is different from plug-ins like the Exadata one, where you have a wizard to configure monitoring of the hardware associated with the engineered system. To use this plug-in, the two servers in the ODA have to have EM agents installed on them. Here is a list of articles, by some great guys plus myself, on installing agents in EM12c (if you don't know how to do that already).

Tim Hall
Gavin Soorma
Gokhan Atil
Javier Ruiz

Once the agents are installed, the plug-in has to be added to each agent. This is achieved by pushing the plug-in from the same screen where it was deployed to the OMS; only this time, it is deployed to the newly added agents (Extensibility -> Plug-Ins -> Deploy On -> Management Agent).

Once the plug-in is deployed to the new targets for the ODA servers, the ODA can be added to OEM.

To add the ODA with the plug-in, use the Add Targets Manually process: adding the ODA is done through the Add Targets Using Guided Process, a wizard that walks you through the steps.


When starting the discovery, OEM provides a Discover Now button, which initiates the wizard for discovering the ODA components such as the ILOM and the servers.

When the wizard starts, it asks for an agent URL. This is the agent installed on the first node of the ODA. Then provide the host root login, either stored in Named Credentials or as a new login.

The next step of the wizard provides a list of all the discovered targets in the ODA.

On the credentials screen, the wizard asks for the root password for both the host and the ILOM. If the passwords are the same across the ODA, there is an option to use the same password for both items.

The Tag Cloud step is interesting. You don't really have to put anything here; however, you can tag the ODA to help identify what is being added. There is a Create Tag button at the bottom if you want to create a tag. (I didn't create one, so I didn't include a picture in this post.)

Finally, the review step shows what is going to be added to OEM as the ODA. Once the targets have been promoted successfully, you will see a green flag in the Discovery Status block. At this point the ODA has been added to OEM.

Now that the ODA has been added to OEM, it can be viewed from Targets -> All Targets -> Target Type -> Engineered Systems -> Oracle Database Appliance System. From there, OEM takes you to the home page for the ODA.

The ODA home page provides an overview of everything going on with the ODA. Click around and have some fun reviewing what is happening with the ODA being monitored.


Filed under: OEM
Categories: DBA Blogs

Agilent Discusses the Path to Digital Experience Success

WebCenter Team - Mon, 2015-09-14 07:34

Path to Digital Experience Success Webcast

The Path to Digital Experience Success: Innovative Strategies for Maximizing Customer Engagement & Marketing Performance

Becoming a digital business is imperative for organizations to deliver the next wave of revenue growth, service excellence and brand loyalty. And the stakes are high — 94% of customers discontinue communications because of irrelevant messages and experiences.

Join this webcast to learn how Agilent Technologies has transformed audience engagement by connecting digital experiences with business outcomes. Learn how to:
  • Develop an end-to-end customer engagement strategy from an “outside-in” point of view
  • Deliver omni-channel experiences that are seamless, tailored and consistent across all channels, audiences and devices
  • Extend reach and orchestrate engagement across the customer journey
  • Enable the partnership between IT and the business to quickly align objectives
Register now for this webcast.
Live webcast: September 16, 2015, 10 AM PT / 1 PM ET

Speakers:
  • Michael Conant, Senior Product and Program Manager, Agilent Technologies
  • Chris Preston, Sr. Director, Digital Experience Strategy, Oracle


SoapUI: increase memory settings

Darwin IT - Mon, 2015-09-14 06:35
I have some testcases that exercise a set of OSB services that process documents in a content server.
Using a customization file I changed the endpoints of the content server's webservices to mock-services in SoapUI.

In these testcases I kick off the OSB services, and in successive MockResponse test steps I try to catch the service requests from OSB. This enables me to set assertions on the messages that OSB sends out to the content server, and thus validate the messages built in the OSB proxies.

I found that a testcase can run correctly once, but running it a second time, or running a second testcase successively, may fail.

Now it is quite important to have each successive MockResponse test step start at the start of the previous test step.

At the failing test step, OSB apparently sends a request that isn't caught by SoapUI, or SoapUI responds with an HTTP 500 (Internal Server Error).

Using JVisualVM, I monitored the heap of SoapUI's JVM, and I found that right around the start/run of the failing test step there is an increase in heap. Since I had found that setting the Start Step property of each MockResponse step is important, I figured that timing is everything: an increase of the heap also consumes time, and is presumably triggered by a major garbage collect, which halts the application for a brief moment.

So I wanted to increase the amount of heap. Since SoapUI is started with an .exe file (under Windows, that is), the JVM properties (SoapUI is a Java application) are stored in a file. Under Windows it can be found in "c:\Program Files\SmartBear\SoapUI-5.1.3\bin", depending on the version of SoapUI.
The file is called "SoapUI-5.1.3.vmoptions" and has the following contents:

-Dsoapui.home=C:\Program Files\SmartBear\SoapUI-5.1.3/bin
-Dsoapui.ext.libraries=C:\Program Files\SmartBear\SoapUI-5.1.3/bin/ext
-Dsoapui.ext.listeners=C:\Program Files\SmartBear\SoapUI-5.1.3/bin/listeners
-Dsoapui.ext.actions=C:\Program Files\SmartBear\SoapUI-5.1.3/bin/actions
-Dwsi.dir=C:\Program Files\SmartBear\SoapUI-5.1.3/wsi-test-tools
-Djava.library.path=C:\Program Files\SmartBear\SoapUI-5.1.3/bin

You'll need to change the file's security properties to enable yourself to edit and save it. Then add or change the -Xms and -Xmx properties according to your needs. The defaults are quite "cautious": -Xms200m and -Xmx1000m.
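For example, to give SoapUI a 512 MB initial and 2 GB maximum heap, the following lines would be added to (or changed in) the .vmoptions file; the sizes are just an illustration, so pick values that fit your machine and tests:

-Xms512m
-Xmx2048m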

Using JVisualVM you can see that after a restart the new memory settings are picked up; in my case no increase in heap was needed during the tests.

Do you ‘Glow in the Dark’?

Duncan Davies - Mon, 2015-09-14 06:00

I’m in awe of many people. I’m lucky to have met and worked with some truly smart and outstanding individuals. (I just wish I wasn’t so reserved and was able to tell them!)

If I was asked to pick a handful of the most talented people however, Seth Godin would undoubtedly be up there.

sethI’ve not met Seth in real life (although I had a near miss at OpenWorld 5 or 6 years back) but I’ve followed his work for a decade at least. He writes daily posts on his blog – most of them succinct and quick to read – which are always really insightful.

My all-time favourite post from Seth was from just the other day. I’m reposting it – not because I’m stealing his work, but because it increases the chances of readers of this blog seeing it – and going to his site and subscribing, adding it to your RSS reader etc.

Glow in the dark

Some people are able to reflect the light that lands on them, to take directions or assets or energy and focus it where it needs to be focused. This is a really valuable skill.

Even more valuable, though, is the person who glows in the dark. Not reflecting energy, but creating it. Not redirecting urgencies but generating them. The glow in the dark colleague is able to restart momentum, even when everyone else is ready to give up.

At the other end of the spectrum (ahem) is the black hole. All the energy and all the urgency merely disappears.

Your glow in the dark colleague knows that recharging is eventually necessary, but for now, it’s okay that there’s not a lot of light. The glow is enough.

I wish I was able to write half as beautifully as this. Please go to his site and subscribe. I’m sure we can all identify some people who can reflect the light, some who are occasionally black holes, and – if you’re lucky – have a glow in the dark colleague. If you need further convincing of Seth’s genius, the Interim Strategy will probably resonate too.

Oracle Access Manager: java.lang.OutOfMemoryError

Online Apps DBA - Mon, 2015-09-14 02:03


This post relates to an Oracle Admin Server issue from our Oracle Access Manager Training (next batch starts on 20th Sept, 2015), where we also cover High Availability & Disaster Recovery; agenda here.

One of the trainees from our previous batch encountered an issue while accessing the oamconsole URL: http://<Hostname>:<Admin Port>/oamconsole.

To find the root cause, check the Admin Server log file located at $DOMAIN_HOME/servers/AdminServer/logs; in our case, it was showing the error messages below:

<Error> <HTTP> <hostname> <AdminServer> <[ACTIVE] ExecuteThread: '14' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <0000wJ81aCT7y0G6yzYfMG000024000F0n> <1424169946099> <BEA-101020> <[ServletContext@13260635[app:em module:/em path:/em spec-version:2.5]] Servlet failed with Exception
java.lang.OutOfMemoryError: Java heap space
  at java.lang.reflect.Array.newArray(Native Method)
  at java.lang.reflect.Array.newInstance(
  at oracle.jdbc.driver.BufferCache.get(
  at oracle.jdbc.driver.PhysicalConnection.getCharBuffer(
  at oracle.jdbc.driver.OracleStatement.prepareAccessors(
  at oracle.jdbc.driver.T4CTTIdcb.receiveCommon(
  at oracle.jdbc.driver.T4CTTIdcb.receive(
  at oracle.jdbc.driver.T4C8Oall.readDCB(
  at oracle.jdbc.driver.T4CTTIfun.receive(
  at oracle.jdbc.driver.T4CTTIfun.doRPC(
  at oracle.jdbc.driver.T4C8Oall.doOALL(
  at oracle.jdbc.driver.T4CPreparedStatement.doOall8(
  at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPrepared

Root Cause:

The Admin Console is deployed on the Admin Server and is accessible via the Admin port at /oamconsole (http://<Hostname>:<Admin Port>/oamconsole). The JVM heap size for the Admin Server was only 500 MB at the time; it should be between 1 GB and 2 GB, because the OAM console runs on top of the Admin Server.


Temporary Fix:

Bounce the Admin server.

Permanent Fix:

1. Change the Admin Server JVM settings as shown below in the script (typically setDomainEnv.sh) located under $DOMAIN_HOME/bin.

if [ "${SERVER_NAME}" == "AdminServer" ] ; then
      MEM_ARGS="-Xms2048m -Xmx2048m -XX:PermSize=128m -XX:MaxPermSize=512m"   # MaxPermSize value was truncated in the original; 512m is a common choice
      export MEM_ARGS
fi

2. Bounce the Admin server and you should be able to access the OAM console.

Note: You can see the modified JVM settings in the Admin Server log.
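To double-check from the OS that the new settings were picked up after the bounce, something along these lines can be used (a quick sketch; the .out location assumes the default AdminServer layout):

$ ps -ef | grep AdminServer | grep Xmx
$ grep "Starting WLS with line" $DOMAIN_HOME/servers/AdminServer/logs/AdminServer.out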

If you want to learn more issues like above or wish to discuss challenges you are hitting in Oracle Access Manager Implementation, register for our Oracle Access Manager Training.

We are so confident in the quality and value of our trainings that we provide a 100% money-back guarantee: in the unlikely case that you are not happy after two sessions, just drop us a mail before the third session and we'll refund the FULL amount.

Did you subscribe to our YouTube Channel (293 already subscribed)?

The post Oracle Access Manager: java.lang.OutOfMemoryError appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

Oracle Java Cloud Service now available in EMEA datacenters!

Great news for Oracle customers and partners who have the requirement to host their cloud based solutions within the EU - Java Cloud Service is now available in the Oracle EMEA datacenters! This...

We share our skills to maximize your revenue!
Categories: DBA Blogs

DataStax and Cassandra update

DBMS2 - Mon, 2015-09-14 00:02

MongoDB isn’t the only company I reached out to recently for an update. Another is DataStax. I chatted mainly with Patrick McFadin, somebody with whom I’ve had strong consulting relationships at a user and vendor both. But Rachel Pedreschi contributed the marvelous phrase “twinkling dashboard”.

It seems fair to say that in most cases:

  • Cassandra is adopted for operational applications, specifically ones with requirements for extreme uptime and/or extreme write speed. (Of course, it should also be the case that NoSQL data structures are a good fit.)
  • Spark, including SparkSQL, and Solr are seen primarily as ways to navigate or analyze the resulting data.

Those generalities, in my opinion, make good technical sense. Even so, there are some edge cases or counterexamples, such as:

  • DataStax trumpets British Gas’ plans to collect a lot of sensor data and immediately offer it up for analysis.*
  • Safeway uses Cassandra for a mobile part of its loyalty program, scoring customers and pushing coupons at them.
  • A large title insurance company uses Cassandra-plus-Solr to manage a whole lot of documents.

*And so a gas company is doing lightweight analysis on boiler temperatures, which it regards as hot data. :)

While most of the specifics are different, I’d say similar things about MongoDB, Cassandra, or any other NoSQL DBMS that comes to mind:

  • You can get any kind of data into them very fast; indeed, that’s a central part of what they were designed for.
  • In the general case, getting it back out for low-latency analytics is problematic …
  • … but there’s an increasing list of exceptions.

For DataStax Enterprise, exceptions start:

  • Formally, you can do almost anything in at least one of Solr or Spark/SparkSQL. So if volumes are low enough, you’re fine. In particular, Spark offers the potential to do many things at in-memory speeds.
  • Between Spark, the new functions, and general scripting, there are several ways to do low-latency aggregations. This can lead to “twinkling dashboards”.*
  • DataStax is alert to the need to stream data into Cassandra.
    • That’s central to the NoSQL expectation of ingesting internet data very quickly.
    • Kafka, Storm and Spark Streaming all seem to be in the mix.
  • Solr over Cassandra has a searchable RAM buffer, which can give the effect of real-time text indexing within a second or so of ingest.

*As much as I love the “twinkling dashboard” term — it reminds me of my stock analyst days — it does raise some concerns. In many use cases, human real-time BI should be closely integrated with the more historical kind.

DataStax Enterprise:

  • Is based on Cassandra 2.1.
  • Will probably never include Cassandra 2.2, waiting instead for …
  • … Cassandra 3.0, which will feature a storage engine rewrite …
  • … and will surely include Cassandra 2.2 features of note.

This connects to what I said previously in that Cassandra 2.2 adds some analytic features, specifically in the area of user-defined functions. Notes on Cassandra 2.2 UDFs include (a small CQL sketch follows the list):

  • These are functions — not libraries, a programming language, or anything like that.
  • The “user-defined” moniker notwithstanding, the capability has been used to implement COUNT, SUM, AVG, MAX and so on.
  • You are meant to run user-defined functions on data in a single Cassandra partition; run them across partitions at your own performance risk.
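To make that concrete, here is a minimal sketch of a Cassandra 2.2-style scalar UDF in CQL (the function and table names are illustrative, and user-defined functions must first be enabled via enable_user_defined_functions in cassandra.yaml):

CREATE OR REPLACE FUNCTION celsius_to_fahrenheit (celsius double)
    RETURNS NULL ON NULL INPUT
    RETURNS double
    LANGUAGE java
    AS 'return celsius * 1.8 + 32;';

-- usable like a built-in, ideally against a single partition:
SELECT sensor_id, celsius_to_fahrenheit(reading) FROM readings WHERE sensor_id = 42;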

And finally, some general tidbits:

  • A while ago, Apple said it had >75,000 Cassandra nodes. The figure is surely bigger now.
  • There are at least several other petabyte range Cassandra installations, and several more half-petabyte ones.
  • Netflix is not one of those. Instead, it has many 10s of smaller Cassandra clusters.
  • There are Cassandra users with >1 million reads+writes per second.

Finally a couple of random notes:

  • One of the text search use cases for Solr/Cassandra is to — in one query — get at information that originated in multiple places, e.g. for reasons of time period or geography. (I hear this about text search across lots of database technologies, relational and non-relational alike.)
  • As big a change as Cassandra 3.0 will be, it will not require that you take down your applications for an upgrade. That hasn’t been necessary since Cassandra 0.7.
Categories: Other

ERP : Definition,Terminology,Acronyms

OracleApps Epicenter - Sun, 2015-09-13 21:55
Enterprise Resource Planning aka ERP – Definition: “Enterprise Resource Planning (ERP) is defined as business strategies and enabling software that integrate manufacturing, financial and distribution functions to dynamically balance and optimize enterprise resources.” • ERP integrates all departments and functions across an enterprise onto a single computing system that can serve all those different departments' particular needs. • […]
Categories: APPS Blogs

Scaling Export and Import Tables Residing in Different Schemas 10gR2

Michael Dinh - Sun, 2015-09-13 08:40

Our team was tasked with unpartitioning all partitioned tables.

Import in a later release has PARTITION_OPTIONS=DEPARTITION, but we are on Release 10.2.

The first step was to find all schemas with partition tables.

select owner, table_name, partitioning_type, subpartitioning_type, partition_count, status 
from dba_part_tables where owner not in ('SYS','SYSTEM') 
and (owner,table_name) not in (
 select owner, mview_name table_name 
 from dba_mviews 
 where owner not in ('SYS','SYSTEM') 
)
order by 1,2;

NOTE: The SQL is not 100% foolproof, as we ran into a scenario where a table and a materialized view had the same name.

Exporting the metadata for tables from multiple schemas failed:

UDE-00012: table mode exports only allow objects from one schema

Export/Import DataPump Parameter TABLES – How to Export and Import Tables Residing in Different Schemas (Doc ID 277905.1)

Solution 1: Use a combination of the SCHEMAS and INCLUDE parameters.
File: expdp_tabs.par 
DIRECTORY = my_dir  
DUMPFILE  = expdp_tabs.dmp  
LOGFILE   = expdp_tabs.log  
SCHEMAS   = scott,hr,oe   
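This parameter file is then run like any other Data Pump job (the directory object and file names come from the file above):

$ expdp parfile=expdp_tabs.par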

Great solution but not scalable.

Some digging turned up: How to export tables from multiple schemas with Oracle Data Pump in Oracle 10g and 11g databases

I will let you read the post; however, the following was the key for me.

[oracle@srvdb01]:/transfer/uural/datapumpdemo > expdp '"/ as sysdba"' directory=UURAL_DATAPUMPDEMO 
dumpfile=u0001-u0002_tables logfile=u0001-u0002_tables schemas=U0001,U0002 
INCLUDE=TABLE:\"IN \(SELECT table_name FROM u0001.expdp_tables\)\"                                            

The database has 22 partition tables.

*** List partition tables, excluding MVIEW ***
22 rows selected.

There are 11 schemas with partition tables.

*** List partition tables count by owner, excluding MVIEW ***
11 rows selected.

Partition table SYSTEM_QUEUE resides in 7 different schemas and ACCOUNT_OBJECT_TRANSACTIONS resides in 2 different schemas.

*** List same table name across owner, excluding MVIEW ***
------------------------------ ----------

Create a table to use for the export.

create table OWNER01.expdp_tables (table_name varchar2(30));

insert into OWNER01.expdp_tables
select DISTINCT table_name
from dba_part_tables
where owner not in ('SYS','SYSTEM')
and (owner,table_name) not in (
  select owner, mview_name table_name from dba_mviews where owner not in ('SYS','SYSTEM')
);

Create the export parameter file.

$ cat expdp_schema_TEST.par 
userid="/ as sysdba"
INCLUDE=TABLE:"IN (SELECT table_name FROM OWNER01.expdp_tables)"

Perform export.

$ expdp parfile=expdp_schema_TEST.par 

Export: Release - 64bit Production on Thursday, 10 September, 2015 14:04:34

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYS_EXPORT_SCHEMA_01":  parfile=expdp_schema_TEST.par 
Processing object type SCHEMA_EXPORT/TABLE/TABLE
     Completed 22 TABLE objects in 10 seconds
     Completed 116 OBJECT_GRANT objects in 0 seconds
     Completed 85 INDEX objects in 20 seconds
     Completed 31 CONSTRAINT objects in 5 seconds
     Completed 85 INDEX_STATISTICS objects in 1 seconds
     Completed 2 REF_CONSTRAINT objects in 0 seconds
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
     Completed 29 TRIGGER objects in 30 seconds
     Completed 22 TABLE_STATISTICS objects in 2 seconds
     Completed 1 POST_TABLE_ACTION objects in 0 seconds
Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:
Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at 14:05:46

Create the import parameter file.

$ cat impdp_sqlfile_TEST.par 
userid="/ as sysdba"
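The parameter file above appears truncated; for a SQLFILE-mode import (which, as the job output below shows, only writes DDL to a script), it presumably also contained entries along these lines (the directory and dump file names are assumptions; create_TEST.sql matches the grep at the end of this post):

DIRECTORY = my_dir
DUMPFILE  = expdp_schema_TEST.dmp
SQLFILE   = create_TEST.sql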

Perform import.

$ impdp parfile=impdp_sqlfile_TEST.par 

Import: Release - 64bit Production on Thursday, 10 September, 2015 14:07:36

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYS"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_SQL_FILE_FULL_01":  parfile=impdp_sqlfile_TEST.par 
Processing object type SCHEMA_EXPORT/TABLE/TABLE
     Completed 22 TABLE objects in 2 seconds
     Completed 116 OBJECT_GRANT objects in 0 seconds
     Completed 85 INDEX objects in 8 seconds
     Completed 31 CONSTRAINT objects in 0 seconds
     Completed 2 REF_CONSTRAINT objects in 0 seconds
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
     Completed 29 TRIGGER objects in 0 seconds
     Completed 1 POST_TABLE_ACTION objects in 0 seconds
Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at 14:07:48


$ grep -c "CREATE TABLE" /u02/oracle/exp/create_TEST.sql|sort


Hemant K Chitale - Sun, 2015-09-13 05:49
There are two different VALIDATE commands for Backups. (These are different from RESTORE VALIDATE, which I'd blogged about earlier, in 10.2, here.)

The first is BACKUP VALIDATE  which is useful to validate Datafiles to check for corruption.

The second is VALIDATE, which can be used to check BackupSets (although it, too, can be run against the DATABASE).

Here I use the first form to check datafiles without actually creating a BackupSet:
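The RMAN command itself is missing from the capture that follows; given the CHECK LOGICAL discussion at the end of this post, it was presumably:

RMAN> backup validate check logical database;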


Starting backup at 13-SEP-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=55 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00002 name=/home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
input datafile file number=00001 name=/home/oracle/app/oracle/oradata/orcl/system01.dbf
input datafile file number=00004 name=/home/oracle/app/oracle/oradata/orcl/users01.dbf
input datafile file number=00003 name=/home/oracle/app/oracle/oradata/orcl/undotbs01.dbf
input datafile file number=00006 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bwk35c9q_.dbf
input datafile file number=00007 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bwk35cd7_.dbf
input datafile file number=00008 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bwk35cgr_.dbf
input datafile file number=00009 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bwk35cht_.dbf
input datafile file number=00011 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bwk35cj2_.dbf
input datafile file number=00005 name=/home/oracle/app/oracle/oradata/orcl/example01.dbf
input datafile file number=00010 name=/home/oracle/app/oracle/oradata/orcl/APEX_2614203650434107.dbf
channel ORA_DISK_1: backup set complete, elapsed time: 00:26:18
List of Datafiles
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
1 OK 0 14288 107649 14218763
File Name: /home/oracle/app/oracle/oradata/orcl/system01.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 73392
Index 0 16290
Other 0 3678

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
2 OK 0 15266 161256 14218883
File Name: /home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 40232
Index 0 22741
Other 0 82913

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
3 OK 0 1 21769 14217218
File Name: /home/oracle/app/oracle/oradata/orcl/undotbs01.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 0
Index 0 0
Other 0 21759

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
4 OK 0 5349 35154 14211780
File Name: /home/oracle/app/oracle/oradata/orcl/users01.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 10430
Index 0 2188
Other 0 17073

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
5 OK 0 1580 10500 5843748
File Name: /home/oracle/app/oracle/oradata/orcl/example01.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 3947
Index 0 1110
Other 0 3859

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
6 OK 0 2 12801 14126777
File Name: /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bwk35c9q_.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 6730
Index 0 0
Other 0 6068

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
7 OK 0 2 12801 14085691
File Name: /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bwk35cd7_.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 6738
Index 0 0
Other 0 6060

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
8 OK 0 2 12801 14126777
File Name: /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bwk35cgr_.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 6738
Index 0 0
Other 0 6060

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
9 OK 0 2 12801 14085691
File Name: /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bwk35cht_.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 6739
Index 0 0
Other 0 6059

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
10 OK 0 277 896 13902238
File Name: /home/oracle/app/oracle/oradata/orcl/APEX_2614203650434107.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 154
Index 0 92
Other 0 373

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
11 OK 0 371 12801 14126777
File Name: /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bwk35cj2_.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data 0 7263
Index 0 0
Other 0 5166

channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:06
List of Control File and SPFILE
File Type Status Blocks Failing Blocks Examined
------------ ------ -------------- ---------------
Control File OK 0 628
Finished backup at 13-SEP-15

RMAN> list backup completed after "trunc(sysdate)";

specification does not match any backup in the repository

RMAN> exit

Recovery Manager complete.
[oracle@localhost ~]$ sqlplus

SQL*Plus: Release Production on Sun Sep 13 19:32:43 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select * from v$database_block_corruption;

no rows selected


The BACKUP VALIDATE command doesn't actually create a BackupSet. It simulates running a backup, but does the additional job of checking the database blocks for corruption. The CHECK LOGICAL clause additionally checks for logical corruption. The SQL query on V$DATABASE_BLOCK_CORRUPTION can be used after RMAN completes execution, as it would be populated with information about any blocks found corrupt.

On the other hand, the VALIDATE command can be used to check BackupSets.

RMAN> validate backupset 278;

Starting validate at 13-SEP-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=67 device type=DISK
channel ORA_DISK_1: starting validation of datafile backup set
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_09_06/o1_mf_nnndf_TAG20150906T212547_byrhpx32_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_09_06/o1_mf_nnndf_TAG20150906T212547_byrhpx32_.bkp tag=TAG20150906T212547
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:06:57
Finished validate at 13-SEP-15


SYS>select * from v$backup_corruption;

no rows selected


Just as BACKUP VALIDATE doesn't actually create a BackupSet, VALIDATE BACKUPSET doesn't actually restore a BackupSet (one or more BackupPieces from it); thus the message "restored backup piece 1" is misleading.


Categories: DBA Blogs

YouTube Cameos : My Channel Needs You!

Tim Hall - Sun, 2015-09-13 01:40

I’ve spent the last couple of months uploading videos to my YouTube channel.

At the start of each technical video, I introduce myself by saying something like, “Hi. It’s Tim from oracle-base”, and I use a video clip of someone from the Oracle community saying, “.com”,  to finish off the website name. I then put links to their blog, twitter, website etc in the description box. It’s just something fun and stupid to lighten the tone of the videos and to give a shout out to people in the community. :)

If you take a look at the clips, you’ll see they vary a lot. Some are simple and straight, just filmed on a webcam or phone. Others are a little more elaborate, like the one filmed under water. Some come with some funny outtakes I put at the end of the video. :)

Here’s a montage of all the clips I’ve used so far.

If you want to be included in one of the videos, send a clip of yourself saying “.com” to me (tim (at) oracle-base.com) along with your blog and twitter URLs and I’ll include it in a future clip.

I don’t mind you using some casual company branding, like wearing the t-shirt, but this is really about community, so don’t send me a McDonalds advert! :) Any user group clips, like the one I got from Auckland, are great too.

I try to use them on a first-come-first-served basis, so get in early before I start gathering clips at OpenWorld. :)



Update: Whoops! I missed out my crazy uncle Martin Widlake. You can see his clip here.

YouTube Cameos : My Channel Needs You! was first posted on September 13, 2015 at 8:40 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Bigfoot vs UFO analytics

Nilesh Jethwa - Sat, 2015-09-12 20:29

Bigfoot and UFOs remain elusive but know how to make the news from time to time.

Read more at: