Feed aggregator

New book

alt.oracle - Sat, 2013-03-16 14:46
Just a quick announcement that my second book is available from Packt Publishing.  OCA Oracle Database 11g: Database Administration I: A Real-World Certification Guide (again with the long title) is designed to be a different kind of certification guide.  Generally, it seems to me that publishers of Oracle certification guides assume that the only people who want to become certified are those with a certain level of experience, like a working DBA with several years on the job.  So these guides make a lot of assumptions about the reader, and end up being collections of facts for the test rather than a cohesive learning experience.  My book targets a different kind of reader.  I've observed in the last several years that many people from non-database backgrounds are setting out to get their OCA or OCP certifications.  These folks don't necessarily bring a lot of knowledge or experience to the attempt, just a strong drive to learn.  My two books are designed to start from the beginning, then take the reader through all of the subjects needed for the certification test.  They're designed to be read straight through, completing the examples along the way.  In a sense, I'm attempting to recreate the experience of one of my Oracle classes in book form.




You'll find the book at these fine sellers of books.

Packt Publishing
Amazon
Barnes and Noble


Categories: DBA Blogs

Getting it Right: 100KM, Team of 4 and 48 Hours

TalentedApps - Thu, 2013-03-14 23:50

This post is about an endeavor undertaken by our team of four to raise funds for charity and to walk 100KM within 48 hours, meeting the challenge set by Oxfam Trailwalker.  It highlights our journey and its outcome, and re-emphasizes some well-known facts.

We started with goal setting; success was the obvious goal, so success criteria were defined at the start in consultation with all stakeholders. The Key Success Indicators (KSIs) were to raise enough funds to qualify for the event (i.e. 50K INR) and to complete the 100KM walk within 48 hours with all four members. We also identified stretch goals at the initiation phase itself: to raise 150K+ INR for charity and to complete the 100KM walk within 40 hours with all four members.

Planning for the event went through a progressive elaboration process. As a team, we had to cross nine check points, registering the entry and exit of the full team at each. Being a team-building exercise, the event required the team of four to walk together, supporting each other, the fastest member walking with the slowest, and to complete the event as a team. As the activities (the check points) were already identified and sequenced, we estimated the duration of each activity to develop a time-management schedule in accordance with our team goal.

Communication among team members was planned thoroughly. Similarly, we planned how to communicate with stakeholders (family members, well-wishers, friends who donated for the cause, etc.) before and during the event. We performed a SWOT analysis of the risks and prepared a risk response strategy accordingly. We planned and conducted procurement as per the team's needs for the event.

Finally, on the D-Day, we first-timers were at the event venue with almost a month of preparation behind us. We started the 100KM walk of energy, determination and courage almost 10 minutes late from the starting point. We arrived at the finish point exactly 39 hours and 38 seconds after the event start time. It might not be an exceptional achievement from an outsider's point of view, but since our team achieved the predefined KSIs, this endeavor was a success for us.

It was a fun-filled, memorable walk where confrontation was used as a technique to overcome differences of opinion, and group decision-making was practiced for team decisions.

Four takeaways from this endeavor, which are also keys to successful project management, are:

  • Success criteria must be defined at the beginning in consultation with all stakeholders.
  • Communication breeds success. A well-planned communication strategy is vital for a project’s success.
  • Change is inevitable. You need to foresee challenges and risks, and always have a change management plan in place.
  • Working together works. Remember the best team doesn’t win as often as the team that gets along best.

Automatic Shared Memory Management problem ?

Bas Klaassen - Thu, 2013-03-14 05:30
From time to time one of our 10g databases (10.2.0.5) seems to 'hang'. Our monitoring shows a 'time out' on different checks, and when trying to connect using SQL*Plus the session hangs; no connection is possible. A few days ago, something like this happened again. Instead of bouncing the database, I decided to look for clues to find out why the database was 'hanging'. The server itself did…
Categories: APPS Blogs

OWB - Compressing Files in Parallel using Java Activity

Antonio Romero - Wed, 2013-03-13 12:36

Yesterday I posted a user function for compressing/decompressing files using parallel processes in ODI. You can pick up the same code and use it from an OWB process flow, invoking the Java function from a Java activity within the flow.

The JAR used in the example below can be downloaded here. From the process flow, OWB invokes the main method within the ZipFile class, passing the parameters to the function for the input and output directories and also the number of threads. The parameters are passed as a string in OWB, each parameter wrapped in ?, so we have a string like ?param1?param2?param3? and so on. In the example I pass the input directory d:\inputlogs as the first parameter and d:\outputzips as the second, and the number of processes used is 4 - I have escaped my backslash in order to get this to work on Windows.

The classpath has the JAR file with the compiled class in it, and the classpath value can be specified on the activity, carefully escaping the path if on Windows.

Then you can define the actual class to use;

That's it, pretty easy. The return value from the method will use the exit code from your Java method - normally 0 is success and other values indicate errors (so if you exit the Java code with a specific error code value, you can return this code into a variable in OWB or perform a complex transition condition). Any standard output/error is also captured in the OWB activity log in the UI; for example, below you can see an exception that was thrown as well as messages written to standard output/error.
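For orientation, a main method of the kind described above might look like the following - a hedged sketch only (the real code ships in the downloadable JAR; the class layout here is assumed from the description):

import java.io.*;
import java.util.concurrent.*;
import java.util.zip.*;

public class ZipFile {
    public static void main(String[] args) throws Exception {
        // OWB passes the ?-delimited parameters as ordinary arguments:
        final File inDir = new File(args[0]);      // e.g. d:\inputlogs
        final File outDir = new File(args[1]);     // e.g. d:\outputzips
        int threads = Integer.parseInt(args[2]);   // e.g. 4
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (final File f : inDir.listFiles()) {
            if (!f.isFile()) continue;
            pool.submit(new Callable<Void>() {
                public Void call() throws IOException {
                    // zip each input file into its own archive in the output directory
                    File zip = new File(outDir, f.getName() + ".zip");
                    ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(zip));
                    FileInputStream fis = new FileInputStream(f);
                    try {
                        zos.putNextEntry(new ZipEntry(f.getName()));
                        byte[] buf = new byte[8192];
                        int n;
                        while ((n = fis.read(buf)) > 0) zos.write(buf, 0, n);
                        zos.closeEntry();
                    } finally {
                        fis.close();
                        zos.close();
                    }
                    return null;
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
        System.exit(0); // this exit code is what the OWB activity sees as the return value
    }
}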

That's a quick insight into the Java activity in OWB.

Connecting to Oracle Database Even if Background Processes are Killed

Asif Momen - Wed, 2013-03-13 06:28
Yesterday, I received an email update from MOS Hot Topics Email alert regarding a knowledge article which discusses how to connect to an Oracle database whose background processes are killed.

I bet every DBA must have encountered this situation at least once. When I am in this situation, I normally use "shutdown abort" to stop the database and then proceed with normal startup. 

After receiving the email, I thought of reproducing the same. My database (TGTDB) is 11.2.0.3 running on RHEL-5.5. The goal is to kill all Oracle background process and try to connect to the database.

Of course you don't want to test this in your production databases. 

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE    11.2.0.3.0      Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production

SQL> 


Below is the list of background processes for my test database "TGTDB":



[oracle@ogg2 ~]$ ps -ef|grep TGTDB
oracle    8249     1  0 01:35 ?        00:00:00 ora_pmon_TGTDB
oracle    8251     1  0 01:35 ?        00:00:00 ora_psp0_TGTDB
oracle    8253     1  0 01:35 ?        00:00:00 ora_vktm_TGTDB
oracle    8257     1  0 01:35 ?        00:00:00 ora_gen0_TGTDB
oracle    8259     1  0 01:35 ?        00:00:00 ora_diag_TGTDB
oracle    8261     1  0 01:35 ?        00:00:00 ora_dbrm_TGTDB
oracle    8263     1  0 01:35 ?        00:00:00 ora_dia0_TGTDB
oracle    8265     1  6 01:35 ?        00:00:02 ora_mman_TGTDB
oracle    8267     1  0 01:35 ?        00:00:00 ora_dbw0_TGTDB
oracle    8269     1  1 01:35 ?        00:00:00 ora_lgwr_TGTDB
oracle    8271     1  0 01:36 ?        00:00:00 ora_ckpt_TGTDB
oracle    8273     1  0 01:36 ?        00:00:00 ora_smon_TGTDB
oracle    8275     1  0 01:36 ?        00:00:00 ora_reco_TGTDB
oracle    8277     1  1 01:36 ?        00:00:00 ora_mmon_TGTDB
oracle    8279     1  0 01:36 ?        00:00:00 ora_mmnl_TGTDB
oracle    8281     1  0 01:36 ?        00:00:00 ora_d000_TGTDB
oracle    8283     1  0 01:36 ?        00:00:00 ora_s000_TGTDB
oracle    8319     1  0 01:36 ?        00:00:00 ora_p000_TGTDB
oracle    8321     1  0 01:36 ?        00:00:00 ora_p001_TGTDB
oracle    8333     1  0 01:36 ?        00:00:00 ora_arc0_TGTDB
oracle    8344     1  1 01:36 ?        00:00:00 ora_arc1_TGTDB
oracle    8346     1  0 01:36 ?        00:00:00 ora_arc2_TGTDB
oracle    8348     1  0 01:36 ?        00:00:00 ora_arc3_TGTDB
oracle    8351     1  0 01:36 ?        00:00:00 ora_qmnc_TGTDB
oracle    8366     1  0 01:36 ?        00:00:00 ora_cjq0_TGTDB
oracle    8368     1  0 01:36 ?        00:00:00 ora_vkrm_TGTDB
oracle    8370     1  0 01:36 ?        00:00:00 ora_j000_TGTDB
oracle    8376     1  0 01:36 ?        00:00:00 ora_q000_TGTDB
oracle    8378     1  0 01:36 ?        00:00:00 ora_q001_TGTDB
oracle    8402  4494  0 01:36 pts/1    00:00:00 grep TGTDB
[oracle@ogg2 ~]$ 


Let us kill all these processes at once as shown below: 


[oracle@ogg2 ~]$ kill -9 `ps -ef|grep TGTDB | awk '{print ($2)}'`
bash: kill: (8476) - No such process
[oracle@ogg2 ~]$ 

Make sure no processes are running for our database:

[oracle@ogg2 ~]$ ps -ef|grep TGTDB
oracle    8520  4494  0 01:37 pts/1    00:00:00 grep TGTDB
[oracle@ogg2 ~]$ 


Now, try to connect to the database using SQL*Plus:


[oracle@ogg2 ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.3.0 Production on Wed Mar 13 01:38:12 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> 

Voila, I am connected. Not only do you get connected to the database, but you can also query V$*, DBA*, and other application schema views/tables. Let's give it a try: 


SQL> select name from v$database;

NAME
---------
TGTDB

SQL> select name from v$tablespace;

NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
USERS
TEMP
TEST_TS

6 rows selected.

SQL> 
SQL> select count(*) from dba_tables;

  COUNT(*)
----------
      2787

SQL> 
SQL> select count(*) from test.emp;

  COUNT(*)
----------
      3333

SQL> 


Let us try to update a record. 


SQL> 
SQL> update test.emp  set ename = 'test' where eno = 2;

1 row updated.

SQL>        

Wow, one record was updated. But when you try to commit/rollback, the instance gets terminated. And it makes sense as the background processes responsible for carrying out the change have all died.


SQL> 
SQL> commit;
commit
     *
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 8917
Session ID: 87 Serial number: 7


SQL> 

Following is the error message recorded in the database alert log:



Wed Mar 13 01:41:44 2013
USER (ospid: 8917): terminating the instance due to error 472
Instance terminated by USER, pid = 8917



The user (client) session was able to retrieve data from the database as the shared memory was still available and the client session does not need background processes for this task.

The MOS article mentioned below discusses how to identify and kill the shared memory segment(s) allocated to the "oracle" user through UNIX/Linux commands. 
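As a rough illustration (my own sketch, not the note's exact steps), on Linux that boils down to something like:

[oracle@ogg2 ~]$ ipcs -m             # list shared memory segments; the owner column shows "oracle"
[oracle@ogg2 ~]$ ipcrm -m <shmid>    # remove a leftover segment by its shmid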

References:

  1. Successfully Connect to Database Even if Background Processes are Killed [ID 166409.1]

#kscope13

Chet Justice - Tue, 2013-03-12 22:35
Back in September, I was asked, and agreed, to become the Content Chair for "The Traditional" track at Kscope 13. Like I mentioned there, I had been involved for the past couple of years and it seemed like a natural fit. Plus, I get to play with some really fun people. If you are ready to take advantage of Early Bird Registration, go here (save $300).

Over the past few weeks we've finalized (mostly) the Sunday Symposium schedule. We're currently working on finalizing Hands-on-Labs (HOL).

Beginning last year, we've had the Oracle product teams running the Sunday Symposia. This gives them an opportunity to showcase their wares and (hopefully) provide a bit of a road map for the future of said wares. This year, we have three symposia: APEX; ADF and Fusion Development; and The Database and Developer's Toolbox.

ADF and Fusion Development

- Oracle Development Tools – Where are We and What’s Next - Bill Patakay, Oracle
- How to Get Started with Oracle ADF – What Resources are Out There? - Shay Shmeltzer and Lynn Munsinger, Oracle
- The Cloud and What it Means to Oracle ADF and Java Developers - Dana Singleterry, Oracle
- Going Mobile – What to Consider Before Starting a Mobile Project - Joe Huang, Oracle
- Understanding Fusion Middleware and ADF Integration - Frederic Desbiens, Lynn Munsinger, and Shay Shmeltzer, Oracle
- Open Q&A with the ADF Product Management team

I love that they are opening up the floor to questions from their users. I wish more product teams would do that.

Application Express

- Oracle Database Tools - Mike Hichwa, Oracle
- Technology for the Database Cloud - Rick Greenwald, Oracle
- Developing Great User Interfaces with Application Express - Shakeeb Rahman, Oracle
- How Do We Build the APEX Builder? - Vlad Uvarov, Oracle
- How to Fully Utilize RESTful Web Services with Application Express - John Snyders, Oracle
- Update from APEX Development - Joel Kallman, Oracle

(If you see Joel Kallman out and about, make sure you mispronounce APEX.) This is a fantastic group of people (minus Joel, of course). Not mentioned above is the affable David Peake, who helps put all this together. The community surrounding APEX is second to none.

Finally, The Database and Developer's Toolbox. I'm partial to this one because I've been involved in the database track for the past couple of years. Like last year, this one is being put together by Kris Rice of Oracle. There are no session or abstract details for this one, as it will be based mainly on the upcoming 12c release of the database. However, we do have the list of speakers lined up. If you could only come for one day of this conference, Sunday would be the day, and this symposium would be the one you would attend.

This symposium will start off with Mike Hichwa (above) and then transition to the aforementioned (too many big words tonight) Mr. Rice. He'll be accompanied by Jeff Smith of SQL Developer fame, Maria Colgan from the Optimizer team, and Tom Kyte.

How'd we do? I think pretty darn good.

Don't forget to sign up. Early Bird Registration ends on March 25, 2013. Save $300.
Categories: BI & Warehousing

Starbucks 1TB cube in production

Keith Laker - Tue, 2013-03-12 14:41
Check out the customer snapshot Oracle has published, which describes the success Starbucks Coffee has achieved by moving their data warehouse to the Exadata platform, leveraging the Oracle Database OLAP Option and Oracle BIEE at the front end.    10,000 users in HQ and across thousands of store locations now have timely, accurate and calculation-rich information at their fingertips.


Starbucks Coffee Company Delivers Daily, Actionable Information to Store Managers, Improves Business Insight with High Performance Data Warehouse
( http://www.oracle.com/us/corporate/customers/customersearch/starbucks-coffee-co-1-exadata-ss-1907993.html )

By delivering extreme performance combined with the architectural simplicity and sophisticated multidimensional calculation power of in-database analytics, Starbucks' use of OLAP has enabled some outstanding results. Together with the power of other Oracle Database and Exadata features such as Partitioning, Hybrid Columnar Compression, Storage Indexes and Flash Memory, Starbucks is able to handle the constant growth in data volumes and end-user demands with ease.

A great example of the power of the "Disk To Dashboard" capability of Oracle Business Analytics.
Categories: BI & Warehousing

OER for Fusion Application

Oracle e-Business Suite - Mon, 2013-03-11 12:47

The replacement for the ETRM/iRep tools in Fusion Applications is Oracle Enterprise Repository (OER). You can access it using the following link.
https://fusionappsoer.oracle.com/

What Is OER?

Very simply, this is a standalone catalog of technical information about Oracle's Application products.  For E-Business Suite users it equates to the iRepository tool (http://irep.oracle.com/index.html); for PeopleSoft, it's similar to the PeopleSoft Interactive Services Repository.

It contains a wealth of information, with the primary purpose of facilitating the creation of Application-to-Application integrations, and of creating extensions and customizations. With this detailed technical knowledge of the inner workings and APIs available for Oracle Applications, a better level of code reuse and overall accuracy can be achieved.

Accessing OER

Access is available either from Oracle's globally shared public OER instance, or as part of your local Fusion Applications instance deployment. Detail on creating a local OER installation is found in the Oracle Fusion Middleware Installation Guide for Oracle Enterprise Repository (E15745-07). The URL for OER will be either Oracle's public instance (https://fusionappsoer.oracle.com/) or the address of your local installation.

An OER login may be required, although Oracle’s public instance also supports guest access at this time.

OER catalogs technical components by various attributes, with the key ones being Name, Type, and Logical Business Area (LBA).  LBA is the lower level of the Fusion Applications Taxonomy and is used to tag each technical object with the feature and product that it is owned by and associated with.

The general keyword search actually uses indexes of all the fields/attributes associated with an entry.

Whilst the basic Asset Search should suffice in most cases, and is a simpler UI, the Browse feature (IE required) provides many powerful features and graphical views, including an object hierarchy and the Navigator to display objects related to each other.


How To Get The Most From Oracle Enterprise Repository For Troubleshooting Fusion Applications [ID 1399910.1]


Categories: APPS Blogs

7 things that can go wrong with Ruby 1.9 string encodings

Raimonds Simanovskis - Sun, 2013-03-10 17:00

Good news, I am back in blogging :) In recent years I have spent my time primarily on eazyBI business intelligence application development, where I use JRuby, Ruby on Rails, mondrian-olap and many other technologies and libraries, and I have gathered new experience that I wanted to share with others.

Recently I did the eazyBI migration from JRuby 1.6.8 to the latest JRuby 1.7.3 version, and finally migrated from Ruby 1.8 mode to Ruby 1.9 mode as well. The initial migration was not so difficult and was done in one day (thanks to unit tests which caught the majority of differences between Ruby 1.8 and 1.9 syntax and behavior).

But then, when I thought that everything was working fine, I got quite a few issues related to Ruby 1.9 string encodings which unfortunately were identified neither by the test suite nor by my initial manual tests. Therefore I wanted to share these issues, which might help you avoid them in your Ruby 1.9 applications.

If you are new to Ruby 1.9 string encodings then first read, for example, the tutorials about Ruby 1.9 String and Ruby 1.9 Three Default Encodings; Ruby 1.9 Encodings: A Primer and the Solution for Rails is useful as well.

1. Encoding header in source files

I will start with the easy one - if you use any Unicode characters in your Ruby source files then you need to add

# encoding: utf-8

magic comment line at the beginning of your source file. This was easy, as it was caught by unit tests :)

2. Nokogiri XML generation

The next issues were with XML generation using the Nokogiri gem when the XML contains Unicode characters. For example,

require "nokogiri"
doc = Nokogiri::XML::Builder.new do |xml|
  xml.dummy :name => "āčē"
end
puts doc.to_xml

will give the following result when using MRI 1.9:

<?xml version="1.0"?>
<dummy name="&#x101;&#x10D;&#x113;"/>

which might not be what you expect if you would like to use UTF-8 encoding also for Unicode characters in generated XML file. If you execute the same ruby code in JRuby 1.7.3 in default Ruby 1.9 mode then you get:

<?xml version="1.0"?>
<dummy name="āčē"/>

which seems OK. But actually it is not OK if you look at the generated string's encoding:

doc.to_xml.encoding # => #<Encoding:US-ASCII>
doc.to_xml.inspect  # => "<?xml version=\"1.0\"?>\n<dummy name=\"\xC4\x81\xC4\x8D\xC4\x93\"/>\n"

In the case of JRuby you see that the doc.to_xml encoding is US-ASCII (which is a 7-bit encoding) but the actual content uses UTF-8 8-bit encoded characters. As a result you might get ArgumentError: invalid byte sequence in US-ASCII exceptions later in your code.

Therefore it is better to tell Nokogiri explicitly that you would like to use UTF-8 encoding in generated XML:

doc = Nokogiri::XML::Builder.new(:encoding => "UTF-8") do |xml|
  xml.dummy :name => "āčē"
end
doc.to_xml.encoding # => #<Encoding:UTF-8>
puts doc.to_xml
<?xml version="1.0" encoding="UTF-8"?>
<dummy name="āčē"/>
3. CSV parsing

If you do CSV file parsing in your application then the first thing you have to do is replace the FasterCSV gem (that you probably used in your Ruby 1.8 application) with the standard Ruby 1.9 CSV library.

If you process user-uploaded CSV files then a typical problem is that even if you ask for files in UTF-8 encoding, quite often you will get files in different encodings (as Excel is quite bad at producing UTF-8 encoded CSV files).

If you used the FasterCSV library with non-UTF-8 encoded strings then you got an ugly result, but nothing blew up:

FasterCSV.parse "\xE2"
# => [["\342"]]

If you do the same in Ruby 1.9 with the CSV library then you will get an ArgumentError exception.

CSV.parse "\xE2"
# => ArgumentError: invalid byte sequence in UTF-8

It means that now you need to rescue and handle ArgumentError exceptions in all places where you parse user-uploaded CSV files, to be able to show user-friendly error messages.

The problem with the standard CSV library is that it does not handle ArgumentError exceptions and does not wrap them in a MalformedCSVError exception with information about the line on which the error happened (as is done with other CSV format errors), which makes debugging very hard. Therefore I also "monkey patched" the CSV#shift method to add ArgumentError exception handling, as sketched below.
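The patch itself is small; here is a minimal sketch of the idea (my version, assuming Ruby 1.9's standard CSV):

require "csv"

class CSV
  alias_method :shift_without_rescue, :shift

  # Wrap ArgumentError (invalid byte sequence) in MalformedCSVError so the
  # approximate failing line number is reported like other CSV format errors.
  def shift
    shift_without_rescue
  rescue ArgumentError => e
    raise MalformedCSVError, "#{e.message} on line #{lineno + 1}."
  end
end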

4. YAML serialized columns

ActiveRecord has a standard way to serialize more complex data types (like Array or Hash) in a database text column. You use the serialize method to declare serializable attributes in your ActiveRecord model class definition. By default the YAML format (using the YAML.dump method for serialization) is used to serialize the Ruby object to text that is stored in the database.
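For example (a minimal, hypothetical model sketch):

class Report < ActiveRecord::Base
  # options maps to a text column; the Hash is dumped to YAML on save
  # and loaded back with YAML.load when the record is read
  serialize :options, Hash
end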

But you can get big problems if your data contains strings with Unicode characters, as the YAML implementation changed significantly between Ruby 1.8 and 1.9:

  • Ruby 1.8 used the so-called Syck library
  • JRuby in 1.8 mode used a Java-based implementation that tried to act like Syck
  • Ruby 1.9 and JRuby in 1.9 mode use the new Psych library

Let's see what happens with YAML serialization of a simple Hash with a string value which contains Unicode characters.

On MRI 1.8:

YAML.dump({:name => "ace āčē"})
# => "--- \n:name: !binary |\n  YWNlIMSBxI3Ekw==\n\n"

On JRuby 1.6.8 in Ruby 1.8 mode:

YAML.dump({:name => "ace āčē"})
# => "--- \n:name: \"ace \\xC4\\x81\\xC4\\x8D\\xC4\\x93\"\n"

On MRI 1.9 or JRuby 1.7.3 in Ruby 1.9 mode:

YAML.dump({:name => "ace āčē"})
# => "---\n:name: ace āčē\n"

So as we see, all results are different. But now let's see what happens after we have migrated our Rails application from Ruby 1.8 to Ruby 1.9. All our data in the database was serialized using the old YAML implementations, but when loaded in our application it is now deserialized using the new Ruby 1.9 YAML implementation.

When using MRI 1.9:

YAML.load("--- \n:name: !binary |\n  YWNlIMSBxI3Ekw==\n\n")
# => {:name=>"ace \xC4\x81\xC4\x8D\xC4\x93"}
YAML.load("--- \n:name: !binary |\n  YWNlIMSBxI3Ekw==\n\n")[:name].encoding
# => #<Encoding:ASCII-8BIT>

So the string that we get back from the database is no longer in UTF-8 encoding but in ASCII-8BIT encoding, and when we try to concatenate it with UTF-8 encoded strings we get Encoding::CompatibilityError: incompatible character encodings: ASCII-8BIT and UTF-8 exceptions.

When using JRuby 1.7.3 in Ruby 1.9 mode, the result will again be different:

YAML.load("--- \n:name: \"ace \\xC4\\x81\\xC4\\x8D\\xC4\\x93\"\n")
# => {:name=>"ace Ä\u0081Ä\u008DÄ\u0093"}
YAML.load("--- \n:name: \"ace \\xC4\\x81\\xC4\\x8D\\xC4\\x93\"\n")[:name].encoding
# => #<Encoding:UTF-8>

So now the result string has UTF-8 encoding but the actual string is damaged. It means that we will not even get exceptions when concatenating the result with other UTF-8 strings; we will just notice some strange garbage instead of the Unicode characters.

The problem is that there is no good way to convert your database data from the old YAML serialization to the new one. In MRI 1.9 it is at least possible to switch YAML back to the old Syck implementation, but in JRuby 1.7 in Ruby 1.9 mode it is not possible to switch to the old Syck implementation.

The current workaround is a modified serialization class that I use in all model class definitions (this works in Rails 3.2, and maybe in earlier Rails 3.x versions as well):

serialize :some_column, YAMLColumn.new

The YAMLColumn implementation is a copy of the original ActiveRecord::Coders::YAMLColumn implementation. I modified the load method to the following:

def load(yaml)
  return object_class.new if object_class != Object && yaml.nil?
  return yaml unless yaml.is_a?(String) && yaml =~ /^---/
  begin
    # if yaml string contains old Syck-style encoded UTF-8 characters
    # then replace them with corresponding UTF-8 characters
    # FIXME: is there better alternative to eval?
    if yaml =~ /\\x[0-9A-F]{2}/
      yaml = yaml.gsub(/(\\x[0-9A-F]{2})+/){|m| eval "\"#{m}\""}.force_encoding("UTF-8")
    end
    obj = YAML.load(yaml)

    unless obj.is_a?(object_class) || obj.nil?
      raise SerializationTypeMismatch,
        "Attribute was supposed to be a #{object_class}, but was a #{obj.class}"
    end
    obj ||= object_class.new if object_class != Object

    obj
  rescue *RESCUE_ERRORS
    yaml
  end
end

Currently this patched version works on JRuby, where non-ASCII characters are simply replaced by \xNN style fragments (a byte with hex code NN). When loading existing data from the database, we check whether it contains any such \xNN fragment and, if so, replace these fragments with the corresponding UTF-8 encoded characters. If anyone has a better suggestion for an implementation without using eval, then please let me know in the comments :)

If you need to create something similar for MRI, then you would probably need to check whether the database text contains a !binary | fragment and, if so, somehow transform it to the corresponding UTF-8 string. Does anyone have a working example of this?

5. Sending binary data with default UTF-8 encoding

I am using the spreadsheet gem to generate dynamic Excel export files. The following code was used to get the generated spreadsheet as a String:

book = Spreadsheet::Workbook.new
# ... generate spreadsheet ...
buffer = StringIO.new
book.write buffer
buffer.seek(0)
buffer.read

And then this string was sent back to the browser using the controller's send_data method.

The problem was that in Ruby 1.9 mode, by default, StringIO will generate strings with UTF-8 encoding. But the Excel format is a binary format, and as a result send_data failed with exceptions that the UTF-8 encoded string contains non-UTF-8 byte sequences.

The fix was to set StringIO buffer encoding to ASCII-8BIT (or you can use alias BINARY):

buffer = StringIO.new
buffer.set_encoding('ASCII-8BIT')

So you need to remember that in all places where you handle binary data you cannot use strings with the default UTF-8 encoding, but need to specify the ASCII-8BIT encoding.

6. JRuby Java file.encoding property

The last two issues are JRuby and Java specific. Java has the system property file.encoding, which is not related just to file encoding but determines the default character set and string encoding in many places.

If you do not specify file.encoding explicitly then the Java VM on startup will try to determine its default value based on the host operating system's locale. On Linux it might be set to UTF-8; on Mac OS X by default it will be MacRoman; on Windows it will depend on the Windows default locale setting (which will not be UTF-8). Therefore it is always better to set the file.encoding property explicitly for Java applications (e.g. using the -Dfile.encoding=UTF-8 command line flag).

file.encoding determines which default character set the java.nio.charset.Charset.defaultCharset() method call will return. And even if you change the file.encoding property at runtime, it will not change the java.nio.charset.Charset.defaultCharset() result, which is cached during startup.

JRuby uses java.nio.charset.Charset.defaultCharset() in very many places to get the default system encoding, and uses it when constructing Ruby strings. If java.nio.charset.Charset.defaultCharset() does not return the UTF-8 character set then it might result in problems when using Ruby strings with UTF-8 encoding. Therefore in the JRuby startup scripts (jruby, jirb and others) the file.encoding property is always set to UTF-8.

So if you start your JRuby application in the standard way using the jruby script then you should have file.encoding set to UTF-8. You can check it in your application using ENV_JAVA['file.encoding'].
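For example, in a JRuby console (a quick check, assuming require 'java'):

require 'java'
ENV_JAVA['file.encoding']                       # => "UTF-8" when started via the jruby script
java.nio.charset.Charset.default_charset.name   # => "UTF-8"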

But if you start your JRuby application in a non-standard way (e.g. you have a JRuby based plugin for some other Java application) then you might not have file.encoding set to UTF-8, and then you need to worry about it :)

7. JRuby Java string to Ruby string conversion

I got a file.encoding related issue in the eazyBI reports and charts plugin for JIRA. In this case the eazyBI plugin is an OSGi based plugin for the JIRA issue tracking system, and JRuby is running as a scripting container inside the OSGi bundle.

The JIRA startup scripts do not specify a file.encoding default value, and as a result it is typically set to the operating system default value. For example, on my Windows test environment it is set to the Windows-1252 character set.

If you call Java methods on Java objects from JRuby then it will automatically convert java.lang.String objects to Ruby String objects, but the Ruby strings in this case will use an encoding based on java.nio.charset.Charset.defaultCharset(). So even though a Java string (which internally uses the UTF-16 character set for all strings) can contain any Unicode character, it will be returned to Ruby not as a string with UTF-8 encoding but, in my case, with Windows-1252 encoding. As a result all Unicode characters which are not in the Windows-1252 character set will be lost.

And this is very bad, because everywhere else JIRA does not use java.nio.charset.Charset.defaultCharset() and can handle and store all Unicode characters even when file.encoding is not set to UTF-8.

Therefore I finally managed to create a workaround which forces all Java strings to be converted to Ruby strings using UTF-8 encoding.

I created a custom Java string converter based on the standard one in the org.jruby.javasupport.JavaUtil class:

package com.eazybi.jira.plugins;

import org.jruby.javasupport.JavaUtil;
import org.jruby.Ruby;
import org.jruby.RubyString;
import org.jruby.runtime.builtin.IRubyObject;

public class RailsPluginJavaUtil {
    public static final JavaUtil.JavaConverter JAVA_STRING_CONVERTER = new JavaUtil.JavaConverter(String.class) {
        public IRubyObject convert(Ruby runtime, Object object) {
            if (object == null) return runtime.getNil();
            // PATCH: always convert Java string to Ruby string with UTF-8 encoding
            // return RubyString.newString(runtime, (String)object);
            return RubyString.newUnicodeString(runtime, (String)object);
        }
        public IRubyObject get(Ruby runtime, Object array, int i) {
            return convert(runtime, ((String[]) array)[i]);
        }
        public void set(Ruby runtime, Object array, int i, IRubyObject value) {
            ((String[])array)[i] = (String)value.toJava(String.class);
        }
    };
}

Then in my plugin initialization Ruby code I dynamically replaced the standard Java string converter with my customized converter:

java_converters_field = org.jruby.javasupport.JavaUtil.java_class.declared_field("JAVA_CONVERTERS")
java_converters_field.accessible = true
java_converters = java_converters_field.static_value.to_java
java_converters.put(java.lang.String.java_class, com.eazybi.jira.plugins.RailsPluginJavaUtil::JAVA_STRING_CONVERTER)

And as a result, all Java strings returned by Java methods are now converted to Ruby strings using UTF-8 encoding and not the encoding from the file.encoding Java property.

Final thoughts

My main conclusions from solving all these string encoding issues are the following:

  • Use UTF-8 encoding as much as possible. Handling conversions between different encodings will be much harder than you expect.
  • Use example strings with Unicode characters in your tests. I didn't identify all these issues initially when running tests after the migration because not all tests were using example strings with Unicode characters. So next time, instead of using a "dummy" string in your test, use "dummy āčē" everywhere :)

And please let me know (in comments) if you have better or alternative solutions for the issues that I described here.

Categories: Development

Approvals in Fusion Procurement

Oracle e-Business Suite - Fri, 2013-03-08 02:08
Key features exist in Fusion Procurement approvals

There are many useful features that can be used with Fusion Procurement. Here are just some of the more significant examples:

  • Both Serial and Parallel Approval for all document types.
  • Various ways to configure responses, including features like first responder wins to help avoid lengthy processing times.
  • Notification by email as well as several rich dashboard components (e.g. worklist) to show items currently awaiting action.
  • Expiration, reminder and escalation features on pending actions.
  • Delegation and vacation rules to forward actions to dedicated proxies as needed
  • Rich notification content including clickable links to go directly to document details.
  • Wide range of attribute values to use in the creation of custom approval processing rules.
What hierarchies can be used to generate approval lists?

Fusion Applications provides standard support for the following methods to derive the list of approvers

  • Supervisor Hierarchy. This uses the HCM employee definition, leveraging the specific assignment of a supervisor person to each employee. AMX engine calls HCM to request the users that are in the hierarchy and passes in a starting position (normally the person submitting the purchasing document) and the maximum number of levels in the hierarchy to climb up.
  • Position Hierarchy. This uses the Job definitions in HCM and selects all employees tied to the position that gets included in the selected hierarchy. Again this accepts the starting position (person) and the top job level to climb to before completing.
  • Approval Group is simple a group of predefined people. This can be a static list or may be custom to generate at run-time based on approval document attribute data.
  • Job Level. This works very similarly to the Position Hierarchy whereby it uses start and end Job definitions to traverse the hierarchy and select approvers.
What is Approvals Management (AMX)

Approvals Management (AMX) is an independent product that comes from the Fusion Middleware SOA Server, and provides general approval services to any product in Fusion Applications. AMX can be considered the meeting point between the powerful and flexible Oracle Business Rules capability and the advanced process control capabilities of Oracle Human Workflow.
The BPEL process that controls approvals has points at which it invokes the AMX services to initiate the approval process. The BPEL process controls the procurement process around approvals, however AMX and Human Workflow are responsible for the entire approval process.

AMX features include:

  • The execution of rules that govern the selection and generation of approver lists.
  • The sending of notifications to the participants on the generated list.
  • The processing of responses from those approvers, and selection of appropriate next approval action.
  • The return of the completion status back to the Procurement BPEL process for actioning.

Still eRecords and eSignatures are not supported by AMX in version 1.0.

For Complete Details Please refer to Document Approval in Fusion Procurement Products [ID 1319614.1]


Categories: APPS Blogs

OWB Repository Install on RAC using OMBPlus

Antonio Romero - Thu, 2013-03-07 17:33

There are a few documents on the Oracle Support site (http://support.oracle.com) covering this area: one on checking that OWB is installed correctly on RAC and Exadata (Doc ID 455999.1), and How to Install a Warehouse Builder Repository on a RAC (Doc ID 459961.1).

 This blog will just show you how to install the OWB repository on RAC using OMBPlus.

The steps are:

  • Database preparation
  • Repository installation on the first Node
  • Copy of the rtrepos.properties File on all Nodes
  • Registration of all the other Nodes
  • Check the installation
Step 1: Database preparation

UNIQUE Service Name
Make sure that EACH Node in the RAC has a UNIQUE service_name. If this is not the case, then add a unique service_name with the following command:

srvctl add service -d dbname -s instn -r instn

The resulting service name is instn.clusterdomainname. For example, if the instance name is racsrvc1, then the service name could be racsrvc1.us.oracle.com.

"srvctl" can be used to manage the RAC services:
srvctl [status|stop|start] service -d <db> -s <service-name>

Details are described in the OWB Installation Guide, in the paragraph "Ensuring the Availability of Service Names for Oracle RAC Nodes".

LISTENER Configuration
Make sure that EACH Node has the LISTENER configured correctly. The listener on each Node should be able to manage connections to the unique database service of each Node.

Step 2: Repository installation on the first Node

We assume that the RAC has two nodes, NODE1 and NODE2, that the database instance is set up, and that the OWB software has been installed on all nodes of the RAC.

Start the OMBPlus shell on the primary node (say Node 1) from <OWB_HOME>/owb/bin/unix/OMBPlus.sh

Execute the following command:


OMB+> OMBSEED DATA_TABLESPACE 'USERS' INDEX_TABLESPACE 'INDX' TEMPORARY_TABLESPACE 'TEMP' SNAPSHOT_TABLESPACE 'USERS' USING CREDENTIAL OWBSYS/PASSWORD@hostname:1521:servicename

OWB repository seeding completed.

OMB+> exit

 Step 3: Copy of the rtrepos.properties File on all Nodes

During the repository seeding, a file rtrepos.properties is created/updated on Node 1 in the <OWB_HOME>\owb\bin\admin directory. This file should be copied to the same location on all RAC nodes - in this case, to <OWB_HOME>\owb\bin\admin on Node 2.

Step 4: Registration of all the other Nodes

After the repository installation, all RAC nodes should be registered. This is to enable the OWB Runtime Service to fail over to one of the other nodes when required (e.g. because of a node crash). The registration process consists of an update to the tables OWBRTPS and WB_RT_SERVICE_NODES. These tables are updated with node-specific details like the Oracle home where the OWB software has been installed on the node, and the host, port and service connection details for the instance running on the node.

OMB+>OMBINSTALL OWB_RAC USING CREDENTIAL OWBSYS/OWBSYS@localhost:1521:service_name

RAC instance has been registered.

Step 5: Check the installation

Check that the OWB home values in the following tables are correct:

SELECT * FROM owbsys.owbrtps;

SELECT * FROM owbsys.wb_rt_service_nodes;

Connect as OWBSYS to the unique net service name on each node and execute the script located in the <OWB_HOME>\owb\rtp\sql directory:

SQL>@show_service.sql
Available
PL/SQL procedure successfully completed. 

If the service is not available, start it using the following script:

SQL>@start_service.sql
Available

Your installation of the OWB repository is now complete.

You can also use the following OMBPlus commands to create an OWB workspace and workspace owner.

In SQL*Plus as sysdba


create user WORKSPACE_OWNER identified by PASSWORD;

grant resource, connect to WORKSPACE_OWNER;

grant OWB_USER to WORKSPACE_OWNER;

grant create session to WORKSPACE_OWNER;

In OMBPlus

OMB+> OMBINSTALL WORKSPACE 'WORKSPACE_WH' USING CREDENTIAL WORKSPACE_OWNER/PASSWORD@hostname:1521:service_name

Workspace has been created.

OMB+> exit



OWB - Securing your data with Transparent Data Encryption

Antonio Romero - Thu, 2013-03-07 12:40

Oracle provides secure and convenient functionality for securing data in your data warehouse: tables can be designed in OWB utilizing the Transparent Data Encryption (TDE) capability. This is done by configuring specific columns in a table to use encryption.

When users insert data, the Oracle database transparently encrypts it and stores it in the column.  Similarly, when users select the column, the database automatically decrypts it.  Since all this is done transparently, without any change to the application code, the feature has an appropriate name: Transparent Data Encryption. 

Encryption requires users to apply an encryption algorithm and an encryption key to the clear-text input data. To successfully decrypt an encrypted value, users must know the values of the same algorithm and key. In the Oracle database, users can specify an entire tablespace to be encrypted, or selected columns of a table. From OWB we support column encryption, which can be applied to tables and external tables.

We secure the capture of the password for encryption in an OWB location, just like other credentials. This is then used later in the configuration of the table.

We can configure a table and, for its columns, define the encryption details, including the encryption algorithm, the integrity algorithm and the password.

Then when the table is deployed from OWB, the TDE information is incorporated into the DDL for the table.
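For illustration, the generated DDL might look something like this (a hedged sketch with assumed table and column names, following Oracle's CREATE TABLE ... ENCRYPT syntax):

-- AES 256-bit encryption with SHA-1 integrity on a single column
CREATE TABLE customers (
  id  NUMBER PRIMARY KEY,
  ssn VARCHAR2(11) ENCRYPT USING 'AES256' 'SHA-1'
);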

When data is written to such a column it is encrypted on disk. Read more about this area in the Oracle Advanced Security white paper on Transparent Data Encryption best practices here.

Oracle Linux 6.4 Announced

Asif Momen - Thu, 2013-03-07 10:15
The Oracle Linux team has announced the availability of Oracle Linux (OL) 6.4. You can download OL 6.4 from Oracle's E-Delivery website (the link is below):

https://edelivery.oracle.com/EPD/Search/handle_go

To learn more about OL 6.4, click on the link below:

http://docs.oracle.com/cd/E37670_01/E39522/html/

Happy downloading!!! 

How to Find Software Versions and Patches in an Oracle Business Intelligence Applications Environment

Oracle e-Business Suite - Thu, 2013-03-07 01:20

This MOS note will help consultants find the exact version and patch level of the installed components. This is very useful when you are logging an Oracle Service Request.  

 

OBIA: How to Find Software Versions and Patches in an Oracle Business Intelligence Applications Environment? [ID 1519745.1]


Categories: APPS Blogs

Easy application development with Couchbase, Angular and Node

Tugdual Grall - Wed, 2013-03-06 04:35
Note: This article was written in March 2013; Couchbase and its drivers have changed a lot since then, I am not working with/for Couchbase anymore and have no time to update the code. A friend of mine wants to build a simple system to capture ideas and votes. Even if you can find many online services to do that, I think it is a good opportunity to show how easy it is to develop new…

Little Changed

David Aldridge - Tue, 2013-03-05 04:05
Incredible and depressing to see people still getting Oracle internals as wrong as this.
Categories: BI & Warehousing

Security Alert CVE-2013-1493 Released

Oracle Security Team - Mon, 2013-03-04 14:46

Hello, this is Eric Maurice.

Today Oracle released Security Alert CVE-2013-1493 to address two vulnerabilities affecting Java running in web browsers (CVE-2013-1493 and CVE-2013-0809).  One of these vulnerabilities (CVE-2013-1493) has recently been reported as being actively exploited by attackers to maliciously install the McRat executable onto unsuspecting users’ machines.  Both vulnerabilities affect the 2D component of Java SE.  These vulnerabilities are not applicable to Java running on servers, standalone Java desktop applications or embedded Java applications.  They also do not affect Oracle server-based software.  These vulnerabilities have each received a CVSS Base Score of 10.0.

Though reports of active exploitation of vulnerability CVE-2013-1493 were recently received, this bug was originally reported to Oracle on February 1st 2013, unfortunately too late to be included in the February 19th release of the Critical Patch Update for Java SE. 

The company intended to include a fix for CVE-2013-1493 in the April 16, 2013 Critical Patch Update for Java SE (note that Oracle recently announced its intent to have an additional Java SE security release on this date in addition to those previously scheduled in June and October of 2013).  However, in light of the reports of active exploitation of CVE-2013-1493, and in order to help maintain the security posture of all Java SE users, Oracle decided to release a fix for this vulnerability and another closely related bug as soon as possible through this Security Alert.

As always, Oracle recommends that this Security Alert be applied as soon as possible.  Desktop users can install this new version from java.com or through the Java autoupdate. Desktop users should also be aware that Oracle has recently switched Java security settings to “high” by default.  This high security setting results in requiring users to expressly authorize the execution of applets which are either unsigned or are self-signed.  As a result, unsuspecting users visiting malicious web sites will be notified before an applet is run and will gain the ability to deny the execution of the potentially malicious applet.  In order to protect themselves, desktop users should only allow the execution of applets when they expect such applets and trust their origin.

As stated in previous blogs, Oracle is committed to accelerating the release of security fixes for Java SE, particularly to help address the security-worthiness of Java running in browsers.  The quick release of this Security Alert, the higher number of Java SE fixes included in recent Critical Patch Updates, and the announcement of an additional security release date for Java SE (the April 16th Critical Patch Update for Java SE) are examples of this commitment.


For more information:

The Advisory for Security Alert CVE-2013-1493 can be found at http://www.oracle.com/technetwork/topics/security/alert-cve-2013-1493-1915081.html

More information about Oracle Software Security Assurance can be found at http://www.oracle.com/us/support/assurance/index.html. 

Exporting Multiple Tables on a Common Filter

Asif Momen - Mon, 2013-03-04 05:59

To be frank, I consider myself a novice when it comes to advanced export/import requirements, because I don’t deal with these utilities on a day-to-day basis.

A simple requirement came across my desk to export selected tables from a schema based on a common filter.

Requirement:
Say, you have 5 tables T1, T2, T3, T4, and T5. All have “ID” as the primary key column and you have to export data from these tables only if it is found in COMMON_TABLE. The COMMON_TABLE stores “ID” to be exported.

Solution:
The first place I look for a solution is the Oracle documentation. I knew we could filter a table using the QUERY parameter of Data Pump Export, but I did not know how to apply it to multiple tables.

The syntax of the QUERY parameter is:

QUERY = [schema.][table_name:] query_clause

If you omit [schema.][table_name:] then the query is applied to all the tables in the export job.
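For example, to scope a filter to the single table T1 you would prefix the clause with the table name (an illustrative fragment in parameter-file form, to sidestep shell escaping):

QUERY=t1:"WHERE id > 100"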

So, here’s my export command:

expdp test/test DIRECTORY=data_pump_dir TABLES=t1,t2,t3,t4,t5 DUMPFILE=test.dmp QUERY=\"WHERE id IN \(SELECT common_table.id FROM common_table\)\"

You may click here to read more about the QUERY parameter of Data Pump Export.

Thanks for reading!!!

APEX Training 15.04. - 17.04.2013

Dietmar Aust - Mon, 2013-03-04 00:56
As in each of the past six years, we (Denes Kubicek and I) are holding our

Oracle APEX: Knowhow aus der Praxis

training in Bensheim an der Bergstrasse. We will revise our existing topics and add a few new ones. Among others, I will be adding the following new topics to the training programme:

- jQuery (examples and exercises)
- APEX Collections
- Building complex forms
- APEX and multilingual applications

This time we have invited a very special guest to the training - Christian Rokitta from the Netherlands. He is an expert in layout design and mobile applications. The participants will jointly choose one of these topics, and Christian will present it.

Our highlight is definitely the evening Q & A sessions, in which participants get the opportunity to present their own projects and to discuss their concrete problems with us.

You can find the registration for the training here.


Best regards,
~Dietmar.

Make sure you know the second shot

TalentedApps - Sat, 2013-03-02 02:38

TONY MENDEZ: Can you teach somebody to be a director in a day?

JOHN CHAMBERS: You can teach a rhesus monkey to be a director in a day.

– from the movie, “Argo”

Ben Affleck received a Golden Globe for Best Director for “Argo” and was interviewed on NPR’s Fresh Air. It’s a great interview: Affleck comes across as very intelligent and articulate, and relates some interesting and funny stories about making Argo and about his career. One of the stories was about his experience as a first-time director on “Gone Baby Gone.” Interviewer Terry Gross asked him what it was like to direct for the first time. Affleck talked about some advice he got from Kevin Costner, who had received an Oscar for directing his first film, “Dances with Wolves”:

AFFLECK: Yeah. I talked to – the one advantage I had is I could talk to other people who have done it. And I remember talking to Kevin Costner and saying, like, what do I do? I’m going to direct a movie. Kevin said: Make sure that on your first day, you know what your second shot is. And I was, like, what you mean? He said, everyone goes there and knows what their first shot is, and they do the first shot, and all of a sudden think: What am I going to do now? He’s, like, make sure you know the second shot, and that’ll get you rolling into, you know, we’re going to do this. We finished that. OK, guys, let’s go over here. And now the crew trusts you.

A director is very often seen as a leader, in that people and resources must be guided towards the completion of a project that achieves a vision. Costner’s advice to “make sure you know the second shot” is good advice that is delivered well. As a leader, you need to have some kind of plan for how you are going to achieve a goal, and knowing what you think you will do after the first step is the best sign that you have at least some sort of plan. The people you lead also need to see that you have one. The way Costner delivered that advice was also good: he made it concrete and put it in the language of what Affleck was trying to lead in - making movies.

So often, you read a book about Leadership and the concepts, like “have a plan”, are stuck in abstraction. You think, “yeah, a plan makes sense.” But then you don’t know how that translates into specific behaviors for your particular situation and you ask, “What does that mean I’m supposed to do?”

If you are a first-time leader or even an experienced leader in a circumstance that you are not familiar with, find that person who has experience and get that concrete advice that describes what behaviors you need to do to demonstrate leadership in this particular situation. If you are offering advice, make sure you give it in concrete terms that the recipient can act upon.

Photo by jinterwas

