Feed aggregator

Installing Update Images in Amazon Web Services

Javier Delgado - Mon, 2016-04-18 02:22
The latest PeopleSoft Update Manager (PUM) images have been delivered in two formats: the traditional VirtualBox image and a newly introduced format: NativeOS.



NativeOS takes advantage of PeopleTools 8.55 Deployment Packages (DPK), which is the cornerstone for the PeopleSoft Cloud Architecture. This new cloud architecture facilitates the deployment of PeopleSoft applications in the cloud, not only covering Oracle Public Cloud but also other providers such as Amazon Web Services (AWS), Google Cloud and Microsoft Azure.

Creating the AWS Instance

At BNB we have been using Amazon Web Services for a while, so it was our natural choice for installing the PeopleSoft HCM Update Image #17. We have done so on a Windows 2012 server using the m4.large instance type, which allocates 2 vCPUs and 8 GB of RAM. In terms of disk, we have allocated 200 GB in order to have the space needed for the image download and installation.

Once the instance was created, we downloaded the NativeOS update image from My Oracle Support. One of the advantages of NativeOS deployments is that the download is smaller than the traditional VirtualBox one. Still, the size is considerable, but the network throughput of AWS instances is quite good.

Before proceeding with the installation, you need to edit the c:\windows\system32\drivers\etc\hosts file to include the internal server name:

127.0.0.1 <server name>.<zone>.compute.internal

The full server name can normally be found in the top right corner of the desktop.
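As a quick sketch, the entry can also be appended from a shell such as Git Bash on the server. The server name below is a made-up placeholder, and the snippet writes to a demo file rather than the real hosts path (which requires Administrator rights):

```shell
# Sketch: append the internal hostname mapping.
# SERVER_NAME is hypothetical -- use the name shown on your instance's desktop.
# We target a demo file here; on the actual server the target is
# c:\windows\system32\drivers\etc\hosts, edited as Administrator.
SERVER_NAME="ip-172-31-5-10.eu-west-1.compute.internal"
HOSTS_FILE="./hosts.demo"
echo "127.0.0.1 ${SERVER_NAME}" >> "${HOSTS_FILE}"
cat "${HOSTS_FILE}"
```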

Once this is done, we are ready to proceed with the DPK installation. For further information on this, I suggest you check My Oracle Support.

Allowing External Access

If you would like to access the PeopleSoft Update Image without connecting to the server over remote desktop, you will need to take some additional steps.

Firstly, you will need to edit the security group linked to your AWS instance to allow incoming TCP connections on port 8000, which is the port used by the PeopleSoft Update Image web server by default.

On top of this, you will need to change the firewall settings on the Windows server itself. This is done within the Windows Firewall with Advanced Security application, in which you need to define an inbound rule also allowing TCP connections on port 8000:
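The same inbound rule can also be created from an elevated command prompt instead of the GUI; a sketch (the rule name is arbitrary):

```
netsh advfirewall firewall add rule name="PeopleSoft PIA 8000" dir=in action=allow protocol=TCP localport=8000
```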


Finally, if you want to use the same IP address every time you use the AWS instance, you will need to define an Elastic IP and associate it with the server. This fixed IP address has an additional cost, but if you are planning to distribute the URL for the PeopleSoft application to other people who do not have access to the AWS Console to check the current IP address, it may be the only way to go.


Installing Tomcat on Docker

Pat Shuff - Mon, 2016-04-18 02:07
A different way of looking at running Tomcat is to ignore the cloud platform and install and configure everything inside a Docker instance. Rather than picking a cloud instance, we are going to run this inside VirtualBox and assume that all of the cloud vendors will allow you to run Docker or configure a Docker instance on a random operating system. We initially installed and configured Oracle Enterprise Linux 7.0 from an ISO into VirtualBox. We then installed Docker and started the service with the following commands:
sudo yum install docker
sudo systemctl start docker

We can search for a Tomcat installation and pull it down to run. We find a Tomcat 7.0 version from the search and pull down the image:

docker search tomcat
docker pull consol/tomcat-7.0

We can run the new image that we pulled down with the commands

docker run consol/tomcat-7.0
docker ps
The docker ps command lets us look up the container ID, which we need to find the IP address of the instance running in Docker. In our example the container ID is 1e381042bdd2. To pull the IP address we execute
docker inspect -f '{{.NetworkSettings.IPAddress}}' 1e381042bdd2
This returns the IP address 172.17.0.2, so we can open this address on port 8080 to see the Tomcat installation.

In summary, this was not much different from going through Bitnami. If you have access to Docker containers in a cloud service, this might be an alternative. All three vendors not only support Docker instances but have also announced or already offer Docker services through IaaS. Time-wise it did take a little longer, because we had to download an operating system as well as Java and Tomcat. The key benefit is that we can create a master instance and build our own Docker image to launch. We can script Docker to restart if things fail, and take more advanced actions if we run out of resources. Overall, this might be worth researching as an alternative to provisioning and running services.

PHP OCI8 2.0.11 and 2.1.1 are available on PECL

Christopher Jones - Sun, 2016-04-17 23:51

I've released PHP OCI8 2.0.11 (for supported PHP 5.x versions) and 2.1.1 (for PHP 7) to PECL. Windows DLLs on PECL have been built by the PHP release team. The updated OCI8 code has also been merged to the PHP source branches and should land in the future PHP 5.6.21 and PHP 7.0.7 source bundles, respectively.

PHP OCI8 2.1.1 fixes a bug triggered by using oci_fetch_all() with a query having more than eight columns. To install on PHP 7 via PECL, use pecl install oci8

PHP OCI8 2.0.11 has one fix for a bind regression with 32-bit PHP. To install on PHP 5.x use pecl install oci8-2.0.11

My old Underground PHP and Oracle Manual still contains a lot of useful information about using PHP with Oracle Database. Check it out!

Getting the current SQL statement from SYS_CONTEXT using Fine Grained Auditing

The Anti-Kyte - Sun, 2016-04-17 14:44

The stand-off between Apple and the FBI has moved on. In essence both sides have taken it in turns to refuse to tell each other how to hack an iPhone.

Something else that tends to tell little or nothing in the face of repeated interrogation is SYS_CONTEXT(‘userenv’, ‘current_sql’).
If you’re fortunate enough to be running on Enterprise Edition, however, a Fine Grained Auditing Policy will loosen its tongue.

Consider the following scenario.
You’ve recently got a job as a database specialist with Spectre.
They’ve been expanding their IT department recently as the result of their “Global Surveillance Initiative”.

There’s not much of a view from your desk as there are no windows in the hollowed out volcano that serves as the Company’s HQ.
The company is using Oracle 12c Enterprise Edition.

Everything seems to be going along nicely until you suddenly get a “request” from the Head of Audit, a Mr Goldfinger.
The requirement is that any changes to employee data in the HR system are recorded, together with the statement executed to change each record.
Reading between the lines, you suspect that Mr White – head of HR – is not entirely trusted by the hierarchy.

Whilst journalling triggers are common enough, capturing the actual SQL used to make DML changes is a bit more of a challenge.
Explaining this to Mr Goldfinger is unlikely to be a career-enhancing move. You’re going to have to be a bit creative if you want to avoid the dreaded “Exit Interview” (followed by a visit to the Piranha tank).

First of all though….

Fine Grained Auditing Configuration

You need to do a quick check to make sure that Fine Grained Auditing is available and configured in the way you would expect.

Access to Fine Grained Auditing

FGA is a feature of Oracle Enterprise Edition.
If you are working on any other edition of the database, Oracle will tell you that FGA is not enabled. For example, running the following on Oracle Express Edition 11g…

begin
    dbms_fga.add_policy
    (
        object_schema => 'HR',
        object_name => 'DEPARTMENTS',
        policy_name => 'WATCHING YOU',
        audit_condition => null,
        statement_types => 'INSERT, UPDATE, DELETE'
    );
end;
/

… will result in the Oracle database telling you you’re fish-food …

ERROR at line 1:
ORA-00439: feature not enabled: Fine-grained Auditing
ORA-06512: at "SYS.DBMS_FGA", line 20
ORA-06512: at line 2

You can avoid this embarrassment simply by checking what edition of Oracle you’re running :

select banner
from v$version
/

In the case of Oracle 12c, you’ll get :

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE	12.1.0.2.0	Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production

If you don’t happen to work for a worldwide crime syndicate and/or don’t have access to an Enterprise Edition database, you can still have a play around by means of a Developer Day Virtual Box image.

Unified Auditing

The other thing you need to check is just where audit records are going to be written to. This is not so much a requirement for the solution being implemented here, but it is relevant to some of the examples that follow.

By default, unified auditing is not implemented in 12c and you can confirm this by running :

select value
from v$option
where parameter = 'Unified Auditing'
/

If the query returns FALSE, then Unified Auditing has not been enabled.
Otherwise, it’s probably worth taking a look at the documentation to see how this affects auditing behaviour in the database.

Initialization Parameters

Assuming Unified Auditing has not been configured, the location of the audit records will be dictated by the AUDIT_TRAIL initialization parameter. You can check this value as follows :

select value
from v$parameter
where name = 'audit_trail'
/

If the value is set to DB or DB, EXTENDED then any FGA policies will write to the tables mentioned below.
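If the parameter needs changing, note that it is static, so the new value only takes effect after an instance restart; a sketch (run as SYSDBA):

```sql
-- audit_trail is a static parameter: set it in the spfile, then bounce
alter system set audit_trail=DB, EXTENDED scope=spfile;
-- shutdown immediate / startup for the change to take effect
```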

Now to take a closer look at FGA…

How long before SYS_CONTEXT cracks ?

To test exactly when you will be able to retrieve the DML statement you’re interested in, you can knock up a quick test.

First, you need a table to audit against for testing purposes :

create table trigger_messages
(
    message varchar2(4000)
)
/

Next, a simple procedure to insert a record :

create or replace procedure add_message( i_msg in trigger_messages.message%type)
is
begin
    insert into trigger_messages(message) values( i_msg);
end;
/

Now for a DML trigger on the table :

create or replace trigger trg_msg
    for insert or update or delete 
    on trigger_messages
    compound trigger
    
    l_action varchar2(10);
    before statement is
    begin
        l_action := case when inserting then 'INSERT' when updating then 'UPDATE' else 'DELETE' end;
        dbms_output.put_line('Before Statement '||l_action);
        dbms_output.put_line( nvl( sys_context('userenv', 'current_sql'), 'My lips are sealed'));
    end before statement;
    
    before each row is
    begin
        dbms_output.put_line('Before Row '||l_action);
        dbms_output.put_line( nvl( sys_context('userenv', 'current_sql'), 'My lips are sealed'));
    end before each row;
    
    after each row is
    begin
        dbms_output.put_line('After Row '||l_action);
        dbms_output.put_line( nvl( sys_context('userenv', 'current_sql'), 'My lips are sealed'));
    end after each row;
    
    after statement is
    begin
        dbms_output.put_line('After Statement '||l_action);
        dbms_output.put_line( nvl( sys_context('userenv', 'current_sql'), 'My lips are sealed'));
    end after statement;
end trg_msg;
/

Next up, you need a procedure to serve as a handler for a Fine Grained Auditing event. The reason for this will become apparent when we run the test. Note that the signature for an FGA handler procedure is mandated :

create or replace procedure trg_msg_fga
(
    object_schema varchar2,
    object_name varchar2,
    policy_name varchar2
)
is
begin
    dbms_output.put_line('FGA Policy');
    dbms_output.put_line(sys_context('userenv', 'current_sql'));
    dbms_output.put_line(sys_context('userenv', 'current_bind'));
    dbms_output.put_line(sys_context('userenv', 'current_sql_length'));
end;
/

Now all that’s left to do is to create an FGA policy on the table :

begin
    dbms_fga.add_policy
    (
        object_schema => 'MIKE',
        object_name => 'TRIGGER_MESSAGES',
        policy_name => 'FIRING_ORDER',
        statement_types => 'INSERT, UPDATE, DELETE',
        handler_schema => 'MIKE',
        handler_module => 'TRG_MSG_FGA'
    );
end;
/

You can confirm that the policy has been created successfully and is enabled by querying DBA_AUDIT_POLICIES…

select object_schema, object_name, enabled,
    sel, ins, upd, del
from dba_audit_policies
where policy_owner = user
and policy_name = 'FIRING_ORDER'
/

OBJECT_SCHEMA	OBJECT_NAME	     ENABLED	SEL   INS   UPD   DEL
--------------- -------------------- ---------- ----- ----- ----- -----
MIKE		TRIGGER_MESSAGES     YES	NO    YES   YES   YES

Now you’re ready to test…

set serveroutput on size unlimited

begin 
    add_message('Spectre - at the cutting-edge of laser technology');
end;
/

update trigger_messages set message = 'Spectre - coming to a browser near you'
/

delete from trigger_messages
/

The results are quite interesting…

Before Statement INSERT
My lips are sealed
Before Row INSERT
My lips are sealed
FGA Policy
INSERT INTO TRIGGER_MESSAGES(MESSAGE) VALUES( :B1 )
#1(49):Spectre - at the cutting-edge of laser technology
51
After Row INSERT
My lips are sealed
After Statement INSERT
My lips are sealed

PL/SQL procedure successfully completed.

Before Statement UPDATE
My lips are sealed
Before Row UPDATE
My lips are sealed
FGA Policy
update trigger_messages set message = 'Spectre - coming to a browser near you'
78
After Row UPDATE
My lips are sealed
After Statement UPDATE
My lips are sealed

1 row updated.

Before Statement DELETE
My lips are sealed
Before Row DELETE
My lips are sealed
After Row DELETE
My lips are sealed
FGA Policy
delete from trigger_messages
28
After Statement DELETE
My lips are sealed

1 row deleted.

From this you conclude that :

  • sys_context is only populated with the current statement inside the fga handler procedure
  • the handler procedure is invoked prior to the after row event for inserts and updates, but not for deletes

At this point, you consider that it might just be simpler to interrogate the DBA_FGA_AUDIT_TRAIL view, which has also captured the DML statements we’ve just run :

select sql_text
from dba_fga_audit_trail
where policy_name = 'FIRING_ORDER'
order by timestamp
/  

SQL_TEXT
----------------------------------------------------------------------------------------------------------------------------------
INSERT INTO TRIGGER_MESSAGES(MESSAGE) VALUES( :B1 )
update trigger_messages set message = 'Spectre - coming to a browser near you'
delete from trigger_messages

Note – the bind values for the procedure call can be found in the SQL_BIND column of this view.

However, it’s worth noting that we haven’t actually committed the test transaction, yet these records are already present.
They will remain there even if the transaction is rolled back.

In the end, you decide that the best approach is a journalling trigger…

The Unnecessarily Slow Dipping Mechanism – the DML trigger

Due to the nature of the organization, Spectre doesn’t have employees. It has associates. This is reflected in the table that you need to audit :

create table associates
(
    emp_id number,
    emp_name varchar2(100),
    job_title varchar2(30)
)
/

The table to hold the audit trail will probably look something like this :

create table assoc_audit
(
    action varchar2(6),
    changed_by varchar2(30),
    change_ts timestamp,
    emp_id number,
    emp_name varchar2(100),
    job_title varchar2(30),
    statement varchar2(4000),
    binds varchar2(4000)
)
/

It’s worth pausing at this point to note that SYS_CONTEXT can report up to 32k of a statement.
It does this by splitting the statement into eight 4k chunks, available in the USERENV context variables CURRENT_SQL, CURRENT_SQL1…CURRENT_SQL7.
It also provides the length of the statement it currently holds in the CURRENT_SQL_LENGTH variable.
Therefore, you may consider having a 32k varchar statement column in the audit table ( if this is enabled on your database), or even a column for the contents of each of these variables.
For the sake of simplicity, plus the fact that none of the examples here are very large, you decide to stick with just the one 4k varchar column to hold the statement.
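If you did want the full statement, a handler could stitch the chunks together along these lines (variable names are made up; only the USERENV parameter names come from the documentation):

```sql
declare
    l_stmt  clob;
    l_chunk varchar2(4000);
begin
    -- CURRENT_SQL holds the first 4k chunk; CURRENT_SQL1..CURRENT_SQL7 the rest
    l_stmt := sys_context('userenv', 'current_sql');
    for i in 1..7 loop
        l_chunk := sys_context('userenv', 'current_sql'||to_char(i));
        exit when l_chunk is null;
        l_stmt := l_stmt || l_chunk;
    end loop;
    -- l_stmt now contains up to 32k of the triggering statement
end;
/
```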

There’s a procedure for adding new records to the table :

create or replace procedure add_associate
(
    i_emp_id in associates.emp_id%type,
    i_name in associates.emp_name%type,
    i_job_title in associates.job_title%type
)
is
begin
    insert into associates( emp_id, emp_name, job_title)
    values( i_emp_id, i_name, i_job_title);
end;
/
    

In the real world this would probably be in a package, but hey, you’re working for Spectre.

Now we need a handler for the FGA policy that we’re going to implement. In order for the context values that are captured to be accessible to the trigger, this handler is going to be part of a package which includes a couple of package variables :

create or replace package assoc_fga_handler
as

    g_statement varchar2(4000);
    g_binds varchar2(4000);
    
    -- The procedure to be invoked by the FGA policy.
    -- Note that the signature for this procedure is mandatory
    procedure set_statement
    (
        object_schema varchar2,
        object_name varchar2,
        policy_name varchar2
    );
end assoc_fga_handler;
/

create or replace package body assoc_fga_handler
as

    procedure set_statement
    (
        object_schema varchar2,
        object_name varchar2,
        policy_name varchar2
    )
    is
    begin
        g_statement := sys_context('userenv', 'current_sql');
        g_binds := sys_context('userenv', 'current_bind');
    end set_statement;
end assoc_fga_handler;
/

Now for the trigger. You may notice some compromises here …

create or replace trigger assoc_aud
    for insert or update or delete on associates
    compound trigger

    type typ_audit is table of assoc_audit%rowtype index by pls_integer;
    tbl_audit typ_audit;
    l_idx pls_integer := 0;
    
    after each row is
    begin
        l_idx := tbl_audit.count + 1;
        tbl_audit(l_idx).action := case when inserting then 'INSERT' when updating then 'UPDATE' else 'DELETE' end;
        tbl_audit(l_idx).changed_by := user;
        tbl_audit(l_idx).change_ts := systimestamp;
        tbl_audit(l_idx).emp_id := case when inserting then :new.emp_id else :old.emp_id end;
        tbl_audit(l_idx).emp_name := case when inserting then :new.emp_name else :old.emp_name end;
        tbl_audit(l_idx).job_title := case when inserting then :new.job_title else :old.job_title end;
    end after each row;
    
    after statement is
    begin
        for i in 1..tbl_audit.count loop
            tbl_audit(i).statement := assoc_fga_handler.g_statement;
            tbl_audit(i).binds := assoc_fga_handler.g_binds;
        end loop;
        forall j in 1..tbl_audit.count
            insert into assoc_audit values tbl_audit(j);
        -- cleardown the array
        tbl_audit.delete;    
    end after statement;
end assoc_aud;
/

Because the FGA policy does not fire until after the AFTER ROW trigger for a DELETE, we are only guaranteed to capture the CURRENT_SQL value in an AFTER STATEMENT trigger.
The upshot is that we’re left with a PL/SQL array which is not constrained by a LIMIT clause. In these circumstances it’s not too much of an issue: Spectre has quite a small number of employees…er…associates, so you’re not likely to end up with an array large enough to cause memory issues.
On a potentially larger volume of records you may well consider splitting the INSERT and UPDATE portions of the trigger so that you can limit the size of the arrays generated by those operations. For DELETEs, however, it appears that we may well be stuck with this approach.
On a not entirely unrelated subject, Jeff Kemp has an interesting method of speeding up Journalling Triggers.

All that remains is for the FGA policy….

begin
    dbms_fga.add_policy
    (
        object_schema => 'MIKE',
        object_name => 'ASSOCIATES',
        policy_name => 'ASSOCIATES_DML',
        statement_types => 'INSERT, UPDATE, DELETE',
        handler_schema => 'MIKE',
        handler_module => 'ASSOC_FGA_HANDLER.SET_STATEMENT'
    );
end;
/

…and now you can test…

set serveroutput on size unlimited
--
-- Cleardown the tables before running the test
--
truncate table assoc_audit
/

truncate table associates
/

begin
    add_associate(1, 'Odd Job', 'HENCHMAN');
    add_associate(2, 'Jaws', 'HENCHMAN');
    add_associate(3, 'Mayday', 'HENCHWOMAN');
    add_associate(4, 'Ernst Stavro Blofeld', 'CRIMINAL MASTERMIND');
    add_associate(5, 'Emilio Largo', 'Deputy Evil Genius');
    
end;
/

insert into associates( emp_id, emp_name, job_title)
values(6, 'Hans', 'Bodyguard and Piranha keeper')
/

commit;

update associates
set job_title = 'VALET'
where emp_id = 1
/
commit;


delete from associates
where emp_id = 1
/

commit;

-- Spectre is an Equal Opportunities Employer...and I need a statement
-- affecting multiple rows to test so...
update associates
set job_title = 'HENCHPERSON'
where job_title in ('HENCHMAN', 'HENCHWOMAN')
/

commit;

It is with a sense of relief that, when you check the audit table after running this you find …

select action, emp_name, 
    statement, binds
from assoc_audit
order by change_ts
/

ACTION EMP_NAME 	    STATEMENT							 BINDS
------ -------------------- ------------------------------------------------------------ ----------------------------------------
INSERT Odd Job		    INSERT INTO ASSOCIATES( EMP_ID, EMP_NAME, JOB_TITLE) VALUES(  #1(1):1 #2(7):Odd Job #3(8):HENCHMAN
			     :B3 , :B2 , :B1 )

INSERT Jaws		    INSERT INTO ASSOCIATES( EMP_ID, EMP_NAME, JOB_TITLE) VALUES(  #1(1):2 #2(4):Jaws #3(8):HENCHMAN
			     :B3 , :B2 , :B1 )

INSERT Mayday		    INSERT INTO ASSOCIATES( EMP_ID, EMP_NAME, JOB_TITLE) VALUES(  #1(1):3 #2(6):Mayday #3(10):HENCHWOMAN
			     :B3 , :B2 , :B1 )

INSERT Ernst Stavro Blofeld INSERT INTO ASSOCIATES( EMP_ID, EMP_NAME, JOB_TITLE) VALUES(  #1(1):4 #2(20):Ernst Stavro Blofeld #3(
			     :B3 , :B2 , :B1 )						 19):CRIMINAL MASTERMIND

INSERT Emilio Largo	    INSERT INTO ASSOCIATES( EMP_ID, EMP_NAME, JOB_TITLE) VALUES(  #1(1):5 #2(12):Emilio Largo #3(18):Depu
			     :B3 , :B2 , :B1 )						 ty Evil Genius

INSERT Hans		    insert into associates( emp_id, emp_name, job_title)
			    values(6, 'Hans', 'Bodyguard and Piranha keeper')

UPDATE Odd Job		    update associates
			    set job_title = 'VALET'
			    where emp_id = 1

DELETE Odd Job		    delete from associates
			    where emp_id = 1

UPDATE Jaws		    update associates
			    set job_title = 'HENCHPERSON'
			    where job_title in ('HENCHMAN', 'HENCHWOMAN')

UPDATE Mayday		    update associates
			    set job_title = 'HENCHPERSON'
			    where job_title in ('HENCHMAN', 'HENCHWOMAN')


10 rows selected.

Looks like the Piranhas will be going hungry…for now !


Filed under: Oracle, PL/SQL, SQL Tagged: audit_trail initialization parameter, compound trigger, dba_fga_audit_trail, dbms_fga.add_policy, SYS_CONTEXT, sys_context current_bind, sys_context current_sql, sys_context current_sql_length, v$option, v$parameter, v$version

Online Relocation of Database File : ASM to FileSystem and FileSystem to ASM

Hemant K Chitale - Sun, 2016-04-17 10:44
There have been a few published examples of the online datafile relocation feature in 12c.  The examples I've seen are on filesystem.

Here I show online relocation to/from ASM and FileSystem.

SQL> connect system/oracle
Connected.
SQL> create tablespace test_relocate;

Tablespace created.

SQL> create table test_relocate_tbl
2 tablespace test_relocate
3 as select * from dba_objects;

Table created.

SQL> select tablespace_name, bytes/1024
2 from user_segments
3 where segment_name = 'TEST_RELOCATE_TBL';

TABLESPACE_NAME BYTES/1024
------------------------------ ----------
TEST_RELOCATE 13312

SQL> select file_name, bytes/1024
2 from dba_data_files
3 where tablespace_name = 'TEST_RELOCATE';

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1024
----------
+DATA/NONCDB/DATAFILE/test_relocate.260.909444793
102400


SQL>
SQL> alter database move datafile
2 '+DATA/NONCDB/DATAFILE/test_relocate.260.909444793'
3 to '/oradata/NONCDB/test_relocate_01.dbf';

Database altered.

SQL> !ls -l /oradata/NONCDB
total 102408
-rw-r----- 1 oracle asmdba 104865792 Apr 17 23:39 test_relocate_01.dbf

SQL> 
SQL> alter database move datafile  
2 '/oradata/NONCDB/test_relocate_01.dbf'
3 to '+DATA';

Database altered.

SQL> select file_name, bytes/1024
2 from dba_data_files
3 where tablespace_name = 'TEST_RELOCATE';

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1024
----------
+DATA/NONCDB/DATAFILE/test_relocate.260.909445261
102400


SQL>
SQL> !ls -l /oradata/NONCDB
total 0

SQL>


Note that I was courageous enough not to use the KEEP keyword (which is optional !).
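For completeness, the same move with KEEP (which retains the source file as a copy after the relocation) would look like this; file names are taken from the example above:

```sql
-- KEEP retains '/oradata/NONCDB/test_relocate_01.dbf' after the move
alter database move datafile
  '/oradata/NONCDB/test_relocate_01.dbf'
  to '+DATA' keep;
```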
.
.
.

Categories: DBA Blogs

ADF 12c Custom Property Groovy and AllowUntrustedScriptAccess Annotation

Andrejus Baranovski - Sun, 2016-04-17 04:41
To execute a Groovy expression in ADF 12c (to call a Java method from Groovy), you must specify trusted mode. Read more about it in my previous post - ADF BC 12c New Feature - Entity-Level Triggers. Setting the mode to trusted works in most cases. It doesn't work if we want to execute a Groovy expression (calling a Java method in a ViewRow or Entity class) for a custom property. In the case of a custom property and Groovy calling a custom method, we need to annotate the Java class with AllowUntrustedScriptAccess. This does the trick and the Groovy expression can call the custom method.

To demonstrate the use case, I used the mandatory property. This is a standard property that controls whether an attribute is required or not. By default, the mandatory property is static, but we can make it dynamic with a Groovy expression. I have implemented a rule where the Salary attribute is required if its value is more than 5000:


Salary is not required if the value is less than 5000. This is just an example; you can implement more complex logic:


There is a method in the ViewRow class to calculate the required property for the Salary attribute:


The method is callable from the custom property Groovy expression, as we have set the AllowUntrustedScriptAccess annotation on the ViewRow class. The annotation definition must list all allowed methods:

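Since the screenshot isn't reproduced here, a rough sketch of what the annotated View Row class might look like (the class name is hypothetical, and the exact annotation import should be checked against your ADF 12c version; compiling this requires the ADF BC libraries):

```java
// Sketch only -- the annotation must list every method Groovy may call.
@AllowUntrustedScriptAccess({"mandatory"})
public class EmployeesViewRowImpl extends ViewRowImpl {
    // Invoked from the custom property Groovy expression
    public Boolean mandatory() {
        Number salary = (Number) getAttribute("Salary");
        return salary != null && salary.intValue() > 5000;
    }
}
```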

Here you can see the Groovy expression that calls the ViewRow class method - mandatory. The expression is assigned to a custom property, which will be referenced from the ADF UI:


There is an issue with JDEV 12c: it fails to parse custom properties set with Groovy expressions and removes the custom property from the code. To prevent this behavior, specify an empty string for the custom property value (this will keep the Groovy expression):


Luckily, in this case of the mandatory check, JDEV by default generates the mandatory property for the ADF UI component. We simply override its original value with the custom property and calculate the new value in the custom trusted method referenced by the Groovy expression:


Download sample application - ADF12cGroovyCustomPropertyApp.zip.

Links for 2016-04-16 [del.icio.us]

Categories: DBA Blogs

Video : Flashback Query

Tim Hall - Sat, 2016-04-16 09:28

Today’s video is a quick demo of flashback query.

If you prefer to read articles, rather than watch videos, you might be interested in these articles.

The cameo for this video comes courtesy of Dina Blaschczok, a DBA based in South Africa and a friend of the family. When the wife goes down to SA, Dina takes care of her and occasionally introduces her to big cats.

Hackathon weekend at Fishbowl Solutions – Google Vision, Slack, and Email Integrations with Oracle WebCenter

It’s hackathon weekend at Fishbowl Solutions. Fishbowl’s consulting and development teams – the hackers – along with members of the sales and marketing teams join forces to collaborate on and develop new software applications. While the overall goal of the hackathon may be to produce usable software, the event also is a great learning opportunity for participants and results in a lot of fun.

This is Fishbowl’s 4th annual hackathon and previous events have produced “beta” software that eventually evolved into shippable software components that benefited customers. Here are recaps on the 2012 and 2014 events.

This year there were over 16 different ideas, and out of those, 3 teams were formed to develop the following:

  • Oracle WebCenter Portal and Slack integration – Slack is a popular collaboration tool for the enterprise that enables members to communicate across channels (specific topics), send direct messages, and drag and drop files for sharing. Integrating Slack with WebCenter Portal brings its popular features and ease of use directly in context of a user’s portal session, ensuring that collaboration is easy and reducing the amount of switching between applications to communicate with others – leaving the portal to send an email, for example.
WebCenter Portal and Slack Integration Team – Andy Weaver and Dan Haugen
  • Oracle WebCenter Content and Google Vision integration – This integration would enable the tagging of images upon check-in. The Google Vision API enables applications to understand the content of images by encapsulating machine learning models in an easy to use REST API. Using this technology, images are auto-classified into thousands of categories (e.g., “sailboat”, “lion”, “Eiffel Tower”). For example, you might check in a picture of a knit hat and it would be tagged with xKeywords of “hat”, “knit hat”, and “fashion accessories” without any human tagging. To further automate image discovery, the GSA can be used to map related terms so that searches for “beanie”, “stocking cap”, or “winter hat”, could also return the image. This tagging automation would have great implications for Oracle WebCenter customers that are using it for Digital Asset Management.
WebCenter Content and Google Vision Integration Team – Kim Negaard and Greg Bollom
  • Oracle WebCenter Content Email Check in - This integration would enable emails with attachments to be checked in to WebCenter Content automatically. Instead of the user having to check in the email itself and then relate each attachment to the associated email, which adds extra check-in steps, the emails and attachments would be parsed and sent to a user workspace in WebCenter. From there, users can tag and validate that the email should be checked in with the appropriate attachments – either from their desktops or mobile devices.
WebCenter Content and Email Checkin Team Member – John Sim (fueling his hacking mind)

The hacking commenced at 3 PM today and will continue until 4 PM on Saturday, April 16th. Each team will then present their developed integration/component, and the other Fishbowl team members will vote on their favorite finished product. Check back on this blog next week to see who won.

Happy hacking!

Fishbowl Solutions Hackathon 2016 T-shirt

The post Hackathon weekend at Fishbowl Solutions – Google Vision, Slack, and Email Integrations with Oracle WebCenter appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

What’s Holding You Back From Giving Back?

Pythian Group - Fri, 2016-04-15 10:19

This week in honour of National Volunteer Week, I’m reflecting on the importance that volunteering has had in my life.

I’ve learned from each and every one of my experiences. From wrapping holiday presents at the mall, to helping source articles for an industry magazine, to wish granting, recruiting committee and board members, and providing HR advice and counsel. These experiences eventually led me to become a member of the board of directors for a number of organizations, and I was even named Board Chair.

Ironically, the rewards and benefits that I have received from the experiences far outweigh any amount of time I have given the many organizations I have supported over the years. Volunteering has provided me the opportunity to expand my skills and experience, and take on leadership roles long before I had the credentials to be hired for them. I initially started volunteering when I moved to Ottawa, and quickly learned that there is no better way to get to know your community, meet new people and expand your network. Once I started, I never looked back. I caught the “volunteer bug.” It is an important part of my life.

I am often asked how I find the time to volunteer. I always respond with, “like anything, if it’s important to you, you can and will find the time.” As I have expanded my family and career, I seek opportunities where I can continue to share my knowledge, skills and experience in ways that do not impinge on either. A perfect example of this would be career mentoring. I have been a mentor for a number of organizations including the HRPA, OCISO, and the WCT. I have been fortunate to have great mentors in the past and now pay it forward. I remain connected with many of them.

In my role as VP of HR at Pythian I was thrilled to champion our Love Your Community Programs. These programs provide our employees in over 36 countries with a volunteer day and opportunities for sponsorship – i.e. raising money for causes that are meaningful to them. The programs have allowed Pythian the opportunity to positively impact the communities where our employees live.

Volunteer Week highlights the importance of volunteering in our communities, and showcases the impact it has on the lives of both the volunteer, and the communities they support. What’s holding you back from giving back?

And because it couldn’t be said any better: “We make a living by what we get, but we make a life by what we give.”

Winston Churchill

Categories: DBA Blogs

Optimizer Stew – Parsing the Ingredients to Control Execution Plans

Pythian Group - Fri, 2016-04-15 09:01

No matter how many times I have worked with Outlines, Baselines and Profiles, I keep having to look up reminders as to the differences between these three.

There is seemingly no end to the number of articles and blogs that tell you what needs to be licensed, how to use them, and the version of Oracle in which each made its debut appearance.

This blog will discuss none of that. This brief article simply shows the definitions of each from the glossary for the most current version of the Oracle database. As of this writing, that version is 12.1.0.2.

And here they are.

Stored Outline

A stored outline is simply a set of hints for a SQL statement. The hints in stored outlines direct the optimizer to choose a specific plan for the statement.

Link to Stored Outline in the Oracle Glossary

SQL plan baseline

A SQL plan baseline is a set of one or more accepted plans for a repeatable SQL statement. Each accepted plan contains a set of hints, a plan hash value, and other plan-related information. SQL plan management uses SQL plan baselines to record and evaluate the execution plans of SQL statements over time.

Link to SQL Plan Baseline in the Oracle Glossary

SQL profile

A SQL profile is a set of auxiliary information built during automatic tuning of a SQL statement. A SQL profile is to a SQL statement what statistics are to a table. The optimizer can use SQL profiles to improve cardinality and selectivity estimates, which in turn leads the optimizer to select better plans.

Link to SQL Profile in the Oracle Glossary

Categories: DBA Blogs

technical diversion - DBaaS Rest APIs

Pat Shuff - Fri, 2016-04-15 02:07
We are going to take a side trip today. I was at Collaborate 2016 and one of the questions that came up was how do you provision 40 database instances for a lab. I really did not want to sit and click through 40 screens and log into 40 accounts so I decided to do a little research. It turns out that there is a relatively robust REST api that allows you to create, read, update, and delete database instances. The DBaaS Rest Api Documentation is a good place to start to figure out how this works.

To list the instances running in the database service, use the following command. To make things easier and to allow us to script creation, we define three variables on the command line. I did most of this testing on a Mac, so it should translate to Linux and Cygwin. The three variables that we need to create are

  • ODOMAIN - instance domain that we are using in the Oracle cloud
  • OUID - username that we log in as
  • OPASS - password for this instance domain/username
export ODOMAIN=mydomain
export OUID=cloud.admin
export OPASS=mypassword
curl -i -X GET -u $OUID:$OPASS -H "X-ID-TENANT-NAME: $ODOMAIN" -H "Content-Type:application/json" https://dbaas.oraclecloud.com/jaas/db/api/v1.1/instances/$ODOMAIN
What should return is
HTTP/1.1 200 OK
Date: Sun, 10 Apr 2016 18:42:42 GMT
Server: Oracle-Application-Server-11g
Content-Length: 1023
X-ORACLE-DMS-ECID: 005C2NB3ot26uHFpR05Eid0005mk0001dW
X-ORACLE-DMS-ECID: 005C2NB3ot26uHFpR05Eid0005mk0001dW
X-Frame-Options: DENY
X-Frame-Options: DENY
Vary: Accept-Encoding,User-Agent
Content-Language: en
Content-Type: application/json

{"uri":"https:\/\/dbaas.oraclecloud.com:443\/paas\/service\/dbcs\/api\/v1.1\/instances\/metcsgse00027","service_type":"dbaas","implementation_version":"1.0","services":[{"service_name":"test-hp","version":"12.1.0.2","status":"Running","description":"Example service instance","identity_domain":"metcsgse00027","creation_time":"Sun Apr 10 18:5:26 UTC 2016","last_modified_time":"Sun Apr 10 18:5:26 UTC 2016","created_by":"cloud.admin","sm_plugin_version":"16.2.1.1","service_uri":"https:\/\/dbaas.oraclecloud.com:443\/paas\/service\/dbcs\/api\/v1.1\/instances\/metcsgse00027\/test-hp"},{"service_name":"db12c-hp","version":"12.1.0.2","status":"Running","description":"Example service instance","identity_domain":"metcsgse00027","creation_time":"Sun Apr 10 18:1:21 UTC 2016","last_modified_time":"Sun Apr 10 18:1:21 UTC 2016","created_by":"cloud.admin","sm_plugin_version":"16.2.1.1","service_uri":"https:\/\/dbaas.oraclecloud.com:443\/paas\/service\/dbcs\/api\/v1.1\/instances\/metcsgse00027\/db12c-hp"}],"subscriptions":[]}

If you get back anything other than a 200, it means that the identity domain, username, or password is incorrect. Note that we get back a JSON structure that contains two database instances that were previously created, test-hp and db12c-hp. Both are up and running, and both are 12.1.0.2 instances. We don't learn much more than that from the listing, but we can dive a little deeper by including the service name as part of the request. A screen shot of the deeper detail is shown below.
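The JSON that comes back is easy to post-process. As a sketch (assuming the same ODOMAIN, OUID, and OPASS variables from above; the function names and the response.json file name are my own), we can cache the listing and pull out just the service names, versions, and statuses:

```shell
# list_instances fetches the instance listing to a file; show_services
# summarizes it. Both function names and response.json are illustrative.
list_instances() {
  curl -s -X GET -u "$OUID:$OPASS" \
    -H "X-ID-TENANT-NAME: $ODOMAIN" -H "Content-Type:application/json" \
    "https://dbaas.oraclecloud.com/jaas/db/api/v1.1/instances/$ODOMAIN" \
    -o response.json
}

show_services() {
  # $1 = file holding the JSON response from the instances endpoint
  python3 - "$1" <<'PYEOF'
import json, sys

with open(sys.argv[1]) as f:
    doc = json.load(f)
# one line per service instance: name, version, status
for svc in doc.get("services", []):
    print(svc["service_name"], svc["version"], svc["status"])
PYEOF
}
```

Running list_instances followed by show_services response.json prints one line per instance, which is far easier to scan (or feed to another script) than the raw JSON blob.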

A list of the most common commands is shown in the screen shot below.

The key options to remember are:

  • list: -X GET
  • stop: -X POST --data '{ "lifecycleState" : "Stop" }'
  • restart: -X POST --data '{ "lifecycleState" : "Restart" }'
  • delete: -X DELETE **need to add the instance name at the end, for example db12c-hp in request above
  • create: -X POST --data @createDB.json
In the create option we include a json file that defines everything for the database instance.
{
  "serviceName": "test-hp",
  "version": "12.1.0.2",
  "level": "PAAS",
  "edition": "EE_HP",
  "subscriptionType": "HOURLY",
  "description": "Example service instance",
  "shape": "oc3",
  "vmPublicKeyText": "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAnrfxP1Tn50Rvuy3zgsdZ3ghooCclOiEoAyIl81Da0gzd9ozVgFn5uuSM77AhCPaoDUnWTnMS2vQ4JRDIdW52DckayHfo4q5Z4N9dhyf9n66xWZM6qyqlzRKMLB0oYaF7MQQ6QaGB89055q23Vp+Pk5Eo+XPUxnfDR6frOYZYnpONyZ5+Qv6pmYKyxAyH+eObZkxFMAVx67VSPzStimNjnjiLrWxluh4g3XiZ1KEhmTQEFaLKlH2qdxKaSmhVg7EA88n9tQDWDwonw49VXUn/TaDgVBG7vsWzGWkRkyEN57AhUhRazs0tEPuGI2jXY3V8Q00w3wW38S/dgDcPFdQF0Q== rsa-key-20160107",
  "parameters": [
    {
      "type": "db",
      "usableStorage": "20",
      "adminPassword": "Test123_",
      "sid": "ORCL",
      "pdb": "PDB1",
      "failoverDatabase": "no",
      "backupDestination": "none"
    }
  ]
}
 
The vmPublicKeyText is our id_rsa.pub file that we use to connect to the service. I did not include a backup destination but could have. I left it out of this example because we would have to embed credentials in the file, and I did not want to show a service definition with a username and password.

Overall, I prefer scripting everything and running this from a command line. Call me old school but sitting for hours and clicking through screens when I can script it and get a notification when it is done appeals to me.
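The 40-instance lab scenario that started this exercise can then be scripted as a loop. The sketch below assumes the createDB.json template shown above; the lab- name prefix and per-instance file names are hypothetical, and the curl call is the same create request described earlier, left commented out so you can review the generated payloads before submitting anything:

```shell
# make_payload rewrites serviceName in the createDB.json template;
# provision_all writes one payload file per instance.
make_payload() {
  # $1 = service name to substitute for the template's "test-hp"
  sed "s/\"serviceName\": \"test-hp\"/\"serviceName\": \"$1\"/" createDB.json
}

provision_all() {
  # $1 = number of instances to generate payloads for
  n=$1
  i=1
  while [ "$i" -le "$n" ]; do
    make_payload "lab-$i" > "createDB-$i.json"
    # Uncomment to actually submit each create request:
    # curl -i -X POST -u "$OUID:$OPASS" \
    #   -H "X-ID-TENANT-NAME: $ODOMAIN" -H "Content-Type:application/json" \
    #   --data "@createDB-$i.json" \
    #   "https://dbaas.oraclecloud.com/jaas/db/api/v1.1/instances/$ODOMAIN"
    i=$((i + 1))
  done
}
```

Calling provision_all 40 generates createDB-1.json through createDB-40.json, each with its own serviceName; uncommenting the curl line turns it into the hands-off bulk provisioner for the lab.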

My Oracle Support (MOS) Portal Blog Has Moved

Joshua Solomin - Thu, 2016-04-14 17:12
Please Visit Us at Our New Location

As of Friday, April 15, 2016, the MOS Portal blog has moved to a new location within My Oracle Support Communities.

Come find us here, and be sure to bookmark the new page URL:

https://community.oracle.com/community/support/support-blogs/my-oracle-support-mos-blog

Going forward please direct any comments and discussion to the new site. Comments left here in the future will not receive responses and will be discarded.

Thank you, and we look forward to seeing you at the new site! 

-The Oracle Software Support Team


The Art & Science of HR—Chicago Style!

Linda Fishman Hoyle - Thu, 2016-04-14 11:49

“I’m not going to talk much about IT,” said Mark Hurd at the beginning of his keynote at Oracle HCM World in Chicago last week. That statement set the tone for the more than 1,400 attendees because HCM World is not a technology conference. Rather it is one where the art and science of HR come together to deliver thought-provoking discussions and revelations, which linger long after the event is over.

Oracle HCM World is a conference where one can get a new professional headshot for LinkedIn, Oracle Talent Profile, or any other tool of choice done with the help of great photographers and MAC makeup artists. It is a conference where a young female entrepreneur, Debbie Sterling of GoldieBlox, can inspire participants with her passion for raising a generation of girls who will later enter STEM professions and deliver the next breakthrough discoveries in science and engineering. It is a conference where Oracle HCM Cloud gets to shine not because it is built on Fusion technology, but because of all the amazing business benefits it delivers.

Back to Mark’s Keynote

Even though a typical corporate mentality is to cut expenses, Hurd said engaged employees are the key to success. That’s what drives productivity.

Rick Bell, Editor, Workforce, wrote this in his article entitled Oracle HCM World: What I Heard From Mark Hurd: “If the baseline of employee engagement is 70 percent and that improves by just a few percentage points, then Hurd explained he has turned higher productivity into millions of dollars.”

And more from Hurd: “Instead of cutting expenses, I drove productivity. It’s sound business. A higher engaged employee will do more work, better work, care more about the business and more about your customers. Engagement is the key to productivity. The higher engagement I have, the better.”

Mark also provided practical tips for what companies can do to increase employee engagement stating that Oracle HCM Cloud is Oracle’s own tool of choice for helping achieve our engagement goals and much more.

Why Did Attendance Increase 22% over 2015?

For starters, the uptick can be tied to Oracle's growing customer and prospect base. Another reason for the increase is the thought leadership approach we have taken with this conference. Huge kudos go out to Cara Capretta, VP of HCM Transformation Practice and her team; Gretchen Alarcon, GVP of HCM Product Strategy and her team; and our fabulous Marketing team.

Together they organized a standout event where industry luminaries took center stage. Along with Oracle’s Hurd, Capretta, and Alarcon, other speakers included Peter Cappelli, the George W. Taylor Professor of Management at The Wharton School and Director of Wharton's Center for Human Resources; Adam Grant, Professor of Management, The Wharton School; and Debbie Sterling, Founder and CEO of GoldieBlox.

Reaction to Oracle Learning Cloud

The HCM World buzz went through the roof as attendees got a look at our new Oracle Learning Cloud and a glimpse of what is to come in Work Life Solutions. Oracle HCM Cloud is on a roll.

Safra Catz and Detroit's CFO Woo Finance Execs with Transformation Stories

Linda Fishman Hoyle - Thu, 2016-04-14 11:37
A guest post by Natalia Rachelson, Oracle Cloud Applications

Change is never easy, but every organization must learn to evolve to stay relevant to their customers. That was the message Oracle CEO Safra Catz and Detroit CFO John Hill delivered at the Oracle Modern Finance Experience in Chicago last week.

Not even a cold snap that brought hail and snow to my home town (gasp) could keep people away. Over 400 finance executives turned out for the event, which was two to three times larger than the inaugural Modern Finance conference held last year. Judging by the 1,400 attendees at Oracle HCM World next door, I wouldn’t be surprised if the Modern Finance Experience becomes the not-to-be-missed industry conference for the finance community.

Our marketing team did a fabulous job lining up keynote speakers such as:
  • Michael Lewis, Journalist and Best-Selling Author of The Big Short, Flash Boys, Moneyball, and The Blind Side
  • Geoff Colvin, Senior Editor-at-Large, Fortune Magazine
  • Safra Catz, Oracle CEO
  • James Richards, CIO of GE Healthcare
  • John Hill, CFO, City of Detroit
Topics included the frictionless economy and the brand new business ideas that it generates; the importance of numbers in the era of Big Data; what it takes to transform a $40 billion company not once, but twice; and how to save a city that has hemorrhaged one million residents over the past 40 years. These were stories about business transformation, success, and survival, not presentations on the technical underpinnings of the cloud or the merits of multi-tenancy (hint: business people don’t really care). If someone talked about the cloud, it was as a means to an end.

Here are highlights from two keynotes that really stood out for me:

Safra Catz, Oracle CEO

Safra recalled the first transformation Oracle went through in the early 2000s and how it allowed us to survive the dot-com crash. Now in our second transformation, Safra called upon finance executives to forge ahead with their own transformation efforts while acknowledging human nature. “Even those who understand that change is absolutely necessary still have a hard time when change happens to them,” warned Safra.

With that in mind, change agents can navigate tough transformational waters and find other like-minded individuals in the organization to help them make the case for change. By automating processes and using intelligence built into the Oracle Cloud, companies can become smarter and more efficient. They’ll be able to deal with any disruption, whether it’s a weakening global economy, a new business model (i.e. Uber/VRBO-style sharing), or new technology such as Amazon’s one-click ordering (now being transformed into one-push ordering without having to go online).

Safra recounted that back in the early 2000s, Larry Ellison said Oracle would save one billion dollars by using its own products. As crazy as that sounded, Oracle ended up achieving the goal. Safra imparted this advice to the audience: "Set overly ambitious goals that seem unachievable and unleash your people on them. You will be surprised by what is actually possible."

John Hill, CFO of the City of Detroit

John blew the audience away with this comeback story. Detroit, once the symbol of the American dream and now crumbling under years of mismanagement and tectonic shifts in the automotive industry, filed for bankruptcy in 2013. It was the largest city to do so in US history.

John had previously saved Washington D.C. from financial woes and came to Detroit to oversee the bankruptcy process. He made a deal with the mayor to let him hire and fire city employees at will, which is almost unheard of in government. He also secured the right for him and the CIO to select a technology solution to run the city. They chose Oracle ERP Cloud, and even though the change process was extremely painful, the City of Detroit is now turning a corner. New businesses and new industries are moving into the city’s revitalized downtown, bringing with them a younger and better educated workforce, and hope for a brighter future.

Together We Win

CFOs appreciate these pragmatic accounts of leading change, as well as the special attention given to them at the event. Oracle executives Safra Catz, Mark Hurd, and Larry Ellison invited 100 VIPs to an evening at the Chicago Field Museum, amidst displays of Chinese terracotta warriors. The presence of Oracle’s entire senior management team demonstrates our commitment to becoming #1 in SaaS, and in ERP Cloud specifically.

How RDX Remote DBAs Stay on Top of Evolving Technologies

Chris Foot - Thu, 2016-04-14 08:00

As the modern database continues to grow, its inherent feature set expands alongside it. More solutions allow remote DBAs and internal teams alike to solve business problems tied to cutting costs and improving efficiencies. However, with these solutions comes increased complexity surrounding database administration, and it falls on the shoulders of DBAs to understand and leverage the industry’s evolving technologies.

Partner Webcast – Transition to the New Integration Model with Oracle SOA Cloud Service

Do you want to fully integrate your enterprise, using the same integration tool and skills for both cloud and on premises deployment? Oracle’s hybrid integration platform allows you to extract...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Tomcat on Azure

Pat Shuff - Thu, 2016-04-14 02:07
Today we are going to install Tomcat on Microsoft Azure. In the past three days we have installed Tomcat on Oracle Linux using Bitnami and onto a raw virtual image, as well as on Amazon AWS using a raw virtual image. Microsoft does not really have a notion of a MarketPlace like the AWS Commercial or Public Domain AMI Markets. It does have Bitnami, and we could go through the installation on Azure just like we did on the Oracle Compute Cloud. Rather than repeating on yet another platform, let's do something different and look at how we would install Tomcat on Windows on Azure. The Linux installation would be no different than the Oracle Linux raw virtual machine install, so let's try something new. You can find Tomcat on Linux Instructions or Tomcat on Windows Instructions. To be honest, we won't deviate much from the second one, so follow this post or follow the instructions from Microsoft; they are basically the same.

The steps that we need to follow are

  • Create a virtual machine with Windows and Java enabled
  • Download and install Tomcat
  • open the ports on the Azure portal
  • open the ports on Windows
We start by loading a virtual machine in the Azure portal. Doing a search for Tomcat returns the Bitnami image as well as a Docker Tomcat Container. This might work, but it does not fit our goals for this exercise. We might want to look at a Container, but for our future needs we need to be able to connect to a database and upload jar and war files. I am not sure that a Container will do this.

We search for a JDK and find three different versions. We select the JDK 7 and click Create.

In creating the virtual machine, we define a name for our system, a default login, a password (I prefer a confirmation on the password rather than just entering it once), our default way of paying, where to place its storage, and which data center to use based on the storage we select. We go with the default East configuration and click OK.

Since we are cheap and this is only for demo purposes, we will select A0 Standard. The recommended shape is A1 Standard, but it is $50 more per month. That said, after having played with the A0 Standard, we might be better off going with the A1 Standard. Yes, it is more expensive, but the speed of the A0 shape is so painful that it is almost unusable.

We will want to open up ports 80, 8080, and 443; these will all be used by Tomcat. This can be done by creating a new security rule and adding port exceptions when we create the virtual machine. We can see this in the installation menu.

We add these ports and can click Create to provision the virtual machine.

One of the things that I don't like about this configuration is that we have three additional ports that we want to add. When we add them we don't see the last two rules. It would be nice if we could see all of the ports that we define. We also need to make sure that we have a different priority for the port definition. The installation will fail if we assign priority 1000 to all of the ports.

Connection to the virtual machine is done through remote desktop. If you go to the portal and click on the virtual machine you will be able to connect to the console. I personally don't like connecting to a gui interface but prefer a command line interface. You must connect with a username and password rather than a digital certificate.

The first thing that comes up with Windows 2012 server is the server management screen. You can use this to configure the compute firewall and allow ports 80, 8080, and 443 to go to the internet. This also requires going to the portal to enable these ports as network rules. You have two configurations that you need to make to enable port 8080 to go from your desktop, through the internet, get routed to your virtual machine, then into your tomcat application.
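The Windows-side half of that configuration can also be done from an elevated command prompt instead of the Server Manager GUI. As a sketch, the helper below just emits the netsh commands for the three ports discussed above; the rule names are my own, and you would run the printed commands on the Windows VM itself:

```shell
# tomcat_fw_rules prints one "netsh advfirewall" command per Tomcat-related
# port; copy and paste the output into an elevated cmd window on the VM.
tomcat_fw_rules() {
  for port in 80 443 8080; do
    printf 'netsh advfirewall firewall add rule name="Tomcat %s" dir=in action=allow protocol=TCP localport=%s\n' "$port" "$port"
  done
}
```

Remember this only covers the Windows firewall; the matching inbound rules in the Azure portal's security settings are still required, as described above.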

For those of you that are Linux and Mac snobs, getting Windows to work in Azure was a little challenging. Simple things like opening a browser became a little challenging. This is more a lack of Windows understanding. To get Internet Explorer to come up you first have to move your mouse into the far right of the screen.

At first it did not work for me because the Windows screen was a little larger than my desktop, and I had to scroll all the way to the bottom and all the way to the right before the pop-up navigation window came up. When the window does come up you see three icons. The bottom icon is the configuration icon that allows you to get to the Control Panel to configure the firewall. The icon above it is the Microsoft Windows icon, which gives you an option to launch IE. Yes, I use Windows on one of my desktops. Yes, I do have an engineering degree. No, I don't get this user interface. Hovering over an empty spot on the screen (which is behind a scroll bar) makes no sense to me.

From this point forward I was able to easily follow the Microsoft Tomcat installation instructions. If you don't select the JDK 7 Virtual Machine you can download it from java.com download. You then download the Tomcat app server. We selected Tomcat 7 for the download and followed the defaults. We do need to configure the firewall on the Windows server to enable ports 80, 8080, and 443 to see everything from our desktop browser. We can first verify that Tomcat is properly installed by going to http://localhost:8080 from Internet Explorer in the virtual image. We can then get the ip address of our virtual machine and test the network connections from our desktop by replacing localhost with the ip address. Below are the screen shots from the install. I am not going to go through the instructions on installing Tomcat because it is relatively simple with few options but included the screen shots for completeness.

In summary, we could have followed the instructions from Microsoft to configure Tomcat. We could pre-configure the ports as suggested in this blog. We could pre-load the JDK with a virtual machine rather than manually downloading it. It took about 10-15 minutes to provision the virtual machine. It then took 5-10 minutes to download the JDK and Tomcat components. It took 5-10 minutes to configure the firewall on Windows and the port access through the Azure portal. My suggestion is to use a service like Bitnami to get a preconfigured system because it takes about half the time and enables all of the ports and services automatically.

Log Buffer #469: A Carnival of the Vanities for DBAs

Pythian Group - Wed, 2016-04-13 09:40

This Log Buffer Edition digs deep into the realms of Oracle, SQL Server and MySQL and brings together a few of the top blog posts.

Oracle

We’ve all encountered a situation when you want to check a simple query or syntax for your SQL and don’t have a database around. Of course, most of us have at least a virtual machine for that, but it takes time to fire it up, and if you work from battery, it can leave you without power pretty quickly.

View Criteria is set to execute in Database mode by default. There is an option to change the execution mode to Both, which executes the query and fetches results from both the database and memory. Such query execution is useful when we want to include a newly created (but not yet committed) row in the View Criteria result set.

Upgrading database hardware in an organization is always a cumbersome process. The most time-consuming step is planning for the upgrade, which mainly includes choosing the right hardware for your Oracle databases. After deciding on the hardware type for your databases, the rest will be taken care of by the technical teams involved.

Gluent New World #02: SQL-on-Hadoop with Mark Rittman

The pre-12c implementation of DCD used TNS packages to “ping” the client and relied on the underlying TCP stack which sometimes may take longer. Now in 12c this has changed and DCD probes are implemented by TCP Stack. The DCD probes will now use the TCP KEEPALIVE socket.

 

SQL Server

Snippets will allow you to code faster by inserting chunks of code with few key strokes.

One of more common concerns among database administrators who consider migrating their estate to Azure SQL Database is their ability to efficiently manage the migrated workloads.

A SQL Server Patching Shortcut

Move an Existing Log Shipping Database to a New Monitor Server

Knee-Jerk Performance Tuning : Incorrect Use of Temporary Tables

 

MySQL

MySQL 5.7 sysbench OLTP read-only results: is MySQL 5.7 really faster?

7 Galera Cluster presentations in Percona Live Santa Clara 18-21.4. Meet us there!

Generate JSON Data with dbForge Data Generator for MySQL v1.6!

Extending the SYS schema to show metadata locks

EXPLAIN FORMAT=JSON wrap-up

Categories: DBA Blogs
