Feed aggregator

PeopleSoft PUBLIC User Security

PeopleSoft Public users are not required to authenticate (sign on). These are generic accounts created for specific purposes, for example informational pages or company directories. Public users are also not subject to timeouts (session inactivity). Because no authentication is required, no sensitive data should be accessible to these users. It also goes without saying that if you don’t need Public accounts, don’t use them.

When performing a PeopleSoft security audit, Integrigy identifies Public users and analyzes their authorization privileges. To do this yourself, use the SQL below to list your public users and then query the application or database to look at their authorization privileges.

--List the public users
SELECT O.OPRID, O.OPRDEFNDESC, O.ACCTLOCK, O.LASTPSWDCHANGE, O.FAILEDLOGINS, O.ENCRYPTED, O.EMPLID
FROM SYSADM.PSWEBPROFILE P, SYSADM.PSOPRDEFN O
WHERE P.BYPASSSIGNON = 'Y'
AND P.DEFAULTUSERID = O.OPRID;
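
To take the next step and review what those accounts are authorized to do, something along the following lines lists the roles and permission lists granted to each public user (a sketch, assuming the standard PeopleTools security tables PSROLEUSER and PSROLECLASS):

--List roles and permission lists granted to the public users
SELECT O.OPRID, RU.ROLENAME, RC.CLASSID
FROM SYSADM.PSWEBPROFILE P, SYSADM.PSOPRDEFN O, SYSADM.PSROLEUSER RU, SYSADM.PSROLECLASS RC
WHERE P.BYPASSSIGNON = 'Y'
AND P.DEFAULTUSERID = O.OPRID
AND RU.ROLEUSER = O.OPRID
AND RU.ROLENAME = RC.ROLENAME
ORDER BY O.OPRID, RU.ROLENAME, RC.CLASSID;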

If you have questions, please contact us at info@integrigy.com

Michael A. Miller, CISSP-ISSMP, CCSP

References

PeopleSoft Security Quick Reference

Auditing, Oracle PeopleSoft
Categories: APPS Blogs, Security Blogs

What's new in IaaS

Pat Shuff - Mon, 2016-08-01 02:07
After taking the month of July off (don't ask) we are going to restart our discussion in August with a focus on infrastructure. The best place to start this discussion is to look at what has changed in June and July, and the best place to do that is the documentation's What's New page. If we look at this page we see a number of new features listed there. Over the next few weeks we will dive into some of these new changes and look at how to capitalize on these features. We will compare and contrast these features with features available from Amazon and Azure, as usual. We will start with Orchestrations, since this is probably the biggest-impact item for provisioning services as well as scaling and controlling instances. The goal of August is to dive deeper into the infrastructure to look at not only how you can leverage this in the Oracle Cloud but also how to utilize it with the Oracle Cloud Machine if you get one installed in your data center. All of the features and functionality detailed in the IaaS updates are made available in the Oracle Cloud Machine a month or two after their release in the public cloud.

File Upload Improvements in Apex 5.1

Jeff Kemp - Sun, 2016-07-31 22:28

Warning: this is based on the Apex 5.1 Early Adopter and details may change.

(screenshot: File Upload item attributes in the Apex 5.1 Early Adopter)

The standard File Upload item type is getting a nice little upgrade in Apex 5.1. By simply changing attributes on the item, you can allow users to select multiple files (from a single directory) at the same time.

In addition, you can now restrict the type of file they may choose, according to the MIME type of the file, e.g. image/jpeg. This file type restriction can use a wildcard, e.g. image/*, and can have multiple patterns separated by commas, e.g. image/png,application/pdf.

(screenshot: multiple file selection in the File Upload demo)

Normally, to access the file that was uploaded you would query APEX_APPLICATION_TEMP_FILES with a predicate like name = :P1_FILE_ITEM. If multiple files are allowed, however, the item will be set to a comma-delimited list of names, so the suggested code to get the files is:

declare
  arr apex_application_global.vc_arr2;
begin
  arr := apex_util.string_to_table(:P1_MULTIPLE_FILES);
  for i in 1..arr.count loop
    select t.whatever
    into   your_variable
    from   apex_application_temp_files t
    where  t.name = arr(i);
  end loop;
end;
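
On APEX 5.0 and later you could alternatively use apex_string.split, which returns an apex_t_varchar2 collection. A minimal sketch (not from the original demo), assuming the item value is comma-delimited as described above:

declare
  l_names apex_t_varchar2;
  l_blob  blob;
  l_mime  varchar2(255);
begin
  -- split the comma-delimited item value into a collection of file names
  l_names := apex_string.split(:P1_MULTIPLE_FILES, ',');
  for i in 1..l_names.count loop
    select t.blob_content, t.mime_type
    into   l_blob, l_mime
    from   apex_application_temp_files t
    where  t.name = l_names(i);
    -- process each file here, e.g. insert it into your own table
  end loop;
end;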

You can play with a simple demo here: https://apexea.oracle.com/pls/apex/f?p=UPLOAD_DEMO&cs=JK64 (login as demo / demodemo).

If you want to support drag-and-drop, image copy&paste, load large files asynchronously, or restrict the maximum file size that may be uploaded, you will probably want to consider a plugin instead, like Daniel Hochleitner’s DropZone.


Filed under: APEX Tagged: APEX, apex-5.1

Test Driven Development and PL/SQL – The Odyssey Begins

The Anti-Kyte - Sun, 2016-07-31 14:58

In the aftermath of the Brexit vote, I’m probably not alone in being a little confused.
Political discourse in the UK has focused on exactly who it was who voted to Leave.
The Youth Spokesperson you get on a lot of political programs right now, will talk accusingly of older voters “ruining” their future by opting to Leave.
Other shocked Remainers will put it down to people without a University Degree.
I’m not sure where that leaves me as someone who is ever so slightly over the age of 30, does not have a degree…and voted to Remain. I’m pretty sure I’m not Scottish…unless there’s some dark family secret my parents haven’t let me in on.
I suppose I must be a member of the “Metropolitan Elite” the Leave side was always muttering darkly about.
After all, I do pay a great deal of money to be driven from my country residence to London to work every day…although I do have to share the train with the odd one or two fellow elitists who’ve made the same extravagant choice.
This does of course assume that Milton Keynes qualifies as being “in the country” and that my living there is a matter of choice rather than a question of being able to afford living any closer to London.

With all the excrement…er…excitement of the Referendum Campaign and its aftermath, I somehow never got around to writing my application to track the progress of the Euros (or the Copa America for that matter).
Whenever a major football tournament comes around, I always resolve to do this, if only to evoke memories of my youth when a large part of my bedroom wall was taken up with a World Cup Wallchart where you could fill in the results as they happened. That’s without mentioning the months leading up to the tournament and trying to complete the Panini collection – the only time you’d ever hear a conversation such as “OK, I’ll let you have Zico in exchange for Mick Mills”.

In order to prevent this happening again, I’ve resolved to write an application capable of holding details of any major international football tournament.
In the course of writing this application, I’d like to take the opportunity to have a look at an aspect of PL/SQL development that maybe isn’t as commonly used as it should be – Unit Testing.

Over the next few weeks, I plan to take a look at some of the Testing Frameworks available for PL/SQL and see how they compare.
The objective here is not so much to find which framework is the best/most suitable, but to perform an objective comparison between them using the same set of tests which implement fairly commonly encountered functionality.

If you’re looking for recommendations for a framework, then this article by Jacek Gebal is probably a good place to start.

In this post, I’ll be outlining the functionality that I’ll be testing in the form of User Stories, together with the application data model (or at least, the bit of it I need to execute the tests).
I’ll also have a look at the common pattern that tests written in these frameworks tend to follow.
Just to highlight why using a Test Framework might be useful, I’ll also script a couple of simple tests in SQL to see just how much code you have to write to implement tests without using a framework.

Unit Testing Approach

Taking the purist approach to Test-Driven Development, we’d need to :

  1. Write the test first and ensure that it fails
  2. Write the minimum amount of code required for the test to pass
  3. Run the test to make sure it passes

Additionally, we’d need to make sure that the tests were independent of each other – that the execution of one test is not dependent on the successful execution of a prior test.

Following this approach to the letter would cause one or two issues.
Firstly, if the procedure you’re testing does not exist, your test will not run and fail. It will error.
As the point of this step is, essentially, to ensure that your test code is sound (i.e. it won’t pass irrespective of the code it runs against), this is not what we’re after.
The second issue is specific to PL/SQL.
When defining PL/SQL procedures that interact with database tables, it’s usually a good idea to use anchored declarations where appropriate.
Even if we write a stub procedure, if the tables it will interact with do not exist, we’d have to use native types for our parameters and update the signature of the procedure once the tables had been created.
There is always the danger that this additional step would be missed.

So, in terms of PL/SQL then, I’d suggest that the pre-requisites for writing our test are :

  • The data model components (tables, views, RI constraints) that the procedure will interact with
  • A stub procedure with correctly typed parameters

Many testing frameworks seem to adopt four basic steps for each test, some of which are optional. They are :

  1. Setup – put the application into a known state from which the test can run
  2. Execute – run the code that you want to test
  3. Verify – check that what you expected to happen did happen
  4. Teardown – return the system to the state it was in before this test was run

This is the general pattern that I’ll be following for my tests.

The Stories to Test

My story finding is complete and the backlog has been populated.
The stories selected for Sprint 1 are on the board. OK, they’ve been chosen so that they cover some of the more common scenarios that we might need to test.
The stories are :

  1. Add a competition – tests insert into a table with a Primary Key
  2. Add a Tournament – insert into a table with a Foreign Key constraint
  3. Remove Tournament – delete from a table
  4. Remove Competitions – delete from a Parent table
  5. Edit a Tournament – update of a record in a table
  6. View Tournaments by Competition – select multiple records using an in/out ref cursor
  7. Bulk Upload Competitions – insert records in bulk using LOG ERRORS
  8. Add a Team to a Tournament – insert a record using a synthetic foreign key

Each of these stories has multiple Acceptance Criteria.
It’s worth noting that, as some of the functionality (i.e. the data integrity) is implemented in the data model ( Primary Keys, Foreign Keys etc), the Acceptance Criteria for these stories need to cover this as well as the functionality implemented in the PL/SQL code itself.

The Data Model

I’ve taken the approach that, for a story to be Sprint Ready, the Data Model to support it must already be in place.
Currently, the data model looks like this :

(diagram: Sprint 1 data model showing COMPETITIONS, TOURNAMENTS and TOURNAMENT_TEAMS)

The DDL to create the application owner is :

create user footie identified by password
/

alter user footie default tablespace USERS
/

grant create session, create table, create procedure, create sequence to footie
/

alter user footie quota unlimited on users
/

grant read, write on directory my_files to footie
/

…where password is the password you want to give this user.
Note that a directory called MY_FILES already exists in my database.
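
If you don’t already have such a directory, it can be created by a suitably privileged user along these lines (the path shown is just an example for illustration):

create directory my_files as '/u01/app/oracle/my_files'
/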

The DDL to create the Data Model includes the COMPETITIONS table…

create table competitions
(
    comp_code varchar2(5) constraint comp_pk primary key,
    comp_name varchar2(50) not null,
    description varchar2(4000)
)
/

comment on table competitions is
    'International Football competitions for which tournament data can be added'
/

comment on column competitions.comp_code is
    'Internal code to uniquely identify this competition'
/

comment on column competitions.comp_name is
    'The name of the competition'
/

comment on column competitions.description is
    'A description of the competition'
/

…an external table to facilitate the bulk upload of COMPETITIONS…

create table competitions_xt
(
    comp_code varchar2(5),
    comp_name varchar2(50),
    description varchar2(4000)
)
    organization external
    (
        type oracle_loader
        default directory my_files
        access parameters
        (
            records delimited by newline
            skip 1
            fields terminated by ','
            badfile 'competitions.bad'
            logfile 'competitions.log'
            (
                comp_code char(5),
                comp_name char(50),
                description char(4000)
            )
        )
            location('competitions.csv')
    )
reject limit unlimited
/

….the TOURNAMENTS table…

create table tournaments
(
    id number constraint tourn_pk primary key,
    comp_code varchar2(5),
    year_completed number(4) not null,
    host_nation varchar2(100),
    year_started number(4),
    number_of_teams number(3) not null,

    constraint tourn_uk unique( comp_code, year_completed, host_nation)
)
/

comment on table tournaments is
    'A (finals) tournament of an International Football competition. Table alias is tourn'
/

comment on column tournaments.id is
    'Synthetic PK for the table as the Natural Key includes host_nation, which may be null. Values taken from sequence tourn_id_seq'
/
comment on column tournaments.comp_code is
    'The Competition that this tournament was part of (e.g. World Cup). Part of the Unique Key. FK to COMPETITIONS(comp_code)'
/

comment on column tournaments.year_completed is
    'The year in which the last match of this tournament took place. Mandatory. Part of Unique Key'
/

comment on column tournaments.host_nation is
    'The nation where the tournament was played (if a finals tournament). Part of Unique Key but is optional'
/
comment on column tournaments.year_started is
    'The year in which the first match was played ( if applicable). Cannot be later than the value in YEAR_COMPLETED'
/

comment on column tournaments.number_of_teams is
    'The number of teams taking part in the tournament'
/

…the TOURNAMENT_TEAMS table…

create table tournament_teams
(
    tourn_id number,
    team_name varchar2(100),
    group_stage varchar2(2),

    constraint team_pk primary key (tourn_id, team_name)
)
/

comment on table tournament_teams is
    'Teams participating in the Tournament. Alias is TEAM'
/

comment on column tournament_teams.tourn_id is
    'The ID of the tournament the team is participating in. Foreign Key to TOURNAMENTS(id).'
/

comment on column tournament_teams.team_name is
    'The name of the participating team'
/

comment on column tournament_teams.group_stage is
    'If the tournament has an initial group stage, the group identifier that the team is drawn in'
/

…this being 11g, a sequence to provide the values for the TOURNAMENTS.ID synthetic key (in 12c you can define this as part of the table DDL)…

create sequence tourn_id_seq
/
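
For reference, in 12c the column could be declared as an identity in the CREATE TABLE statement itself and the separate sequence would not be needed. A sketch of what that would look like (not part of this 11g build):

-- 12c only : id generated as an identity, so tourn_id_seq is not required
create table tournaments
(
    id number generated always as identity constraint tourn_pk primary key,
    comp_code varchar2(5),
    year_completed number(4) not null,
    host_nation varchar2(100),
    year_started number(4),
    number_of_teams number(3) not null,

    constraint tourn_uk unique( comp_code, year_completed, host_nation)
)
/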

… a Foreign Key constraint from TOURNAMENTS to COMPETITIONS….

alter table tournaments
    add constraint tourn_comp_fk foreign key
        (comp_code) references competitions(comp_code)
/

…and a Foreign Key from TOURNAMENT_TEAMS to TOURNAMENTS…

alter table tournament_teams
    add constraint team_tourn_fk foreign key
        (tourn_id) references tournaments(id)
/

I’ve kept the Foreign Keys in separate files to make the initial deployment of the application simpler. By doing this, I can create the tables in any order without worrying about RI constraints. I can then add these as a separate step after all of the tables have been created.
The tables’ non-RI constraints (Primary, Unique Keys, Not Null constraints etc.) are created along with the table.

One other point to note is that I know there are one or two issues with the first-cut of the DDL above. This is so that I can see how well the tests I write highlight these issues.
As we know, before we begin writing a test, we’ll need to have a stub procedure for it to compile against.

The first of these is :

create or replace package footie.manage_competitions
as
    procedure add_competition
    (
        i_code footie.competitions.comp_code%type,
        i_name footie.competitions.comp_name%type,
        i_desc footie.competitions.description%type
    );
end manage_competitions;
/

create or replace package body footie.manage_competitions
as
    procedure add_competition
    (
        i_code footie.competitions.comp_code%type,
        i_name footie.competitions.comp_name%type,
        i_desc footie.competitions.description%type
    )
    is
    begin
        null;
    end add_competition;
end manage_competitions;
/
Scripting the Tests

I want to write a test script for my first story – Add a Competition.
There are two Acceptance Criteria that I need to test :

  • A new competition is added
  • A competition cannot be added more than once

That’s pretty simple, so the test should be fairly straight-forward. Using a SQL script, the first test would probably look something like this :

set serveroutput on size unlimited
declare
    l_result varchar2(4);
    l_err_msg varchar2(4000) := null;

    l_rec_count pls_integer;
    l_code footie.competitions.comp_code%type;
    l_name footie.competitions.comp_name%type := 'UT1 Comp';
    l_desc footie.competitions.description%type := 'Test';

    l_counter pls_integer := 1;

begin
    -- Setup - make sure that the competition we're using for the test does not already exist
    select 'UT'||numeral
    into l_code
    from
    (
        select max( to_number( substr( comp_code, regexp_instr( comp_code, '[[:digit:]]')))) + 1 as numeral
        from footie.competitions
        where comp_code like 'UT%'
        and regexp_instr( substr( comp_code, -1, 1), '[[:digit:]]') = 1
        union
        select 1 from dual
    )
    where numeral is not null;
    -- execute - nested block so that we can handle/report any exceptions
    begin
        footie.manage_competitions.add_competition(l_code, l_name, l_desc);
        l_result := 'PASS';
    exception
        when others then
            l_err_msg := dbms_utility.format_error_backtrace;
            l_result := 'FAIL';
    end; -- execute block
    -- validate
    if l_result = 'PASS' then
        select count(*) into l_rec_count
        from footie.competitions
        where comp_code = l_code
        and comp_name = l_name
        and description = l_desc;

        if l_rec_count != 1 then
            l_result := 'FAIL';
            l_err_msg := 'Record not added';
        end if;
    end if;
    -- teardown
    rollback;
    -- Display Result
    dbms_output.put_line('Add Competition : '||l_result);
    if l_result = 'FAIL' then
        dbms_output.put_line(l_err_msg);
    end if;
end;
/

If we run this, we’d expect it to fail, as things stand :

PL/SQL procedure successfully completed.

Add Competition : FAIL
Record not added

Our second test will probably look like this :

set serveroutput on size unlimited
declare
    l_result varchar2(4);
    l_err_msg varchar2(4000) := null;
    l_err_code number;

    l_rec_count pls_integer;
    l_code footie.competitions.comp_code%type := 'UT2';
    l_name footie.competitions.comp_name%type := 'UT2 Comp';
    l_desc footie.competitions.description%type := null;

    l_counter pls_integer := 1;

begin
    -- Setup - make sure that the competition we're using for the test exists
    merge into footie.competitions
    using dual
    on (comp_code = l_code)
    when not matched then
    insert( comp_code, comp_name, description)
    values(l_code, l_name, l_desc);

    -- execute - nested block so that we can handle/report any exceptions
    begin
        footie.manage_competitions.add_competition(l_code, l_name, l_desc);
        l_result := 'FAIL';
    exception
        when others then
            l_err_msg := dbms_utility.format_error_backtrace;
            l_result := 'PASS';
    end; -- execute block
    -- validate
    if l_result = 'PASS' then
        select count(*) into l_rec_count
        from footie.competitions
        where comp_code = l_code
        and comp_name = l_name
        and nvl(description, 'X') = nvl(l_desc, 'X');

        if l_rec_count > 1 then
            l_result := 'FAIL';
            l_err_msg := 'Duplicate record has been added';
        end if;
    end if;
    -- teardown
    rollback;
    -- Display Result
    dbms_output.put_line('Add Competition : '||l_result);
    if l_result = 'FAIL' then
        dbms_output.put_line(l_err_msg);
    end if;
end;
/

That looks rather similar to our first test. Furthermore, the two test scripts combined add up to quite a lot of code.
At least when we run it, it fails as expected…

PL/SQL procedure successfully completed.

Add Competition : FAIL

The next step is to write the code to pass the tests…

create or replace package body manage_competitions
as
	procedure add_competition
	(
		i_code competitions.comp_code%type,
		i_name competitions.comp_name%type,
		i_desc competitions.description%type default null
	)
	is
	begin
        insert into competitions( comp_code, comp_name, description)
        values( i_code, i_name, i_desc);
	end add_competition;
end manage_competitions;
/

Now when we run our tests we get …

PL/SQL procedure successfully completed.

Add Competition : PASS

PL/SQL procedure successfully completed.

Add Competition : PASS

Now, I could carry on writing SQL scripts for Acceptance Criteria for all of the other stories in the sprint, but I think you get the idea.
Writing tests this way requires a large amount of code, much of which is being replicated in separate tests.

Of course, you could go down the route of moving some of your repeating test routines into PL/SQL packages and deploying your test code along with your application code to non-production environments.
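For example, a shared assertion helper along these lines (just a sketch of the idea, not something I’m using here) would remove some of the repetition :

create or replace package test_utils as
    procedure assert_equals
    (
        i_test_name in varchar2,
        i_expected in varchar2,
        i_actual in varchar2
    );
end test_utils;
/

create or replace package body test_utils as
    procedure assert_equals
    (
        i_test_name in varchar2,
        i_expected in varchar2,
        i_actual in varchar2
    )
    is
    begin
        -- report PASS/FAIL so that each test does not have to hand-roll its own output
        if i_expected = i_actual
            or (i_expected is null and i_actual is null)
        then
            dbms_output.put_line(i_test_name||' : PASS');
        else
            dbms_output.put_line(i_test_name||' : FAIL - expected ['||i_expected||'] got ['||i_actual||']');
        end if;
    end assert_equals;
end test_utils;
/
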
Before going to all of that extra effort though, it’s probably worth checking to see if there’s a framework out there that can help reduce the testing burden.
Fortunately, I’ve got a couple of years until the next major international tournament so, I’ll be taking some time to do just that.
So, tune in next week (or sometime soon after) when I’ll be taking a look at the first of these frameworks… SQLDeveloper Unit Testing.


Filed under: Oracle, PL/SQL, SQL Tagged: external table, foreign key, Test Driven Development, unique key

Notes on Spark and Databricks — technology

DBMS2 - Sun, 2016-07-31 09:30

During my recent visit to Databricks, I of course talked a lot about technology — largely with Reynold Xin, but a bit with Ion Stoica as well. Spark 2.0 is just coming out now, and of course has a lot of enhancements. At a high level:

  • Using the new terminology, Spark originally assumed users had data engineering skills, but Spark 2.0 is designed to be friendly to data scientists.
  • A lot of this is via a focus on simplified APIs, based on Spark DataFrames in particular:
    • Unlike similarly named APIs in R and Python, Spark DataFrames work with nested data.
    • Machine learning and Spark Streaming both work with Spark DataFrames.
  • There are lots of performance improvements as well, some substantial. Spark is still young enough that Bottleneck Whack-A-Mole yields huge benefits, especially in the SparkSQL area.
  • SQL coverage is of course improved. For example, SparkSQL can now perform all TPC-DS queries.

The majority of Databricks’ development efforts, however, are specific to its cloud service, rather than being donated to Apache for the Spark project. Some of the details are NDA, but it seems fair to mention at least:

  • Databricks’ notebooks feature for organizing and launching machine learning processes and so on is a biggie. Jupyter is an open source analog.
  • Databricks has been working on security, and even on the associated certifications.

Two of the technical initiatives Reynold told me about seemed particularly cool. One, on the machine learning side, was a focus on training models online as new data streams in. In most cases this seems to require new algorithms for old model types, with a core idea being that the algorithm does a mini gradient descent for each new data point.
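
To make the “mini gradient descent per new data point” idea concrete, the generic form of such an update (an illustration of the idea, not Databricks’ specific algorithm) is:

w_{t+1} = w_t - \eta_t \nabla_w \ell(w_t; x_t, y_t)

where (x_t, y_t) is the newly arrived data point, \ell the model’s loss function and \eta_t the step size.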

The other cool idea fits the trend of alternatives to the “lambda architecture”. Under the name “structured streaming”, which seems to be a replacement for “DStreams”, the idea is to do set-based SQL processing even though membership of the set changes over time. Result sets are extracted on a snapshot basis; you can keep either all the results from each snapshot query or just the deltas.

Despite all this, there’s some non-trivial dissatisfaction with Spark, fair or otherwise.

  • Some of the reason is that SparkSQL is too immature to be great.
  • Some is annoyance that Databricks isn’t putting everything it has into open source.
  • Some is that everything has its architectural trade-offs.

To the last point, I raised one of the biggest specifics with Reynold, namely Spark’s lack of a strong built-in data persistence capability. Reynold’s answer was that they’re always working to speed up reading and writing from other forms of persistent storage. E.g., he cited a figure of ~100 million rows/core/second decoded from Parquet.

Categories: Other

Notes on Spark and Databricks — generalities

DBMS2 - Sun, 2016-07-31 09:29

I visited Databricks in early July to chat with Ion Stoica and Reynold Xin. Spark also comes up in a large fraction of the conversations I have. So let’s do some catch-up on Databricks and Spark. In a nutshell:

  • Spark is indeed the replacement for Hadoop MapReduce.
  • Spark is becoming the default platform for machine learning.
  • SparkSQL (née Shark) is puttering along predictably.
  • Databricks reports good success in its core business of cloud-based machine learning support.
  • Spark Streaming has strong adoption, but its position is at risk.
  • Databricks, the original authority on Spark, is not keeping a tight grip on that role.

I shall explain below. I also am posting separately about Spark evolution, especially Spark 2.0. I’ll also talk a bit in that post about Databricks’ proprietary/closed-source technology.

Spark is the replacement for Hadoop MapReduce.

This point is so obvious that I don’t know what to say in its support. The trend is happening, as originally decreed by Cloudera (and me), among others. People are rightly fed up with the limitations of MapReduce, and — niches perhaps aside — there are no serious alternatives other than Spark.

The greatest use for Spark seems to be the same as the canonical first use for MapReduce: data transformation. Also in line with the Spark/MapReduce analogy: 

  • Data-transformation-only use cases are important, but they don’t dominate.
  • Most other use cases typically have a data transformation element as well …
  • … which has to be started before any other work can be done.

And so it seems likely that, at least for as long as Spark is growing rapidly, data transformation will appear to be the biggest Spark use case.

Spark is becoming the default platform for machine learning.

Largely, this is a corollary of:

  • The previous point.
  • The fact that Spark was originally designed with machine learning as its principal use case.

To do machine learning you need two things in your software:

  • A collection of algorithms. Spark, I gather, is one of numerous good alternatives there.
  • Support for machine learning workflows. That’s where Spark evidently stands alone.

And thus I have conversations like:

  • “Are you doing anything with Spark?”
  • “We’ve gotten more serious about machine learning, so yes.”

SparkSQL (née Shark) is puttering along.

SparkSQL is pretty much following the Hive trajectory.

  • Useful from Day One as an adjunct to other kinds of processing.
  • A tease and occasionally useful as a SQL engine for its own sake, but really not very good, pending years to mature.

Databricks reports good success in its core business of cloud-based machine learning support.

Databricks, to an even greater extent than I previously realized, is focused on its cloud business, for which there are well over 200 paying customers. Notes on that include:

  • As you might expect based on my comments above, the majority of usage is for data transformation, but a lot of that is in anticipation of doing machine learning/predictive modeling in the near future.
  • Databricks customers typically already have their data in the Amazon cloud.
  • Naturally, a lot of Databricks customers are internet companies — ad tech startups and the like. Databricks also reports “strong” traction in the segments:
    • Media
    • Financial services (especially but not only insurance)
    • Health care/pharma
  • The main languages Databricks customers use are R and Python. Ion said that Python was used more on the West Coast, while R was used more in the East.

Databricks’ core marketing concept seems to be “just-in-time data platform”. I don’t know why they picked that, as opposed to something that emphasizes Spark’s flexibility and functionality.

Spark Streaming’s long-term success is not assured.

To a first approximation, things look good for Spark Streaming.

  • Spark Streaming is definitely the leading companion to Kafka, and perhaps also to cloud equivalents (e.g. Amazon Kinesis).
  • The “traditional” alternatives of Storm and Samza are pretty much done.
  • Newer alternatives from Twitter, Confluent and Flink aren’t yet established.
  • Cloudera is a big fan of Spark Streaming.
  • Even if Spark Streaming were to generally decline, it might keep substantial “good enough” usage, analogously to Hive and SparkSQL.
  • Cool new Spark Streaming technology is coming out.

But I’m also hearing rumbles and grumbles about Spark Streaming. What’s more, we know that Spark Streaming wasn’t a core part of Spark’s design; the use case just happened to emerge. Demanding streaming use cases typically involve a lot of short-request inserts (or updates/upserts/whatever). And if you were designing a system to handle those … would it really be based on Spark?

Databricks is not keeping a tight grip on Spark leadership.

For starters:

  • Databricks’ main business, as noted above, is its cloud service. That seems to be going well.
  • Databricks’ secondary business is licensing stuff to Spark distributors. That doesn’t seem to amount to much; it’s too easy to go straight to the Apache distribution and bypass Databricks. No worries; this never seemed it would be a big revenue opportunity for Databricks.

At the moment, Databricks is pretty clearly the general leader of Spark. Indeed:

  • If you want the story on where Spark is going, you do what I did — you ask Databricks.
  • Similarly, if you’re thinking of pushing the boundaries on Spark use, and you have access to the Databricks folks, that’s who you’ll probably talk to.
  • Databricks employs ~1/3 of Spark committers.
  • Databricks organizes the Spark Summit.

But overall, Databricks doesn’t seem to care much about keeping Spark leadership. Its marketing efforts in that respect are minimal. Word-of-mouth buzz paints a similar picture. My direct relationship with the company gives the same impression. Oh, I’m sure Databricks would like to remain the Spark leader. But it doesn’t seem to devote much energy toward keeping the role.

Related links

Starting with my introduction to Spark, previous overview posts include those in:

Categories: Other

Terminology: Data scientists vs. data engineers

DBMS2 - Sun, 2016-07-31 09:10

I learned some newish terms on my recent trip. They’re meant to solve the problem that “data scientists” used to be folks with remarkably broad skill sets, few of whom actually existed in ideal form. So instead now it is increasingly said that:

  • “Data engineers” can code, run clusters, and so on, in support of what’s always been called “data science”. Their knowledge of the math of machine learning/predictive modeling and so on may, however, be limited.
  • “Data scientists” can write and run scripts on single nodes; anything more on the engineering side might strain them. But they have no-apologies skills in the areas of modeling/machine learning.

Related link

Categories: Other

Partner Webcast – Enterprise Database Backup on Oracle Cloud

With today’s 24x7 always-on business pace, failure is simply not an option. SLAs are getting more demanding and application owners are struggling, realizing that data protection is arguably one of the...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Exadata X-5 Bare Metal vs. OVM load testing

Yann Neuhaus - Sun, 2016-07-31 02:49

In a previous post I tried to compare a single-thread workload between Exadata X5 Bare Metal and Virtualized. The conclusions were that there are no huge differences, and that this kind of comparison is not easy.

As to why the comparison is not easy, some reasons have been nicely detailed by Lonny Niederstadt in this twitter thread.

Besides the single thread tests, I did a test with 50 sessions doing updates on a small data set. It’s a 50 session SLOB with SCALE=100M WORK_UNIT=64 UPDATE_PCT=100 and a small buffer cache.

Here are the load profiles, side by side:

Bare Metal Virtualized
 
Load Profile Per Second Per Transaction Per Second Per Transaction
~~~~~~~~~~~~~~~ --------------- --------------- --------------- --------------- -
DB Time(s): 42.9 0.0 48.6 0.1
DB CPU(s): 5.6 0.0 6.8 0.0
Background CPU(s): 3.4 0.0 3.8 0.0
Redo size (bytes): 58,364,739.1 52,693.1 52,350,760.2 52,555.7
Logical read (blocks): 83,306.4 75.2 73,826.5 74.1
Block changes: 145,360.7 131.2 130,416.6 130.9
Physical read (blocks): 66,038.6 59.6 60,512.7 60.8
Physical write (blocks): 69,121.8 62.4 62,962.0 63.2
Read IO requests: 65,944.8 59.5 60,448.0 60.7
Write IO requests: 59,618.4 53.8 55,883.5 56.1
Read IO (MB): 515.9 0.5 472.8 0.5
Write IO (MB): 540.0 0.5 491.9 0.5
Executes (SQL): 1,170.7 1.1 1,045.6 1.1
Rollbacks: 0.0 0.0 0.0 0.0
Transactions: 1,107.6 996.1

and I/O profile:
Bare Metal Virtualized
 
IO Profile Read+Write/Second Read/Second Write/Second Read+Write/Second Read/Second Write/Second
~~~~~~~~~~ ----------------- --------------- --------------- ----------------- --------------- ---------------
Total Requests: 126,471.0 65,950.0 60,521.0 117,014.4 60,452.8 56,561.6
Database Requests: 125,563.2 65,944.8 59,618.4 116,331.5 60,448.0 55,883.5
Optimized Requests: 125,543.0 65,941.1 59,601.9 116,130.7 60,439.9 55,690.7
Redo Requests: 902.2 0.1 902.1 677.1 0.1 677.0
Total (MB): 1,114.0 516.0 598.0 1,016.5 472.8 543.6
Database (MB): 1,055.9 515.9 540.0 964.7 472.8 491.9
Optimized Total (MB): 1,043.1 515.9 527.2 942.6 472.7 469.8
Redo (MB): 57.7 0.0 57.7 51.7 0.0 51.7
Database (blocks): 135,160.4 66,038.6 69,121.8 123,474.7 60,512.7 62,962.0
Via Buffer Cache (blocks): 135,159.8 66,038.5 69,121.3 123,474.0 60,512.6 62,961.4
Direct (blocks): 0.6 0.2 0.5 0.7 0.1 0.5

This is roughly what you can expect from OLTP workload: small data set that fits in flash cache, high redo rate. Of course in OLTP you will have a higher buffer cache, but this is not what I wanted to measure here. It seems that the I/O performance is slightly better in bare metal. This is what we also see on averages:

Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Total Wait Wait % DB Wait
Event Waits Time (sec) Avg(ms) time Class
------------------------------ ----------- ---------- ---------- ------ --------
cell single block physical rea 9,272,548 4182.4 0.45 69.3 User I/O
free buffer waits 152,043 1511.5 9.94 25.1 Configur
DB CPU 791.4 13.1
 
Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Total Wait Wait % DB Wait
Event Waits Time (sec) Avg(ms) time Class
------------------------------ ----------- ---------- ---------- ------ --------
cell single block physical rea 7,486,727 3836.2 0.51 63.7 User I/O
free buffer waits 208,658 1840.9 8.82 30.6 Configur
DB CPU 845.9 14.1

It’s interesting to see that even on an I/O bound system there are no significant waits on log file sync.

I’ll focus on ‘log file parallel write':

Bare Metal
EVENT WAIT_TIME_MICRO WAIT_COUNT WAIT_TIME_FORMAT
---------------------------------------- --------------- ---------- ------------------------------
log file parallel write 1 0 1 microsecond
log file parallel write 2 0 2 microseconds
log file parallel write 4 0 4 microseconds
log file parallel write 8 0 8 microseconds
log file parallel write 16 0 16 microseconds
log file parallel write 32 0 32 microseconds
log file parallel write 64 0 64 microseconds
log file parallel write 128 0 128 microseconds
log file parallel write 256 8244 256 microseconds
log file parallel write 512 102771 512 microseconds
log file parallel write 1024 14812 1 millisecond
log file parallel write 2048 444 2 milliseconds
log file parallel write 4096 42 4 milliseconds
log file parallel write 8192 11 8 milliseconds
log file parallel write 16384 3 16 milliseconds
log file parallel write 32768 1 32 milliseconds

Virtualized
EVENT WAIT_TIME_MICRO WAIT_COUNT WAIT_TIME_FORMAT
---------------------------------------- --------------- ---------- ------------------------------
log file parallel write 1 0 1 microsecond
log file parallel write 2 0 2 microseconds
log file parallel write 4 0 4 microseconds
log file parallel write 8 0 8 microseconds
log file parallel write 16 0 16 microseconds
log file parallel write 32 0 32 microseconds
log file parallel write 64 0 64 microseconds
log file parallel write 128 0 128 microseconds
log file parallel write 256 723 256 microseconds
log file parallel write 512 33847 512 microseconds
log file parallel write 1024 41262 1 millisecond
log file parallel write 2048 6483 2 milliseconds
log file parallel write 4096 805 4 milliseconds
log file parallel write 8192 341 8 milliseconds
log file parallel write 16384 70 16 milliseconds
log file parallel write 32768 10 32 milliseconds
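
For reference, microsecond-granularity wait event histograms like these can be pulled from V$EVENT_HISTOGRAM_MICRO (available in 12c) with something along these lines - a sketch, not necessarily the exact query used here:

select event, wait_time_micro, wait_count, wait_time_format
from v$event_histogram_micro
where event = 'log file parallel write'
order by wait_time_micro;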

As I’ve seen in previous tests, most of the writes were below 512 microseconds in bare metal, and above in virtualized.

And here are the histograms for the single block reads:

Bare Metal
EVENT WAIT_TIME_MICRO WAIT_COUNT WAIT_TIME_FORMAT
---------------------------------------- --------------- ---------- ------------------------------
cell single block physical read 1 0 1 microsecond
cell single block physical read 2 0 2 microseconds
cell single block physical read 4 0 4 microseconds
cell single block physical read 8 0 8 microseconds
cell single block physical read 16 0 16 microseconds
cell single block physical read 32 0 32 microseconds
cell single block physical read 64 0 64 microseconds
cell single block physical read 128 432 128 microseconds
cell single block physical read 256 2569835 256 microseconds
cell single block physical read 512 5275814 512 microseconds
cell single block physical read 1024 837402 1 millisecond
cell single block physical read 2048 275112 2 milliseconds
cell single block physical read 4096 297320 4 milliseconds
cell single block physical read 8192 4550 8 milliseconds
cell single block physical read 16384 1485 16 milliseconds
cell single block physical read 32768 99 32 milliseconds
cell single block physical read 65536 24 65 milliseconds
cell single block physical read 131072 11 131 milliseconds
cell single block physical read 262144 14 262 milliseconds
cell single block physical read 524288 7 524 milliseconds
cell single block physical read 1048576 4 1 second
cell single block physical read 2097152 1 2 seconds

Virtualized

EVENT WAIT_TIME_MICRO WAIT_COUNT WAIT_TIME_FORMAT
---------------------------------------- --------------- ---------- ------------------------------
cell single block physical read 1 0 1 microsecond
cell single block physical read 2 0 2 microseconds
cell single block physical read 4 0 4 microseconds
cell single block physical read 8 0 8 microseconds
cell single block physical read 16 0 16 microseconds
cell single block physical read 32 0 32 microseconds
cell single block physical read 64 0 64 microseconds
cell single block physical read 128 0 128 microseconds
cell single block physical read 256 518447 256 microseconds
cell single block physical read 512 5371496 512 microseconds
cell single block physical read 1024 1063689 1 millisecond
cell single block physical read 2048 284640 2 milliseconds
cell single block physical read 4096 226581 4 milliseconds
cell single block physical read 8192 16292 8 milliseconds
cell single block physical read 16384 3191 16 milliseconds
cell single block physical read 32768 474 32 milliseconds
cell single block physical read 65536 62 65 milliseconds
cell single block physical read 131072 2 131 milliseconds

Same conclusions here: the ‘less than 256 microseconds’ bucket is hit more frequently in bare metal than in virtualized.

For reference, those tests were done on a similar configuration (except for virtualization): X5-2L High Capacity with 3 storage cells, version cell-12.1.2.3.1_LINUX.X64_160411-1.x86_64, flashcache in writeback. The whole system was started before the test. This test was the only thing running on the whole database machine.

 

The post Exadata X-5 Bare Metal vs. OVM load testing appeared first on Blog dbi services.

Virtual Private Database…

Bar Solutions - Sun, 2016-07-31 01:39

Some time ago I was asked to assist in fixing, or at least finding the cause of, a performance problem. The application ran fine until the Virtual Private Database (VPD) policy was applied. Oracle claims there should be near zero impact on your application when you implement VPD, so how is this possible?

First of all, the policy applied was a rather complex one. A complex query had to be executed to determine whether the current user has access to the record. Let’s say this query takes about a second; then I would expect my query to run about a second slower, maybe two. But the query took several minutes to complete when the VPD policy was applied. This didn’t make sense to me, so I decided to find out what was really happening.
To do this, I opened up my sandbox database to try and recreate this situation.
First I need to create two new users

create user vpd1 identified by vpd1
/
grant connect, resource to vpd1
/
create user vpd2 identified by vpd2
/
grant connect, resource to vpd2
/

Then I created a simple table to hold the data that should be protected by the VPD policy:

drop table emp purge
/
create table emp
       (empno number(4) not null, 
        ename varchar2(10), 
        job varchar2(9), 
        mgr number(4), 
        hiredate date, 
        sal number(7, 2), 
        comm number(7, 2), 
        deptno number(2))
;
insert into emp values (7369, 'SMITH',  'CLERK',     7902, to_date('17-12-1980', 'DD-MM-YYYY'),  800, null, 20);
insert into emp values (7499, 'ALLEN',  'SALESMAN',  7698, to_date('20-02-1981', 'DD-MM-YYYY'), 1600,  300, 30);
insert into emp values (7521, 'WARD',   'SALESMAN',  7698, to_date('22-02-1981', 'DD-MM-YYYY'), 1250,  500, 30);
insert into emp values (7566, 'JONES',  'MANAGER',   7839, to_date('02-04-1981', 'DD-MM-YYYY'),  2975, null, 20);
insert into emp values (7654, 'MARTIN', 'SALESMAN',  7698, to_date('28-09-1981', 'DD-MM-YYYY'), 1250, 1400, 30);
insert into emp values (7698, 'BLAKE',  'MANAGER',   7839, to_date('01-05-1981', 'DD-MM-YYYY'),  2850, null, 30);
insert into emp values (7782, 'CLARK',  'MANAGER',   7839, to_date('09-06-1981', 'DD-MM-YYYY'),  2450, null, 10);
insert into emp values (7788, 'SCOTT',  'ANALYST',   7566, to_date('09-12-1982', 'DD-MM-YYYY'), 3000, null, 20);
insert into emp values (7839, 'KING',   'PRESIDENT', null, to_date('17-11-1981', 'DD-MM-YYYY'), 5000, null, 10);
insert into emp values (7844, 'TURNER', 'SALESMAN',  7698, to_date('08-09-1981', 'DD-MM-YYYY'),  1500,    0, 30);
insert into emp values (7876, 'ADAMS',  'CLERK',     7788, to_date('12-01-1983', 'DD-MM-YYYY'), 1100, null, 20);
insert into emp values (7900, 'JAMES',  'CLERK',     7698, to_date('03-12-1981', 'DD-MM-YYYY'),   950, null, 30);
insert into emp values (7902, 'FORD',   'ANALYST',   7566, to_date('03-12-1981', 'DD-MM-YYYY'),  3000, null, 20);
insert into emp values (7934, 'MILLER', 'CLERK',     7782, to_date('23-01-1982', 'DD-MM-YYYY'), 1300, null, 10);
commit
/
drop table emp_vpd purge
/
create table emp_vpd as select * from emp
/
commit
/

And of course I need to grant access to this table to the newly created users:

grant all on emp_vpd to vpd1
/
grant all on emp_vpd to vpd2
/

On the table I need to create a policy function so I create a package (which mimics the customers package, just simpler) to do this:

create or replace package emp_vpd_policy as
  function first_policy(owner_in   in varchar2
                       ,objname_in in varchar2) return varchar2;
  function allowed(empno_in  in number
                  ,deptno_in in number) return number;
end emp_vpd_policy;
/
sho err
create or replace package body emp_vpd_policy as
  function first_policy(owner_in   in varchar2
                       ,objname_in in varchar2) return varchar2 is
  begin
    dbms_output.put_line('first policy');
    if (user = 'VPD1') then
      return 'emp_vpd_policy.allowed(emp_vpd.empno, emp_vpd.deptno)=10';
    elsif user = 'VPD2' then
      return 'emp_vpd_policy.allowed(emp_vpd.empno, emp_vpd.deptno)=20';
    else
      return '1=1';
    end if;
  end first_policy;
  function allowed(empno_in  in number
                  ,deptno_in in number) return number is
  begin
    dbms_output.put_line('emp_vpd_policy.allowed(' || empno_in || ',' || deptno_in || ')');
    return deptno_in;
  end allowed;
end emp_vpd_policy;
/
sho err

and then protect the EMP_VPD table using a policy:

begin
  sys.dbms_rls.add_policy(object_schema => 'DEMO'
                         ,object_name => 'EMP_VPD'
                         ,policy_name => 'EMP_VPD_SEL'
                         ,function_schema => '&myuser'
                         ,policy_function => 'EMP_VPD_POLICY.FIRST_POLICY'
                         ,statement_types => 'SELECT');
end;
/

The package will show what will happen when I perform a select on the table:

conn vpd1/vpd1
set serveroutput on size unlimited
select * from demo.emp_vpd
/
EMPNO ENAME      JOB         MGR HIREDATE          SAL      COMM DEPTNO
----- ---------- --------- ----- ----------- --------- --------- ------
 7782 CLARK      MANAGER    7839 6/9/1981      2450.00               10
 7839 KING       PRESIDENT       11/17/1981    5000.00               10
 7934 MILLER     CLERK      7782 1/23/1982     1300.00               10
first policy
first policy
emp_vpd_policy.allowed(7369,20)
emp_vpd_policy.allowed(7499,30)
emp_vpd_policy.allowed(7521,30)
emp_vpd_policy.allowed(7566,20)
emp_vpd_policy.allowed(7654,30)
emp_vpd_policy.allowed(7698,30)
emp_vpd_policy.allowed(7782,10)
emp_vpd_policy.allowed(7788,20)
emp_vpd_policy.allowed(7839,10)
emp_vpd_policy.allowed(7844,30)
emp_vpd_policy.allowed(7876,20)
emp_vpd_policy.allowed(7900,30)
emp_vpd_policy.allowed(7902,20)
emp_vpd_policy.allowed(7934,10)

In my case this is done rather quickly; there’s almost no difference in timing for the query with or without the policy applied. But as you can see, the policy is executed for each and every record that is being checked. Well, not really the policy itself, but the function that is referenced in the predicate the policy returns. So if this function takes a lot of time and your table has a lot of records, then the query will run for a very long time. There has got to be a better way to do this.
Let’s analyze what happens: the actual policy function is executed twice. What if we use this architecture to our benefit? In the first pass we can set up some in-memory data structure to hold whatever we need, which might take some time, and then in the second pass we can use this data in the actual check.
First we drop the policy so we can create a new one:

begin
  sys.dbms_rls.drop_policy(object_schema => '&myuser'
                          ,object_name => 'EMP_VPD'
                          ,policy_name => 'EMP_VPD_SEL');
end;
/

For our implementation we need a Nested Table type to be created in the database:

create or replace type empnos_tt is table of number(4) 
/

Then we create a new package to hold the policy function.

create or replace package emp_vpd_pp as
  function sel( owner_in   in varchar2
              , objname_in in varchar2
              ) return varchar2;
  function read_g_empnos return empnos_tt;
end emp_vpd_pp;
/ 
sho err

The function SEL will be used in the policy. The function READ_G_EMPNOS is needed to retrieve the data in the package variable. Then the actual implementation of the package:

create or replace package body emp_vpd_pp as
  g_empnos empnos_tt;
  beenhere boolean := false;
  function sel( owner_in   in varchar2
              , objname_in in varchar2
              ) return varchar2 is
  begin
    if not(beenhere) then
      if user = 'VPD1' then
        begin
          select emp.empno
            bulk collect into g_empnos
            from emp
           where emp.deptno = 10;
         exception
           when others then
           dbms_output.put_line(sqlerrm);
         end;
      elsif user = 'VPD2' then
        begin
          select emp.empno
            bulk collect into g_empnos
            from emp
           where emp.deptno = 20;
         exception
           when others then
           dbms_output.put_line(sqlerrm);
         end;
      end if;
    end if;
    beenhere := not(beenhere);
    if ((user = 'VPD1') or (user = 'VPD2')) then
      return 'emp_vpd.empno in (select column_value
                                  from table(emp_vpd_pp.read_g_empnos))';
    else
      return '1=1';
    end if;
  end sel;
  function read_g_empnos return empnos_tt
    is
    begin
      return (g_empnos);
    end;
begin
  beenhere := false; 
end emp_vpd_pp;
/
sho err

In the initialization section of the package we initialize the Boolean variable. Then, when the policy function is executed for the first time (per query) we select the column values we need and save that into the package variable. The second time we execute the policy function we use the values saved in the predicate that is being added.

begin
  sys.dbms_rls.add_policy(object_schema => 'DEMO'
                         ,object_name => 'EMP_VPD'
                         ,policy_name => 'EMP_VPD_SEL'
                         ,function_schema => 'DEMO'
                         ,policy_function => 'EMP_VPD_PP.SEL'
                         ,statement_types => 'SELECT');
end;
/

Notice that the predicate using the Nested Table is always applied, but the Nested Table is only filled during the first execution of the policy function. Using this technique the database only has to execute the expensive query once and its result can be used multiple times at almost no cost.
Using this policy function gives exactly the same result, but the execution time improved dramatically: the database only has to execute the expensive query once per query instead of once for every row.
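
If you want to double-check which predicate actually ended up attached to a given cursor, the V$VPD_POLICY view can be queried (as a suitably privileged user) along these lines - a sketch for illustration:

select sql_id, object_name, policy, predicate
  from v$vpd_policy
 where object_name = 'EMP_VPD'
/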

This post is also available at the AMIS blog

PCFDev application accessing an Oracle 11g RDBMS

Pas Apicella - Sat, 2016-07-30 21:04
PCF Dev is a small footprint distribution of Pivotal Cloud Foundry (PCF) intended to be run locally on a developer machine. It delivers the essential elements of the Pivotal Cloud Foundry experience quickly through a condensed set of components. PCF Dev is ideally suited to developers wanting to explore or evaluate PCF, or those already actively building cloud native applications to be run on PCF. Working with PCF Dev, developers can experience the power of PCF - from the accelerated development cycles enabled by consistent, structured builds to the operational excellence unlocked through integrated logging, metrics and health monitoring and management.

In this example we show how you can use PCFDev to access an Oracle RDBMS from a PCFDev-deployed Spring Boot application. The application uses the classic Oracle EMP database table found in the SCOTT schema.

Source Code as follows


In order to use the steps below you have to have installed PCFDev on your laptop or desktop as per the link below.

Download from here:

Instructions to setup as follows:

Steps

1. Clone as shown below

$ git clone https://github.com/papicella/PCFOracleDemo.git

2. Edit "./PCFOracleDemo/src/main/resources/application.properties" and add your oracle EMP schema connection details

error.whitelabel.enabled=false

oracle.username=scott
oracle.password=tiger
oracle.url=jdbc:oracle:thin:@//192.168.20.131:1521/ora11gr2
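
If the SCOTT schema is locked in your target 11g database (sample schemas are typically locked by default), unlock it first; the password shown simply matches the demo properties above:

-- run as a DBA on the target database
alter user scott identified by tiger account unlock;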

3. Define a local MAVEN repo for Oracle 11g JDBC driver as per what is in the pom.xml
  
<!--
Installed as follows to allow inclusion into pom.xml
mvn install:install-file -DgroupId=com.oracle -DartifactId=ojdbc6 -Dversion=11.2.0.3 -Dpackaging=jar -Dfile=ojdbc6.jar
-DgeneratePom=true
-->
<dependency>
<groupId>com.oracle</groupId>
<artifactId>ojdbc6</artifactId>
<version>11.2.0.3</version>
</dependency>

4. Package as per below

$ cd PCFOracleDemo
$ mvn package

5. Deploy as follows

pasapicella@pas-macbook:~/pivotal/DemoProjects/spring-starter/pivotal/PCFOracleDemo$ cf push
Using manifest file /Users/pasapicella/pivotal/DemoProjects/spring-starter/pivotal/PCFOracleDemo/manifest.yml

Creating app springboot-oracle in org pcfdev-org / space pcfdev-space as admin...
OK

Creating route springboot-oracle.local.pcfdev.io...
OK

Binding springboot-oracle.local.pcfdev.io to springboot-oracle...
OK

Uploading springboot-oracle...
Uploading app files from: /var/folders/c3/27vscm613fjb6g8f5jmc2x_w0000gp/T/unzipped-app506692756
Uploading 26.3M, 154 files
Done uploading
OK

Starting app springboot-oracle in org pcfdev-org / space pcfdev-space as admin...
Downloading binary_buildpack...
Downloading python_buildpack...
Downloading staticfile_buildpack...
Downloading java_buildpack...
Downloading php_buildpack...
Downloading ruby_buildpack...
Downloading go_buildpack...
Downloading nodejs_buildpack...
Downloaded staticfile_buildpack
Downloaded binary_buildpack (8.3K)
Downloaded php_buildpack (262.3M)
Downloaded java_buildpack (241.6M)
Downloaded go_buildpack (450.3M)
Downloaded ruby_buildpack (247.7M)
Downloaded python_buildpack (254.1M)
Downloaded nodejs_buildpack (60.7M)
Creating container
Successfully created container
Downloading app package...
Downloaded app package (23.5M)
Staging...
-----> Java Buildpack Version: v3.6 (offline) | https://github.com/cloudfoundry/java-buildpack.git#5194155
-----> Downloading Open Jdk JRE 1.8.0_71 from https://download.run.pivotal.io/openjdk/trusty/x86_64/openjdk-1.8.0_71.tar.gz (found in cache)
       Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.2s)
-----> Downloading Open JDK Like Memory Calculator 2.0.1_RELEASE from https://download.run.pivotal.io/memory-calculator/trusty/x86_64/memory-calculator-2.0.1_RELEASE.tar.gz (found in cache)
       Memory Settings: -XX:MetaspaceSize=64M -XX:MaxMetaspaceSize=64M -Xss995K -Xmx382293K -Xms382293K
-----> Downloading Spring Auto Reconfiguration 1.10.0_RELEASE from https://download.run.pivotal.io/auto-reconfiguration/auto-reconfiguration-1.10.0_RELEASE.jar (found in cache)
Exit status 0
Staging complete
Uploading droplet, build artifacts cache...
Uploading build artifacts cache...
Uploading droplet...
Uploaded build artifacts cache (109B)
Uploaded droplet (68.4M)
Uploading complete

1 of 1 instances running

App started


OK

App springboot-oracle was started using this command `CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.1_RELEASE -memorySizes=metaspace:64m.. -memoryWeights=heap:75,metaspace:10,native:10,stack:5 -memoryInitials=heap:100%,metaspace:100% -totMemory=$MEMORY_LIMIT) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY" && SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/.:$PWD/.java-buildpack/spring_auto_reconfiguration/spring_auto_reconfiguration-1.10.0_RELEASE.jar org.springframework.boot.loader.JarLauncher`

Showing health and status for app springboot-oracle in org pcfdev-org / space pcfdev-space as admin...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls: springboot-oracle.local.pcfdev.io
last uploaded: Sun Jul 31 01:23:03 UTC 2016
stack: unknown
buildpack: java-buildpack=v3.6-offline-https://github.com/cloudfoundry/java-buildpack.git#5194155 java-main open-jdk-like-jre=1.8.0_71 open-jdk-like-memory-calculator=2.0.1_RELEASE spring-auto-reconfiguration=1.10.0_RELEASE

     state     since                    cpu    memory      disk        details
#0   running   2016-07-31 11:24:26 AM   0.0%   0 of 512M   0 of 512M
pasapicella@pas-macbook:~/pivotal/DemoProjects/spring-starter/pivotal/PCFOracleDemo$ cf apps
Getting apps in org pcfdev-org / space pcfdev-space as admin...
OK

name                requested state   instances   memory   disk   urls
springboot-oracle   started           1/1         512M     512M   springboot-oracle.local.pcfdev.io

6. Access the deployed application at the endpoint "http://springboot-oracle.local.pcfdev.io" or using the application route you set in manifest.yml



Categories: Fusion Middleware

PCFDev application accessing an Oracle 11g RDBMS

Pas Apicella - Sat, 2016-07-30 21:04
PCF Dev is a small footprint distribution of Pivotal Cloud Foundry (PCF) intended to be run locally on a developer machine. It delivers the essential elements of the Pivotal Cloud Foundry experience quickly through a condensed set of components. PCF Dev is ideally suited to developers wanting to explore or evaluate PCF, or those already actively building cloud native applications to be run on PCF. Working with PCF Dev, developers can experience the power of PCF - from the accelerated development cycles enabled by consistent, structured builds to the operational excellence unlocked through integrated logging, metrics and health monitoring and management.

In this example we show how a Spring Boot application deployed to PCF Dev can access an Oracle RDBMS. The application uses the classic Oracle EMP table found in the SCOTT schema.

The source code is in the GitHub repository cloned in step 1 below.
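
For orientation, the heart of the application is roughly a controller like the following (a sketch only, assuming Spring's JdbcTemplate; the actual classes are in that repository):

import java.util.List;
import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class EmpRestController {

    private final JdbcTemplate jdbcTemplate;

    @Autowired
    public EmpRestController(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // returns every row of the classic SCOTT.EMP table as JSON
    @RequestMapping("/emps")
    public List<Map<String, Object>> emps() {
        return jdbcTemplate.queryForList("select empno, ename, job, sal, deptno from emp");
    }
}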


In order to use the steps below you have to have installed PCFDev on your laptop or desktop as per the link below.

Download from here:

Instructions to setup as follows:

Steps

1. Clone as shown below

$ git clone https://github.com/papicella/PCFOracleDemo.git

2. Edit "./PCFOracleDemo/src/main/resources/application.properties" and add your oracle EMP schema connection details

error.whitelabel.enabled=false

oracle.username=scott
oracle.password=tiger
oracle.url=jdbc:oracle:thin:@//192.168.20.131:1521/ora11gr2
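
Since oracle.username, oracle.password and oracle.url are custom property names, one plausible way they could be wired into a DataSource is a small configuration class like the one below (a sketch under that assumption; the repository may wire things differently):

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class OracleDataSourceConfig {

    @Value("${oracle.username}")
    private String username;

    @Value("${oracle.password}")
    private String password;

    @Value("${oracle.url}")
    private String url;

    // builds a DataSource from the oracle.* properties in application.properties
    @Bean
    public DataSource dataSource() {
        return DataSourceBuilder.create()
                .driverClassName("oracle.jdbc.OracleDriver")
                .url(url)
                .username(username)
                .password(password)
                .build();
    }
}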

3. Install the Oracle 11g JDBC driver into your local Maven repository so that the pom.xml dependency below resolves

  
<!--
Installed as follows to allow inclusion into pom.xml
mvn install:install-file -DgroupId=com.oracle -DartifactId=ojdbc6 -Dversion=11.2.0.3 -Dpackaging=jar -Dfile=ojdbc6.jar
-DgeneratePom=true
-->
<dependency>
<groupId>com.oracle</groupId>
<artifactId>ojdbc6</artifactId>
<version>11.2.0.3</version>
</dependency>


4. Package as per below

$ cd PCFOracleDemo
$ mvn package

5. Deploy as follows

pasapicella@pas-macbook:~/pivotal/DemoProjects/spring-starter/pivotal/PCFOracleDemo$ cf push
Using manifest file /Users/pasapicella/pivotal/DemoProjects/spring-starter/pivotal/PCFOracleDemo/manifest.yml

Creating app springboot-oracle in org pcfdev-org / space pcfdev-space as admin...
OK

Creating route springboot-oracle.local.pcfdev.io...
OK

Binding springboot-oracle.local.pcfdev.io to springboot-oracle...
OK

Uploading springboot-oracle...
Uploading app files from: /var/folders/c3/27vscm613fjb6g8f5jmc2x_w0000gp/T/unzipped-app506692756
Uploading 26.3M, 154 files
Done uploading
OK

Starting app springboot-oracle in org pcfdev-org / space pcfdev-space as admin...
Downloading binary_buildpack...
Downloading python_buildpack...
Downloading staticfile_buildpack...
Downloading java_buildpack...
Downloading php_buildpack...
Downloading ruby_buildpack...
Downloading go_buildpack...
Downloading nodejs_buildpack...
Downloaded staticfile_buildpack
Downloaded binary_buildpack (8.3K)
Downloaded php_buildpack (262.3M)
Downloaded java_buildpack (241.6M)
Downloaded go_buildpack (450.3M)
Downloaded ruby_buildpack (247.7M)
Downloaded python_buildpack (254.1M)
Downloaded nodejs_buildpack (60.7M)
Creating container
Successfully created container
Downloading app package...
Downloaded app package (23.5M)
Staging...
-----> Java Buildpack Version: v3.6 (offline) | https://github.com/cloudfoundry/java-buildpack.git#5194155
-----> Downloading Open Jdk JRE 1.8.0_71 from https://download.run.pivotal.io/openjdk/trusty/x86_64/openjdk-1.8.0_71.tar.gz (found in cache)
       Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.2s)
-----> Downloading Open JDK Like Memory Calculator 2.0.1_RELEASE from https://download.run.pivotal.io/memory-calculator/trusty/x86_64/memory-calculator-2.0.1_RELEASE.tar.gz (found in cache)
       Memory Settings: -XX:MetaspaceSize=64M -XX:MaxMetaspaceSize=64M -Xss995K -Xmx382293K -Xms382293K
-----> Downloading Spring Auto Reconfiguration 1.10.0_RELEASE from https://download.run.pivotal.io/auto-reconfiguration/auto-reconfiguration-1.10.0_RELEASE.jar (found in cache)
Exit status 0
Staging complete
Uploading droplet, build artifacts cache...
Uploading build artifacts cache...
Uploading droplet...
Uploaded build artifacts cache (109B)
Uploaded droplet (68.4M)
Uploading complete

1 of 1 instances running

App started


OK

App springboot-oracle was started using this command `CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.1_RELEASE -memorySizes=metaspace:64m.. -memoryWeights=heap:75,metaspace:10,native:10,stack:5 -memoryInitials=heap:100%,metaspace:100% -totMemory=$MEMORY_LIMIT) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY" && SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/.:$PWD/.java-buildpack/spring_auto_reconfiguration/spring_auto_reconfiguration-1.10.0_RELEASE.jar org.springframework.boot.loader.JarLauncher`

Showing health and status for app springboot-oracle in org pcfdev-org / space pcfdev-space as admin...
OK

requested state: started
instances: 1/1
usage: 512M x 1 instances
urls: springboot-oracle.local.pcfdev.io
last uploaded: Sun Jul 31 01:23:03 UTC 2016
stack: unknown
buildpack: java-buildpack=v3.6-offline-https://github.com/cloudfoundry/java-buildpack.git#5194155 java-main open-jdk-like-jre=1.8.0_71 open-jdk-like-memory-calculator=2.0.1_RELEASE spring-auto-reconfiguration=1.10.0_RELEASE

     state     since                    cpu    memory      disk        details
#0   running   2016-07-31 11:24:26 AM   0.0%   0 of 512M   0 of 512M
pasapicella@pas-macbook:~/pivotal/DemoProjects/spring-starter/pivotal/PCFOracleDemo$ cf apps
Getting apps in org pcfdev-org / space pcfdev-space as admin...
OK

name                requested state   instances   memory   disk   urls
springboot-oracle   started           1/1         512M     512M   springboot-oracle.local.pcfdev.io

6. Access the deployed application at the endpoint "http://springboot-oracle.local.pcfdev.io", or at the application route you set in the manifest.yml



Categories: Fusion Middleware

Oracle serializable is not serializable

Yann Neuhaus - Sat, 2016-07-30 17:17

Did you know that when you set the isolation level to SERIALIZABLE, it is not serializable but SNAPSHOT? This isolation level is weaker than serializable. I had never thought about it until I read Markus Winand's slides about transactions. I recommend that every developer and DBA read those slides. This post illustrates write skew in Oracle.

Let's show an example on the SCOTT.EMP table. Say there is an HR directive to increase the salary of one department 10 employee so that the total salary for the department is 9000.
Now let’s imagine that two HR users received the directive at the same time.

User A checks the salaries:

23:18:33 SID=365> select ename,sal from EMP where deptno=10;
 
ENAME SAL
---------- ----------
CLARK 2450
KING 5000
MILLER 1300

The sum is 8750, so User A decides to increase MILLER's salary by an additional 250.

However, to be sure that he is the only one doing that, he starts a transaction at the SERIALIZABLE isolation level, checks the sum again, and does the update:

23:18:40 SID=365> set transaction isolation level serializable;
Transaction set.
 
23:18:41 SID=365> select sum(sal) from EMP where deptno=10;
 
SUM(SAL)
----------
8750
 
23:18:44 SID=365> update EMP set sal=sal+250 where ename='MILLER';
1 row updated.

At the same time, User B does the same but chooses to increase CLARK's salary:


23:18:30 SID=12> set transaction isolation level serializable;
Transaction set.
 
23:18:51 SID=12> select sum(sal) from EMP where deptno=10;
 
SUM(SAL)
----------
8750
 
23:18:53 SID=12> update EMP set sal=sal+250 where ename='CLARK';
1 row updated.

Note that there is no "ORA-08177: can't serialize access for this transaction" here because the updates occur on different rows.

User A checks the sum again and then commits his transaction:


23:18:46 SID=365> select sum(sal) from EMP where deptno=10;
 
SUM(SAL)
----------
9000
 
23:19:04 SID=365> commit;
Commit complete.

And so does User B:


23:18:55 SID=12> select sum(sal) from EMP where deptno=10;
 
SUM(SAL)
----------
9000
 
23:19:08 SID=12> commit;
Commit complete.

However, once both transactions have been committed, the result is different:


23:19:09 SID=12> select sum(sal) from EMP where deptno=10;
 
SUM(SAL)
----------
9250

Actually, what Oracle calls SERIALIZABLE here is only SNAPSHOT isolation. You see the data without the concurrent changes committed after the beginning of your transaction, and you cannot modify a row that has been modified by another session. However, nothing prevents what you have read from being modified by another session. You don't see those modifications, but they can be committed.

The definition of serializability requires that the result be the same as if the transactions had run one after the other. Here, if User A had committed before the start of User B's transaction, the latter would have seen that the total was already at 9000.

In this example, if you want to prevent write skew you need to lock the table in SHARE mode. Locking the rows (with SELECT FOR UPDATE) is sufficient to prevent concurrent updates, but then another user can still insert a new employee, which brings the total salary higher. In addition, row locks are exclusive and you don't want readers to block readers. Locking a range (DEPTNO=10) is not possible in Oracle, so the solution is to lock the table, as sketched below.
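
A minimal sketch of that approach (using the same SCOTT.EMP data; every session that follows the HR procedure must take the same lock):

set transaction isolation level serializable;

-- SHARE mode still allows concurrent readers, but a concurrent UPDATE needs
-- a row-exclusive table lock, which conflicts with the SHARE lock
lock table EMP in share mode;

select sum(sal) from EMP where deptno=10;

-- proceed only if the sum is still below the 9000 target
update EMP set sal=sal+250 where ename='MILLER';

commit; -- the commit releases the table lock

If two sessions both take the SHARE lock and then both try to update, one of them blocks and the conflict surfaces as a wait or a deadlock error instead of the write skew committing silently.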

It seems that only PostgreSQL (version >= 9.1) is able to guarantee true serializability without locking.
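
For comparison, here is a sketch of the same scenario on PostgreSQL 9.1 or later (assuming an emp table mirroring SCOTT.EMP; this is not from the original demo):

-- run in both sessions, in parallel
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT sum(sal) FROM emp WHERE deptno = 10;   -- both sessions see 8750

UPDATE emp SET sal = sal + 250 WHERE ename = 'MILLER';   -- session A
UPDATE emp SET sal = sal + 250 WHERE ename = 'CLARK';    -- session B

COMMIT;
-- one of the two transactions is expected to fail, typically at COMMIT, with
--   ERROR: could not serialize access due to read/write dependencies among transactions
-- and simply has to be retried, so the write skew cannot go through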

 

The article Oracle serializable is not serializable appeared first on Blog dbi services.

What convention to use for PDB?

Michael Dinh - Sat, 2016-07-30 11:55

I don’t know but best to have one.

With a multitenant database, I would use OMF to keep it simple and avoid having to remember FILE_NAME_CONVERT.
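
For illustration, a minimal sketch of the difference (the PDB name and paths below are made up for the example):

-- with OMF: set the destination once and Oracle names and places the datafiles
ALTER SYSTEM SET db_create_file_dest='/oradata' SCOPE=BOTH;
CREATE PLUGGABLE DATABASE demo ADMIN USER demo_adm IDENTIFIED BY oracle;

-- without OMF: every CREATE/clone needs an explicit FILE_NAME_CONVERT pair
CREATE PLUGGABLE DATABASE demo ADMIN USER demo_adm IDENTIFIED BY oracle
  FILE_NAME_CONVERT=('/oradata/CDB1/pdbseed/','/oradata/CDB1/demo/');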

When creating a PDB, an ADMIN USER may be specified, as shown below:

CREATE PLUGGABLE DATABASE april ADMIN USER april_admin IDENTIFIED BY oracle;

Here are some examples I have come across (names and blogs are not provided to protect the innocent):

create pluggable database PDB1 admin user pdb1_admin 
CREATE PLUGGABLE DATABASE TESTPDB ADMIN USER TESTADMIN
CREATE PLUGGABLE DATABASE devpdb2 ADMIN USER pdb2dba
CREATE PLUGGABLE DATABASE mypdb ADMIN USER pdbadmin
CREATE PLUGGABLE DATABASE pdb3 ADMIN USER pdb_adm
admin user App_Admin

It’s no big deal, right? It’s always possible to dig for it.
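
For example, a query along these lines (only a sketch; the cdb_role_privs.sql script used below is not shown in the post) lists who holds the PDB_DBA role in each container:

select con_id, grantee, granted_role
from   cdb_role_privs
where  granted_role = 'PDB_DBA'
order  by con_id, grantee;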

Just playing around.

CDB$ROOT:(SYS@tmnt):PRIMARY> @/media/sf_working/sql/cdb_tbs.sql

 CON_ID NAME     TABLESPACE_NAME      FILE_NAME                                                                                  ONLINE_STATUS
------- -------- -------------------- ------------------------------------------------------------------------------------------ -------------
      1 CDB$ROOT SYSAUX               /oradata/TMNT_A/datafile/o1_mf_sysaux_crx20pot_.dbf                                        ONLINE
      1 CDB$ROOT UNDOTBS1             /oradata/TMNT_A/datafile/o1_mf_undotbs1_crx20ssy_.dbf                                      ONLINE
      1 CDB$ROOT USERS                /oradata/TMNT_A/datafile/o1_mf_users_crx20tq8_.dbf                                         ONLINE
      1 CDB$ROOT SYSTEM               /oradata/TMNT_A/datafile/o1_mf_system_crx20gsp_.dbf                                        SYSTEM
      2 PDB$SEED
      3 APRIL    SYSAUX               /oradata/TMNT_A/380455732D1B1EFFE0530100007F18AB/datafile/o1_mf_sysaux_crx522d1_.dbf       ONLINE
      3 APRIL    USERS                /oradata/TMNT_A/380455732D1B1EFFE0530100007F18AB/datafile/o1_mf_users_crx522d2_.dbf        ONLINE
      3 APRIL    SYSTEM               /oradata/TMNT_A/380455732D1B1EFFE0530100007F18AB/datafile/o1_mf_system_crx522cr_.dbf       SYSTEM

8 rows selected.

CDB$ROOT:(SYS@tmnt):PRIMARY> @/media/sf_working/sql/cdb_role_privs.sql

GRANTEE              GRANTED_ROLE
-------------------- --------------------
C##GGS_ADMIN         CDB_DBA
C##GGS_ADMIN         CDB_DBA
DBSNMP               CDB_DBA
DBSNMP               CDB_DBA
SYS                  CDB_DBA
SYS                  CDB_DBA
C##GGS_ADMIN         DBA
C##GGS_ADMIN         DBA
MDINH                DBA
SYS                  DBA
SYS                  DBA
SYSTEM               DBA
SYSTEM               DBA
APRIL_ADMIN          PDB_DBA
C##GGS_ADMIN         PDB_DBA
C##GGS_ADMIN         PDB_DBA
SYS                  PDB_DBA
SYS                  PDB_DBA
DBA                  XDBADMIN
DBA                  XDBADMIN
SYS                  XDBADMIN
SYS                  XDBADMIN

22 rows selected.

CDB$ROOT:(SYS@tmnt):PRIMARY> CREATE PLUGGABLE DATABASE donnie FROM april;

Pluggable database created.

CDB$ROOT:(SYS@tmnt):PRIMARY> @/media/sf_working/sql/cdb_tbs.sql

 CON_ID NAME     TABLESPACE_NAME      FILE_NAME                                                                                  ONLINE_STATUS
------- -------- -------------------- ------------------------------------------------------------------------------------------ -------------
      1 CDB$ROOT SYSAUX               /oradata/TMNT_A/datafile/o1_mf_sysaux_crx20pot_.dbf                                        ONLINE
      1 CDB$ROOT UNDOTBS1             /oradata/TMNT_A/datafile/o1_mf_undotbs1_crx20ssy_.dbf                                      ONLINE
      1 CDB$ROOT USERS                /oradata/TMNT_A/datafile/o1_mf_users_crx20tq8_.dbf                                         ONLINE
      1 CDB$ROOT SYSTEM               /oradata/TMNT_A/datafile/o1_mf_system_crx20gsp_.dbf                                        SYSTEM
      2 PDB$SEED
      3 APRIL    SYSAUX               /oradata/TMNT_A/380455732D1B1EFFE0530100007F18AB/datafile/o1_mf_sysaux_crx522d1_.dbf       ONLINE
      3 APRIL    USERS                /oradata/TMNT_A/380455732D1B1EFFE0530100007F18AB/datafile/o1_mf_users_crx522d2_.dbf        ONLINE
      3 APRIL    SYSTEM               /oradata/TMNT_A/380455732D1B1EFFE0530100007F18AB/datafile/o1_mf_system_crx522cr_.dbf       SYSTEM
      4 DONNIE

9 rows selected.

Where are the tablespaces and datafiles for DONNIE?

CDB$ROOT:(SYS@tmnt):PRIMARY> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 APRIL                          READ WRITE NO
         4 DONNIE                         MOUNTED
CDB$ROOT:(SYS@tmnt):PRIMARY> alter pluggable database donnie open;

Pluggable database altered.

CDB$ROOT:(SYS@tmnt):PRIMARY> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 APRIL                          READ WRITE NO
         4 DONNIE                         READ WRITE NO
CDB$ROOT:(SYS@tmnt):PRIMARY> @/media/sf_working/sql/cdb_tbs.sql

 CON_ID NAME     TABLESPACE_NAME      FILE_NAME                                                                                  ONLINE_STATUS
------- -------- -------------------- ------------------------------------------------------------------------------------------ -------------
      1 CDB$ROOT SYSAUX               /oradata/TMNT_A/datafile/o1_mf_sysaux_crx20pot_.dbf                                        ONLINE
      1 CDB$ROOT USERS                /oradata/TMNT_A/datafile/o1_mf_users_crx20tq8_.dbf                                         ONLINE
      1 CDB$ROOT UNDOTBS1             /oradata/TMNT_A/datafile/o1_mf_undotbs1_crx20ssy_.dbf                                      ONLINE
      1 CDB$ROOT SYSTEM               /oradata/TMNT_A/datafile/o1_mf_system_crx20gsp_.dbf                                        SYSTEM
      2 PDB$SEED
      3 APRIL    SYSAUX               /oradata/TMNT_A/380455732D1B1EFFE0530100007F18AB/datafile/o1_mf_sysaux_crx522d1_.dbf       ONLINE
      3 APRIL    USERS                /oradata/TMNT_A/380455732D1B1EFFE0530100007F18AB/datafile/o1_mf_users_crx522d2_.dbf        ONLINE
      3 APRIL    SYSTEM               /oradata/TMNT_A/380455732D1B1EFFE0530100007F18AB/datafile/o1_mf_system_crx522cr_.dbf       SYSTEM
      4 DONNIE   SYSAUX               /oradata/TMNT_A/38DDD14641845D0EE0530100007F0289/datafile/o1_mf_sysaux_csso885x_.dbf       ONLINE
      4 DONNIE   USERS                /oradata/TMNT_A/38DDD14641845D0EE0530100007F0289/datafile/o1_mf_users_csso885y_.dbf        ONLINE
      4 DONNIE   SYSTEM               /oradata/TMNT_A/38DDD14641845D0EE0530100007F0289/datafile/o1_mf_system_csso885k_.dbf       SYSTEM

11 rows selected.

PDB DONNIE does not have ADMIN USER.

CDB$ROOT:(SYS@tmnt):PRIMARY> @/media/sf_working/sql/cdb_role_privs.sql

GRANTEE              GRANTED_ROLE
-------------------- --------------------
C##GGS_ADMIN         CDB_DBA
C##GGS_ADMIN         CDB_DBA
C##GGS_ADMIN         CDB_DBA
DBSNMP               CDB_DBA
DBSNMP               CDB_DBA
DBSNMP               CDB_DBA
SYS                  CDB_DBA
SYS                  CDB_DBA
SYS                  CDB_DBA
C##GGS_ADMIN         DBA
C##GGS_ADMIN         DBA
C##GGS_ADMIN         DBA
MDINH                DBA
MDINH                DBA
SYS                  DBA
SYS                  DBA
SYS                  DBA
SYSTEM               DBA
SYSTEM               DBA
SYSTEM               DBA
APRIL_ADMIN          PDB_DBA
APRIL_ADMIN          PDB_DBA
C##GGS_ADMIN         PDB_DBA
C##GGS_ADMIN         PDB_DBA
C##GGS_ADMIN         PDB_DBA
SYS                  PDB_DBA
SYS                  PDB_DBA
SYS                  PDB_DBA
DBA                  XDBADMIN
DBA                  XDBADMIN
DBA                  XDBADMIN
SYS                  XDBADMIN
SYS                  XDBADMIN
SYS                  XDBADMIN

34 rows selected.

CDB$ROOT:(SYS@tmnt):PRIMARY>

UPDATED: Looks like APRIL_ADMIN is for 2 different PDB's. Now that's confusing.

CDB$ROOT:(SYS@tmnt):PRIMARY> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 APRIL                          READ WRITE NO
         4 DONNIE                         READ WRITE NO
CDB$ROOT:(SYS@tmnt):PRIMARY> @/media/sf_working/sql/cdb_role_privs.sql

GRANTEE              GRANTED_ROLE          CON_ID
-------------------- -------------------- -------
C##GGS_ADMIN         CDB_DBA                    1
C##GGS_ADMIN         CDB_DBA                    3
C##GGS_ADMIN         CDB_DBA                    4
DBSNMP               CDB_DBA                    1
DBSNMP               CDB_DBA                    3
DBSNMP               CDB_DBA                    4
SYS                  CDB_DBA                    1
SYS                  CDB_DBA                    3
SYS                  CDB_DBA                    4
C##GGS_ADMIN         DBA                        1
C##GGS_ADMIN         DBA                        3
C##GGS_ADMIN         DBA                        4
MDINH                DBA                        3
MDINH                DBA                        4
SYS                  DBA                        1
SYS                  DBA                        3
SYS                  DBA                        4
SYSTEM               DBA                        1
SYSTEM               DBA                        3
SYSTEM               DBA                        4
APRIL_ADMIN          PDB_DBA                    3
APRIL_ADMIN          PDB_DBA                    4
C##GGS_ADMIN         PDB_DBA                    1
C##GGS_ADMIN         PDB_DBA                    3
C##GGS_ADMIN         PDB_DBA                    4
SYS                  PDB_DBA                    1
SYS                  PDB_DBA                    3
SYS                  PDB_DBA                    4
DBA                  XDBADMIN                   1
DBA                  XDBADMIN                   3
DBA                  XDBADMIN                   4
SYS                  XDBADMIN                   1
SYS                  XDBADMIN                   3
SYS                  XDBADMIN                   4

34 rows selected.

CDB$ROOT:(SYS@tmnt):PRIMARY>
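
A related way to dig for this (a sketch, not one of the scripts used above) is to list the local users per container, which makes it obvious that a cloned PDB carries over the source PDB's admin account:

select con_id, username
from   cdb_users
where  common = 'NO'
order  by con_id, username;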


Wordpress Fatal error: Call to undefined method PHPMailer::getCustomHeaders()

Pakistan's First Oracle Blog - Sat, 2016-07-30 03:06

With numerous new themes and new versions of WordPress, when you try to publish a post or page, send an email, or use a contact form, you might get the following error:


Fatal error: Call to undefined method PHPMailer::getCustomHeaders()





This error normally occurs if you are using incompatible WordPress, theme, and plugin versions. The best course of action is to update WordPress, your theme, and all the plugins, and to make sure that the Contact Form 7 plugin isn't complaining about the WordPress version.

If you cannot upgrade, or if the error still occurs after the upgrade, you can apply the following workaround:

Workaround:

Make the following change in your mail.php file in the directory wordpress/wp-content/plugins/contact-form-7/includes

Go to line 171 and change:
 
add_action( 'phpmailer_init', 'wpcf7_phpmailer_init' );


to
 

//add_action( 'phpmailer_init', 'wpcf7_phpmailer_init' );

Hope that helps.
Categories: DBA Blogs

Hit by the ‘Tech Support’ Scammers

Duncan Davies - Fri, 2016-07-29 12:19

I got a call earlier today from the Tech Support Scammers. You’ve probably heard of this horribly unethical practice already, but the premise is that they cold-call seemingly randomly and try to convince you that there is a problem with your PC/router, and then attempt to get you to allow them remote access to your PC to ‘fix it’. Some then claim problems are due to expired warranties on the computer and demand payment, others setup a boot password and demand money for unlocking it. Either way, it’s a nasty thing to do and is taking advantage of people while pretending to help.

So, I thought I’d play along and see what they’d do.

The call appeared to come from a UK number; however, it clearly originated from an offshore data centre. They introduced themselves as calling from my ISP and said that they'd detected a problem and needed to fix it on my PC. They said they could show me some symptoms to reassure me that they were legitimate. I asked them "who is my ISP?" as I wasn't sure whether they'd know; however, they were able to answer accurately.

The nice lady then got me to open Event Viewer and tell her how many errors and warnings were listed.

Event Viewer

I'd wager that most computers have a heap of entries here, but when I said that there were over 8,000 she did some great play-acting, insisting that 8-10 is a typical number and feigning shock at how this proved the appalling state of my laptop.

Next, she asked me to open a command-prompt and run the ASSOC command. This lists all the file associations on my laptop, and she read out a string and asked me to verify that it matched the CLSID below.

assoc

This, of course, proves nothing as it would be the same on all versions of Windows. However, she said that CLSID stands for “Computer Licence Security ID” and that by knowing this code it proved her authenticity.

At this point she asked me to visit http://www.support.me, which forwards to LogMeIn – a remote control system – and said she was going to pass me over to a technical colleague called ‘David’.

When David came on the line I said I’d used the lengthy delay to google what had happened and found out what they were doing. He then swore at me and hung up.

I feel sorry for the poor people that fall victim to these horrible practices and wish there was some way of stopping them preying on the innocent.


Real Time Scenarios & duplicate rows

Tom Kyte - Fri, 2016-07-29 11:26
I have two questions 1.) Deleting Duplicate rows DELETE FROM EMP1 WHERE ROWID NOT IN (SELECT MIN(ROWID) FROM EMP1 GROUP BY EMPNO); the above command deletes the duplicate rows from a given table such as emp1 in this case, group by eliminat...
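
As a side note, an equivalent way to remove the duplicates (a sketch, not part of the original question) uses an analytic function so the table is read only once:

delete from emp1
where  rowid in ( select rid
                  from ( select rowid as rid,
                                row_number() over (partition by empno order by rowid) rn
                         from   emp1 )
                  where  rn > 1 );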
Categories: DBA Blogs

DBA essentials workshop with release Oracle 12c is Available

Yann Neuhaus - Fri, 2016-07-29 03:23

We are pleased to announce that dbi services has upgraded the infrastructure for the Oracle DBA essentials workshop.
We have migrated from Oracle 11g R2 to the latest Oracle 12c release and installed the latest PSU (April 2016).

During this course you will cover the following topics:

  • Understand the Oracle database architecture
  • Install and manage Oracle databases
  • Administer Oracle databases with a dbi expert, certified OCM or ACE Director :-)

For more information about the workshop, please click on the link

Oracle Database release:

SQL> select product, version, status from product_component_version where product like '%Database%';


PRODUCT                                    VERSION              STATUS
------------------------------------------ -------------------- ------------------------------
Oracle Database 12c Enterprise Edition     12.1.0.2.0           64bit Production

Last PSU installed:

SQL> select patch_id, version, status, description from dba_registry_sqlpatch;


PATCH_ID   VERSION              STATUS          DESCRIPTION
---------- -------------------- --------------- ------------------------------------------------------
22291127   12.1.0.2             SUCCESS         Database Patch Set Update : 12.1.0.2.160419 (22291127)


 

The article DBA essentials workshop with release Oracle 12c is Available appeared first on Blog dbi services.

Shrinking Oracle VM VirtualBox with Zerofree

Yann Neuhaus - Fri, 2016-07-29 03:17

In this blog I would like to talk about shrinking a virtual machine that has Oracle databases and MySQL installed.
Unfortunately, whilst VirtualBox will dynamically expand the hard drive as required, it won't dynamically shrink it again if you free up space in the VM. The good news is that you can shrink a dynamic VirtualBox disk image and so reduce the size of your VM infrastructure.

I used the Zerofree utility, which scans the free blocks in an ext2 or ext3 file system and fills any non-zero blocks with zeroes.

Step by step:
  • Convert .vmdk to .vdi
  • Mount the .vdi to another VM
  • Stop processes Oracle and MySQL
  • Mount file system on read only
  • Use zerofree
  • Shutdown VM
  • Vboxmanage compact
  • Convert .vdi back to .vmdk if you find any reason not to keep the .vdi

My VirtualBox disk uses the ".vmdk" format. The first step is to clone the virtual disk to the ".vdi" format.

root@computer:/media/pio/Elements1/Workshop Oracle DBA1 11g/vmreforadba01# vboxmanage clonehd "vmrefdba01.vmdk" "vmrefdba01.vdi" --format vdi
0%...10%...20%...30%...40%...50%...60%...70%...80%...

When the clone is finished, start the VM from the new ".vdi" file:

Install Zerofree on your VM:

[root@vmrefdba01 zerofree-1.0.4]# yum install e2fsprogs-devel -y
[root@vmrefdba01 zerofree-1.0.4]# wget http://frippery.org/uml/zerofree-1.0.3.tgz
[root@vmrefdba01 zerofree-1.0.4]# tar -zxf zerofree-1.0.3.tgz
[root@vmrefdba01 zerofree-1.0.4]# cd zerofree-1.0.3
[root@vmrefdba01 zerofree-1.0.3]# make
[root@vmrefdba01 zerofree-1.0.3]# cp zerofree /usr/bin

Stop all Oracle and MySQL processes so that the file system can be unmounted:

mysql@vmrefdba01:/home/mysql/ [DUMMY] mysqld_multi stop
oracle@vmrefdba01:/home/oracle/ [WSDBA2] sqh
SQL*Plus: Release 12.1.0.2.0 Production on Thu Jul 28 10:27:29 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> shutdown;
Database closed.
Database dismounted.
ORACLE instance shut down.

Mount the file system in read-only mode:

[root@vmrefdba01 ~]# mount -o ro /dev/mapper/vgdata-lvdata /oracle/
[root@vmrefdba01 ~]# mount -l
/dev/sda2 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,size=3G)
/dev/sda1 on /boot type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
gvfs-fuse-daemon on /root/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev)
/dev/mapper/vgdata-lvdata on /oracle type ext3 (ro)

Use the Zerofree utility to scan the free blocks:

[root@vmrefdba01 ~]# zerofree /dev/mapper/vgdata-lvdata

Shut down the VM and compact the disk:

[root@vmrefdba01 ~]# shutdown -h now
root@computer:/media/pio/Elements1/Workshop Oracle DBA1 11g/vmreforadba01# vboxmanage modifyhd vmrefdba01.vdi --compact
0%...10%...20%...30%...40%...50%...

Check the size of the new virtual disk and you can see that we have reclaimed 20GB of space :D

root@computer:/media/pio/Elements1/Workshop Oracle DBA1 11g/vmreforadba01# du -sh *
740K Logs
44M Snapshots
34G vmrefdba01.vdi --> New disk
54G vmrefdba01.vmdk --> Old disk

Optional: if you want, you can clone the disk back to the ".vmdk" format.

root@computer:/media/pio/Elements1/Workshop Oracle DBA1 11g/vmreforadba01# vboxmanage clonehd "vmrefdba01.vdi" "vmrefdba01_v1.vmdk" --format vmdk
0%...10%...20%...30%...40%...

 

The article Shrinking Oracle VM VirtualBox with Zerofree appeared first on Blog dbi services.
