
Feed aggregator

Snippet: How to query the Sales Cloud user's username in Groovy and in EL

Angelo Santagata - Thu, 2015-02-19 04:16
I've decided to create a new category called "Snippets" to capture small chunks of information which might be useful to other people.

In Groovy
// Get the security context
def secCtx = adf.context.getSecurityContext()
// Check if user has a given role
if (secCtx.isUserInRole('MyAppRole')) {
  // get the current user's name
  def user = secCtx.getUserName()
  // Do something if user belongs to MyAppRole
}


In an EL expression
#{securityContext.userName} 
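The securityContext object exposes role membership in EL as well; a minimal sketch mirroring the Groovy role check above (using the same illustrative role name):

#{securityContext.userInRole['MyAppRole']}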

Test MySQL on AWS quickly

Kubilay Çilkara - Thu, 2015-02-19 02:20
Using sysbench to performance-test AWS RDS MySQL hardware is an easy three-step operation. Sysbench creates synthetic tests, which are run against a 1 million row 'sbtest' table that sysbench creates in the MySQL database you indicate. The test doesn't intrude on your database schema and it doesn't use your data, so it is quite safe. The test is an OLTP test that simulates event operations in the database, running various SELECT, INSERT, UPDATE and DELETE requests against its own 'sbtest' table. The results of the tests are metrics like transactions per second, number of events, elapsed time etc. See the man pages for a description, and Google it; it is a popular testing tool. You can also set it up to control how many requests (events) you want it to execute in a given time, or tell it to keep executing requests until you stop it, for destruction testing. It is a very flexible testing tool with many options, including throttling concurrency. You can be up and running with 3 commands on a Unix system as follows.

Download the sysbench tool (doing this on Ubuntu):

sudo apt-get install sysbench

Create a table with 1 million rows:

sysbench --test=oltp --oltp-table-size=1000000 --mysql-host={your rds host url} --db-driver=mysql --mysql-user={your rds root user} --mysql-password={password} --mysql-db={your mysql database name} prepare

Test with different parameters:

sysbench --test=oltp --oltp-table-size=1000000 --mysql-host={your rds host url} --db-driver=mysql --mysql-user={your rds root user} --mysql-password={password} --mysql-db={your mysql database name} --max-time=60 --num-threads=550 run

Warning: synthetic tests will just show you the capability of the hardware under a given standard set of requests and DML operations. They are in no way an indication of what will happen to your database if the real application workload increases. Application load testing is something else; applications are complex! Database workload depends on the application-generated workload from real users using the system, and that is very hard to simulate in a test. It is not impossible if you use a database such as Oracle, which has the capability of recording and replaying its production database workload - called Automatic Workload Repository (AWR). In MySQL I couldn't find a way to do this so far. But sysbench synthetic tests gave me the ability to quickly benchmark and baseline a MySQL database's capabilities on different AWS Amazon hardware; something is better than nothing I suppose.
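When you are finished, sysbench can also drop the synthetic 'sbtest' table it created; a minimal sketch, assuming the same connection options as above:

sysbench --test=oltp --mysql-host={your rds host url} --db-driver=mysql --mysql-user={your rds root user} --mysql-password={password} --mysql-db={your mysql database name} cleanup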
Categories: DBA Blogs

255 columns

Jonathan Lewis - Wed, 2015-02-18 18:45

You all know that having more than 255 columns in a table is a Bad Thing ™ – and surprisingly you don’t even have to get to 255 to hit the first bad thing about wide tables. If you’ve ever wondered what sorts of problems you can have, here are a few:

  • If you’re still running 10g and gather stats on a table with more than roughly 165 columns then the query Oracle uses to collect the stats will only handle about 165 of them at a time; so you end up doing multiple (possibly sampled) tablescans to gather the stats. The reason why I can’t give you an exact figure for the number of columns is that it depends on the type and nullity of the columns – Oracle knows that some column types are fixed length (e.g. date types, char() types) and if any columns are declared not null then Oracle doesn’t have to worry about counting nulls – so for some of the table columns Oracle will be able to eliminate one or two of the related columns it normally includes in the stats-gathering SQL statement – which means it can gather stats on a few more table columns.  The 165-ish limit doesn’t apply in 11g – though I haven’t checked to see if there’s a larger limit before the same thing happens.
  • If you have more than 255 columns in a row Oracle will split it into multiple row pieces of 255 columns each plus one row piece for “the rest”; but the split counts from the end, so if you have a table with 256 columns the first row-piece has one column and the second row-piece has 255 columns. This is bad news for all sorts of operations because Oracle will have to expend extra CPU chasing the row pieces to make use of any column not in the first row piece. The optimists among you might have expected “the rest” to be in the last row piece. If you want to be reminded how bad row-chaining can get for wide tables, just have a look at an earlier blog note of mine (starting at this comment).
  • A particularly nasty side effect of the row split comes with direct path tablescans – and that’s what Oracle does automatically when the table is large. In many cases all the row pieces for a row will be in the same block; but they might not be, and if a continuation row-piece is in a different block Oracle will do a “db file sequential read” to read that block into the buffer cache and it won’t be cached (see 1st comment below).  As an indication of how badly this can affect performance, the results I got at a recent client site showed “select count(col1) from wide_table” taking 10  minutes while “select count(column40) from wide_table” took 22 minutes because roughly one row in a hundred required a single block read to follow the chain.
  • An important side effect of the split point is that you really need to put the columns you’re going to index near the start of the table to minimise the risk of this row chaining overhead when you create or rebuild an index.
  • On top of everything else, of course, it takes a surprisingly large amount of extra CPU to load a large table if the rows are chained. Another client test reported 140 CPU seconds to load 5M rows of 256 columns, but only 20 CPU seconds to load 255.

If you are going to have tables with more than 255 columns, think very carefully about column order – if you can get all the columns that are almost always null at the end of the row you may get lucky and find that you never need to create a secondary row piece. A recent client had about 290 columns in one table of 16M rows, and 150 columns were null for all 16M rows – unfortunately they had a mandatory “date_inserted” column at the end of the row, but with a little column re-arrangement they eliminated row chaining and saved (more than) 150 bytes storage per row.  Of course, if they have to add and back-fill a non-null column to the table they’re going to have to rebuild the table to insert the column “in the middle”, otherwise all new data will be chained and wasting 150 bytes per row, and any old data that gets updated will suffer a row migration/chain catastrophe.
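If you want to measure how much of this continued-row fetching your own session is doing, a minimal check (assuming you can query the v$mystat and v$statname views) is to look at the 'table fetch continued row' statistic before and after running the query:

select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  sn.statistic# = ms.statistic#
and    sn.name = 'table fetch continued row'
/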


The most frequently abused feature in APEX

Denes Kubicek - Wed, 2015-02-18 16:02
Today I participated in Scott's survey:

https://apex.oracle.com/pls/apex/f?p=70347:Q::AAC3

and there was one really interesting question:

"What do you think is most frequently abused? (and why)"

If APEX has any weak points, then it is definitely the fact that you can place your code almost everywhere - especially PL/SQL chunks of code. If I start counting and list all the places, it will be a very long list:

  • Page Read Only and Cache Page Condition
  • Region Conditional Display and Read Only Condition
  • Buttons Conditional Display
  • Items Conditional Display
  • Items Source value or expression
  • Items Post Calculation Computation
  • Items Default value
  • Items Conditional Display and Read Only Condition
  • Computations
  • Processes on Submit, on Load, on Demand
  • Validations and Validation Conditions
  • ...and yes, Dynamic Actions as my favorite.


There is of course more but I will stop counting here. If you start working with APEX this is great - you will easily get on target and have an application up and running in no time. A little bit of code here, a little bit there and there we go.

This problem becomes obvious if your application is more than just a small and temporary solution shared between a couple of people. As an application grows, it will start suffering from performance issues. Furthermore, it will be hard to maintain. It will have dozens of complex pages with many items, many computations and processes, conditional buttons, validations and dynamic actions. If you practice writing anonymous PL/SQL blocks and paste those wherever possible, your code will become redundant and slow. You will probably repeat the same code many times on your page. This means your pages will require more processing time than they should. I would even go so far as to say this is almost as bad as putting your business logic into triggers. Have you ever had a chance to debug such applications (when you didn't know the code was placed there)?

Having so many options is of course great and useful. But it doesn't mean you should use all of them. The other important thing to know is that you should never write anonymous PL/SQL blocks in your applications. Never! Whenever you need to use PL/SQL, you should use packages and place your code there. The probability that you will then repeat your code is quite low, and your application will certainly run much faster.
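A minimal sketch of the idea (the package, table and item names here are hypothetical): instead of pasting the same anonymous block into several conditions and processes, wrap the logic in a package function and call it from each place.

create or replace package app_logic as
  -- Returns TRUE if the order may still be edited
  function is_order_editable (p_order_id in number) return boolean;
end app_logic;
/
create or replace package body app_logic as
  function is_order_editable (p_order_id in number) return boolean is
    l_status orders.status%type;
  begin
    select status into l_status from orders where order_id = p_order_id;
    return l_status = 'OPEN';
  end is_order_editable;
end app_logic;
/

In an APEX condition of type "PL/SQL Function Body Returning Boolean" the whole body then shrinks to a single, reusable line:

return app_logic.is_order_editable(:P1_ORDER_ID);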

    My proposal for the future version of APEX would be to have a subtitle for each of those containers saying "Handle with care" and providing some explanations why. I am serious here. Really.

    Categories: Development

    Greenplum is being open sourced

    DBMS2 - Wed, 2015-02-18 15:51

    While I don’t find the Open Data Platform thing very significant, an associated piece of news seems cooler — Pivotal is open sourcing a bunch of software, with Greenplum as the crown jewel. Notes on that start:

    • Greenplum has been an on-again/off-again low-cost player since before its acquisition by EMC, but open source is basically a commitment to having low license cost be permanently on.
    • In most regards, “free like beer” is what’s important here, not “free like speech”. I doubt non-Pivotal employees are going to do much hacking on the long-closed Greenplum code base.
    • That said, Greenplum forked PostgreSQL a long time ago, and the general PostgreSQL community might gain ideas from some of the work Greenplum has done.
    • The only other bit of newly open-sourced stuff I find interesting is HAWQ. Redis was already open source, and I’ve never been persuaded to care about GemFire.

    Greenplum, let us recall, is a pretty decent MPP (Massively Parallel Processing) analytic RDBMS. Various aspects of it were oversold at various times, and I’ve never heard that they actually licked concurrency. But Greenplum has long had good SQL coverage and petabyte-scale deployments and a columnar option and some in-database analytics and so on; i.e., it’s legit. When somebody asks me about open source analytic RDBMS to consider, I expect Greenplum to consistently be on the short list.

    Further, the low-cost alternatives for analytic RDBMS are adding up.

    • Amazon Redshift has considerable traction.
    • Hadoop (even just with Hive) has offloaded a lot of ELT (Extract/Load/Transform) from analytic RDBMS such as Teradata.
    • Now Greenplum is in the mix as well.

    For many analytic RDBMS use cases, at least one of those three will be an appealing possibility.

    By no means do I want to suggest those are the only alternatives.

    • Smaller-vendor offerings, such as CitusDB or Infobright, may well be competitive too.
    • Larger vendors can always slash price in specific deals.
    • MonetDB is still around.

    But the three possibilities I cited first should suffice as proof for almost all enterprises that, for most use cases not requiring high concurrency, analytic RDBMS need not cost an arm and a leg.

    Related link

    Categories: Other

    Hadoop: And then there were three

    DBMS2 - Wed, 2015-02-18 15:50

    Hortonworks, IBM, EMC Pivotal and others have announced a project called “Open Data Platform” to do … well, I’m not exactly sure what. Mainly, it sounds like:

    • An attempt to minimize the importance of any technical advantages Cloudera or MapR might have.
    • A face-saving way to admit that IBM’s and Pivotal’s insistence on having their own Hadoop distributions has been silly.
    • An excuse for press releases.
    • A source of an extra logo graphic to put on marketing slides.

    Edit: Now there’s a press report saying explicitly that Hortonworks is taking over Pivotal’s Hadoop distro customers (which basically would mean taking over the support contracts and then working to migrate them to Hortonworks’ distro).

The claim is being made that this announcement solves some kind of problem about developing to multiple versions of the Hadoop platform, but to my knowledge that's a problem rarely encountered in real life. When you already have a multi-enterprise open source community agreeing on APIs (Application Programming Interfaces), what API inconsistency remains for a vendor consortium to painstakingly resolve?

    Anyhow, it now seems clear that if you want to use a Hadoop distribution, there are three main choices:

    • Cloudera’s flavor, whether as software (from Cloudera) or in an appliance (e.g. from Oracle).
    • MapR’s flavor, as software from MapR.
    • Hortonworks’ flavor, from a number of vendors, including Hortonworks, IBM, Pivotal, Teradata et al.

    In saying that, I’m glossing over a few points, such as:

    • There are various remote services that run Hadoop, most famously Amazon’s Elastic MapReduce.
    • You could get Apache Hadoop directly, rather than using the free or paid versions of a vendor distro. But why would you make that choice, unless you’re an internet bad-ass on the level of Facebook, or at least think that you are?
    • There will surely always be some proprietary stuff mixed into, for example, IBM’s BigInsights, so as to preserve at least the perception of all-important vendor lock-in.

    But the main point stands — big computer companies, such as IBM, EMC (Pivotal) and previously Intel, are figuring out that they can’t bigfoot something that started out as an elephant — stuffed or otherwise — in the first place.

    If you think I’m not taking this whole ODP thing very seriously, you’re right.

    Related links

    • It’s a bit eyebrow-raising to see Mike Olson take a “more open source than thou” stance about something, but basically his post about this news is spot-on.
    • My take on Hadoop distributions two years ago might offer context. Trivia question: What’s the connection between the song that begins that post and the joke that ends it?
    Categories: Other

    PeopleTools 8.54: Materialized Views

    David Kurtz - Wed, 2015-02-18 15:30
    This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.
Materialized Views in the Database
Snapshots were introduced in Oracle 7.  They were a way of building and maintaining a physical table to hold the results of a SQL query.  We are well used to a view being the results of a query, but here the results are materialised into a physical table.  Hence the feature was renamed materialized views in Oracle 8i.

    Today materialized views are one among many replication technologies.  They are available in standard edition Oracle, and there is no licensing implication in their use.

Materialized views can generally be put into two categories:
    • A simple, single-table materialized view. 
      • Often used to replicate data across a database link from another database.  Or to produce a subset of the data.
      • Can be refreshed incrementally using a PL/SQL package supplied by Oracle
        •  A materialized view log is created on the source table to record all the changes made to the source table.  It holds the primary key of the changed row, or the rowid (physical address of the row).  It can optionally hold additional columns.  It is populated by a database trigger on the source table (since Oracle 8i that trigger is hidden).
    • Multi-table materialized view
      • Usually done within a single database rather than across a database link.
      • Can only be refreshed by being completely rebuilt. 
      • If the same query as is used in a materialized view is submitted, Oracle can rewrite the query to use the materialized view instead.  Query rewrite only occurs subject to configuration and certain pre-requisites being met.
    Materialized Views can be populated immediately when they are built, or later on demand.  They can be refreshed on demand, on a regular schedule by a database job, or immediately when an update to a source table is committed.  Materialized views can be put into Refresh Groups.  All the materialized views in a refresh group are refreshed in the same database transaction so the data in the materialized views is consistent.
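For example, an on-demand refresh can be requested through the Oracle-supplied DBMS_MVIEW package; a minimal sketch, using the PS_DMK materialized view that appears in the examples below ('C' requests a complete refresh, 'F' a fast one):

EXECUTE DBMS_MVIEW.REFRESH(list => 'PS_DMK', method => 'C');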

Materialized views can be updatable and used for bidirectional replication.  I am not going to talk about that here.

    When you introduce materialized views into an application you need to consider what you are trying to achieve, and make design decisions accordingly.
Materialized Views in PeopleTools 8.54
Using this new feature in PeopleSoft is deceptively easy, but quite a lot is going on behind the scenes.
    PeopleSoft Documentation (the term PeopleBooks seems to have been replaced by PeopleSoft Online Help): Data Management, Using Materialized Views provides an overview.

    There are two new PeopleTools tables:
• PSPTMATVWDEFN - additional definition fields for the materialized view: build, refresh, staleness, stats.  It doesn't contain the query; that is in PSSQLTEXTDEFN, as it is for all other views.
• PSPTMATVWDEP - lists the tables upon which the materialized view depends.  PeopleSoft seems to work this out for itself by parsing the SQL query.
    I am going to demonstrate some aspects of the feature by running through some examples.
Example 1: Replicate part of PS_JOB across a database link
In this example I am going to use a materialized view to replicate a table from another Oracle database across a database link.
If I select SQL View, the Materialized View check box appears; if I check the checkbox, the Materialized View Options appear.

This is the build script generated by Application Designer:
    DROP VIEW PS_DMK
    /
    CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
    DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
    DEPTID FROM PS_JOB@HROTHER
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /
    DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
    /
    DROP VIEW PS_DMK
    /
    CREATE MATERIALIZED VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ,
    SETID_DEPT, DEPTID) TABLESPACE PSMATVW BUILD DEFERRED REFRESH FAST
    ON DEMAND DISABLE QUERY REWRITE AS SELECT EMPLID , EMPL_RCD , EFFDT ,
    EFFSEQ , SETID_DEPT , DEPTID FROM PS_JOB@HROTHER
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /
    However, if the materialized view already exists, the script will drop it, recreate and drop the view, and then recreate the materialized view.
    DROP MATERIALIZED VIEW PS_DMK
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /
    CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
    DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
    DEPTID FROM PS_JOB@HROTHER
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /
    DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
    /
    DROP VIEW PS_DMK
    /
    CREATE MATERIALIZED VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ,
    SETID_DEPT, DEPTID) TABLESPACE PSMATVW BUILD IMMEDIATE REFRESH FAST
    ON DEMAND DISABLE QUERY REWRITE AS SELECT EMPLID , EMPL_RCD , EFFDT ,
    EFFSEQ , SETID_DEPT , DEPTID FROM PS_JOB@HROTHER
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 1,
    (PTMAT_LASTREFRESH, PTMAT_STALENESS) = (SELECT LAST_REFRESH_DATE,
    STALENESS FROM USER_MVIEWS WHERE MVIEW_NAME = 'PS_DMK') WHERE RECNAME
    = 'DMK'
    /
• The Application Designer build script creates the materialized view using primary key based replication.  There is no WITH PRIMARY KEY clause specified because it is the default.  There appears to be no way to get Application Designer to generate a WITH ROWID clause, so it is not possible to replicate a single table without a unique key.  You might question whether that is useful, but it is possible in Oracle.
• If there is no primary key on the source table, you will need to add one.  If this is on another system, or a pre-8.54 PeopleSoft system, you will need to do this manually.  Otherwise you will get this error message:
    ERROR at line 4:
    ORA-12014: table 'PS_JOB' does not contain a primary key constraint
• If you specify any key columns on the materialized view, it does not result in the usual indexes that you get on tables.  Also, it is not possible to add additional user-specified indexes to the materialized view - the option is greyed out.  This is rather disappointing, because you might want to do exactly that so you can query the materialized view in different ways to the underlying table (though see the sketch after this list).
      • You will get a unique index on a materialized view that is replicated by primary key, because the primary key will be inherited from the underlying table.
    • Nor is it possible to specify partitioning in Application Designer on a materialized view.
    • You can specify storage options on the materialized view via Record DDL, but the storage options do not appear in the CREATE MATERIALIZED VIEW statement in the build script.  This is rather disappointing because you don't need to provide free space for updates in a materialized view which is completely refreshed each time, but you might if you do incremental update.
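Nothing stops you adding such an index manually outside PeopleTools once the materialized view exists; a minimal sketch (the index name and the PSINDEX tablespace are illustrative assumptions):

CREATE INDEX PS0DMK ON PS_DMK (DEPTID, SETID_DEPT) TABLESPACE PSINDEX
/

Bear in mind, as noted under the recommendations below, that the build script can drop and recreate the materialized view, taking any manually created index with it.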
Example 2: Replicate part of PS_JOB locally
In this example, I am again only replicating 6 named columns into my materialized view.
    DROP VIEW PS_DMK
    /
    CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
    DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
    DEPTID FROM PS_JOB
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /
    ALTER TABLE PS_JOB DROP CONSTRAINT PS_JOB_PK
    /
    DROP MATERIALIZED VIEW LOG ON PS_JOB
    /
    DELETE FROM PSPTMATVWDEP WHERE RECNAME = 'DMK' AND PTMAT_BASETBL =
    'PS_JOB'
    /
    ALTER TABLE PS_JOB ADD CONSTRAINT PS_JOB_PK PRIMARY KEY (EFFDT, EFFSEQ
    , EMPLID, EMPL_RCD)
    /
    CREATE MATERIALIZED VIEW LOG ON PS_JOB TABLESPACE PSMATVW WITH PRIMARY
    KEY, ROWID, SEQUENCE(DEPTID, SETID_DEPT) INCLUDING NEW VALUES PURGE
    IMMEDIATE
    /
    INSERT INTO PSPTMATVWDEP(RECNAME, PTMAT_BASETBL) VALUES('DMK',
    'PS_JOB')
    /
    DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
    /
    DROP VIEW PS_DMK
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /
I don't know why it rebuilds the non-materialized view as a normal view and drops the primary key constraint every time, but it does.  You might not want to do this every time for a large materialized view that takes time to build.
If the materialized view log has been built, the next time you generate the view build script it creates and drops the view and then builds the materialized view.
    CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
    DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
    DEPTID FROM PS_JOB
    /
    DELETE FROM PSPTMATVWDEP WHERE RECNAME = 'DMK' AND PTMAT_BASETBL =
    'PS_JOB'
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /
    ALTER TABLE PS_JOB DROP CONSTRAINT PS_JOB_PK
    /
    DROP MATERIALIZED VIEW LOG ON PS_JOB
    /
    DELETE FROM PSPTMATVWDEP WHERE RECNAME = 'DMK' AND PTMAT_BASETBL =
    'PS_JOB'
    /
    ALTER TABLE PS_JOB ADD CONSTRAINT PS_JOB_PK PRIMARY KEY (EFFDT, EFFSEQ
    , EMPLID, EMPL_RCD)
    /
    CREATE MATERIALIZED VIEW LOG ON PS_JOB TABLESPACE PSMATVW WITH PRIMARY
    KEY, ROWID, SEQUENCE(DEPTID, SETID_DEPT) INCLUDING NEW VALUES PURGE
    IMMEDIATE
    /
    INSERT INTO PSPTMATVWDEP(RECNAME, PTMAT_BASETBL) VALUES('DMK',
    'PS_JOB')
    /
    DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
    /
    DROP VIEW PS_DMK
    /
    CREATE MATERIALIZED VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ,
    SETID_DEPT, DEPTID) TABLESPACE PSMATVW BUILD DEFERRED REFRESH FAST
    ON DEMAND DISABLE QUERY REWRITE AS SELECT EMPLID , EMPL_RCD , EFFDT ,
    EFFSEQ , SETID_DEPT , DEPTID FROM PS_JOB
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 1,
    (PTMAT_LASTREFRESH, PTMAT_STALENESS) = (SELECT LAST_REFRESH_DATE,
    STALENESS FROM USER_MVIEWS WHERE MVIEW_NAME = 'PS_DMK') WHERE RECNAME
    = 'DMK'
    /
    • It is rather odd to see a build script update a PeopleTools table. Application Designer also updates the PSPTMATVWDEFN table itself every time it generates the build script.  Note that the script doesn't issue an explicit commit, so if you execute the build script in SQL*Plus remember to commit to release the row level lock on PSPTMATVWDEFN.  
    • Application Designer flip-flops between these two build scripts that will repeatedly drop and create the materialized view and materialized view log.  Unless you are very careful you might not know whether you have the objects in the desired state.
• The materialized view and materialized view log are always created in tablespace PSMATVW.  This is a new tablespace delivered in the standard tablespace script.  It is not possible to set the tablespace to something else, as it is for a normal table, because the record is defined as a view.  This is unfortunate because I might not want all my materialized views in the same tablespace.
• Even though the materialized view is replicated by primary key, the materialized view log also contains the rowid and the supplementary columns.  This is overkill.  The materialized view log as built by Application Designer contains every length-bounded column in the source table.  This can significantly increase the overhead of the materialized view log, which is maintained as other processes update the source table.
    SQL> desc mlog$_ps_job
Name                          Null?    Type
----------------------------- -------- -------------------
EFFDT                                  DATE
EFFSEQ                                 NUMBER
EMPLID                                 VARCHAR2(11 CHAR)
EMPL_RCD                               NUMBER
DEPTID                                 VARCHAR2(10 CHAR)
SETID_DEPT                             VARCHAR2(5 CHAR)
M_ROW$$                                VARCHAR2(255 CHAR)
SEQUENCE$$                             NUMBER
SNAPTIME$$                             DATE
DMLTYPE$$                              VARCHAR2(1 CHAR)
OLD_NEW$$                              VARCHAR2(1 CHAR)
CHANGE_VECTOR$$                        RAW(255)
XID$$                                  NUMBER
      • If I just created the materialized view log as follows
    CREATE MATERIALIZED VIEW LOG ON PS_JOB TABLESPACE PSMATVW 
    WITH PRIMARY KEY
    --, ROWID, SEQUENCE(DEPTID, SETID_DEPT)
    INCLUDING NEW VALUES PURGE IMMEDIATE
    /
      • then the materialized view log contains fewer columns
    Name                          Null?    Type
    ----------------------------- -------- --------------------
EFFDT                                  DATE
EFFSEQ                                 NUMBER
EMPLID                                 VARCHAR2(11 CHAR)
EMPL_RCD                               NUMBER
SNAPTIME$$                             DATE
DMLTYPE$$                              VARCHAR2(1 CHAR)
OLD_NEW$$                              VARCHAR2(1 CHAR)
CHANGE_VECTOR$$                        RAW(255)
XID$$                                  NUMBER
    • The materialized view inherits the primary key from the source table because it is a single table materialized view replicated using the primary key.  Therefore there is also a unique index on this materialised view.
    SELECT constraint_name, constraint_type, table_name, index_name
    FROM user_constraints
    WHERE table_name = 'PS_DMK'
    AND constraint_type != 'C'
    /

CONSTRAINT_NAME      C TABLE_NAME INDEX_NAME
-------------------- - ---------- ----------
PS_JOB_PK1           P PS_DMK     PS_JOB_PK1

    SELECT index_name, index_type, uniqueness
    FROM user_indexes
    WHERE table_name = 'PS_DMK'
    /

INDEX_NAME INDEX_TYPE UNIQUENES
---------- ---------- ---------
PS_JOB_PK1 NORMAL     UNIQUE

    SELECT index_name, column_position, column_name, descend
    FROM user_ind_columns
    WHERE table_name = 'PS_DMK'
    /

INDEX_NAME COLUMN_POSITION COLUMN_NAME          DESC
---------- --------------- -------------------- ----
PS_JOB_PK1               1 EFFDT                ASC
PS_JOB_PK1               2 EFFSEQ               ASC
PS_JOB_PK1               3 EMPLID               ASC
PS_JOB_PK1               4 EMPL_RCD             ASC




  • The build script clears out the MV_CAPABILITIES_TABLE when it drops the materialized view.  This table is used to hold the output from DBMS_MVIEW.EXPLAIN_MVIEW (see the Oracle Database Data Warehousing Guide), which Application Designer executes when the materialized view record is saved.

Example 3: DMK_DPT_SEC_MVW is a materialised view cloned from security view DEPT_SEC_SRCH
This view references various tables (I have edited out column lists and predicates for readability):

    SELECT …
    FROM PS_DEPT_TBL DEPT
    , PSOPRDEFN OPR
    WHERE EXISTS (
    SELECT 'X'
    FROM PS_SJT_DEPT SEC
    , PS_SJT_CLASS_ALL CLS
    , PS_SJT_OPR_CLS SOC
    …)
    OR EXISTS (
    SELECT 'X'
    FROM PS_SJT_DEPT SEC
    , PS_SJT_CLASS_ALL CLS
    , PS_SJT_OPR_CLS SOC
    …)
    OR EXISTS (
    SELECT 'X'
    FROM PS_SJT_DEPT SEC
    , PS_SJT_CLASS_ALL CLS
    , PS_SJT_OPR_CLS SOC
    …)
But only 4 tables appear in PSPTMATVWDEP.  PS_SJT_DEPT was not added.
    SELECT * FROM psptmatvwdep WHERE recname = 'DMK_DPT_SEC_MVW'
    /

    RECNAME PTMAT_BASETBL
    --------------- ------------------
    DMK_DPT_SEC_MVW PSOPRDEFN
    DMK_DPT_SEC_MVW PS_DEPT_TBL
    DMK_DPT_SEC_MVW PS_SJT_CLASS_ALL
    DMK_DPT_SEC_MVW PS_SJT_OPR_CLS
I think this is because it tried and failed to add the primary key constraint and materialized view log to PS_SJT_DEPT, because that table has a 'duplicate key' defined in Application Designer.  The following errors are found in the build log even if the build script is not executed.
    ALTER TABLE PS_SJT_DEPT ADD CONSTRAINT PS_SJT_DEPT_PK PRIMARY KEY 
    (SCRTY_KEY1, SCRTY_KEY2, SCRTY_KEY3, SCRTY_TYPE_CD, SETID)
    Error: DMK_DPT_SEC_MVW - SQL Error. Error Position: 39 Return: 2437
    - ORA-02437: cannot validate (SYSADM.PS_SJT_DEPT_PK) - primary key violated

    CREATE MATERIALIZED VIEW LOG ON PS_SJT_DEPT TABLESPACE PSMATVW
    WITH PRIMARY KEY, ROWID, SEQUENCE (DEPTID, EFFDT_NOKEY)
    INCLUDING NEW VALUES PURGE IMMEDIATE
    Error: DMK_DPT_SEC_MVW - SQL Error. Error Position: 39 Return: 2437
    - ORA-02437: cannot validate (SYSADM.PS_SJT_DEPT_PK) - primary key violated
Application Designer worked out that PS_SJT_DEPT was referenced in the materialized view query, but it didn't check whether the table has a unique key defined in PeopleTools.

We didn't get as far as creating the materialized view.  However, Application Designer passed the CREATE MATERIALIZED VIEW command to the EXPLAIN_MVIEW function in order to populate MV_CAPABILITIES_TABLE:
    EXECUTE DBMS_MVIEW.EXPLAIN_MVIEW (q'[CREATE MATERIALIZED VIEW PS_DMK_DPT_SEC_MVW (SETID, OPRID, DEPTID, DESCR
    , DESCRSHORT, SETID_LOCATION, LOCATION, MANAGER_ID, COMPANY, USE_BUDGETS, USE_ENCUMBRANCES)
    TABLESPACE PSMATVW BUILD DEFERRED REFRESH FAST ON DEMAND AS
    SELECT DEPT.SETID, OPR.OPRID, DEPT.DEPTID , DEPT.DESCR , DEPT.DESCRSHORT , DEPT.SETID_LOCATION
    , DEPT.LOCATION , DEPT.MANAGER_ID , DEPT.COMPANY , DEPT.USE_BUDGETS , DEPT.USE_ENCUMBRANCES
    FROM PS_DEPT_TBL DEPT , PSOPRDEFN OPR
    WHERE EXISTS (
    SELECT 'X' FROM PS_SJT_DEPT SEC , PS_SJT_CLASS_ALL CLS , PS_SJT_OPR_CLS SOC
    WHERE SEC.SETID = DEPT.SETID AND SEC.DEPTID = DEPT.DEPTID AND SEC.EFFDT_NOKEY = DEPT.EFFDT
    AND CLS.SCRTY_SET_CD = 'PPLJOB' AND CLS.SCRTY_TYPE_CD = '001' AND CLS.TREE = 'Y'
    AND CLS.SCRTY_KEY1 = SEC.SCRTY_KEY1 AND CLS.SCRTY_KEY2 = SEC.SCRTY_KEY2
    AND CLS.SCRTY_KEY3 = SEC.SCRTY_KEY3 AND SOC.OPRID
Example 4: DMK_JOB_CUR_MVW is a materialized view cloned from JOB_CURR_ALL_VW
In this case I will try to create a materialized view on a complex query, but this time the underlying table has a unique key.  When I try to build the materialized view I get the following entries in the error log.  These warnings were obtained from the entries in MV_CAPABILITIES_TABLE, which was populated by an attempt to describe the query.
    SQL Build process began on 16/02/2015 at 21:05:30 for database HR92U011. 
    Error: Cannot create Materialized View on record DMK_JOB_CUR_MVW.
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_COMPLETE| Y | | |
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST| N | | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE| N | | |
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_INSERT| N | aggregate function in mv | |
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_INSERT| N | multiple instances of the same table or view | |
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_ONETAB_DML| N | see the reason why REFRESH_FAST_AFTER_INSERT is disabled | |
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_ANY_DML| N | see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_FULL_TEXT_MATCH| N | Oracle error: see RELATED_NUM and RELATED_TEXT for details |expression not supported for query rewrite |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_FULL_TEXT_MATCH| N | Oracle error: see RELATED_NUM and RELATED_TEXT for details |expression not supported for query rewrite |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_FULL_TEXT_MATCH| N | query rewrite is disabled on the materialized view | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_PARTIAL_TEXT_MATCH| N | materialized view cannot support any type of query rewrite | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_PARTIAL_TEXT_MATCH| N | query rewrite is disabled on the materialized view | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_GENERAL| N | subquery present in the WHERE clause | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_GENERAL| N | materialized view cannot support any type of query rewrite | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_GENERAL| N | query rewrite is disabled on the materialized view | |

    SQL Build process ended on 16/02/2015 at 21:05:30.
    1 records processed, 1 errors, 15 warnings.
    SQL Build script for all processes written to file C:\Temp\PSBUILD.SQL.
    SQL executed online.
    SQL Build log file written to C:\Temp\PSBUILD.LOG.
    • So, Application Designer does try to prevent you from creating materialized views that Oracle won't manage, but the messages back are a little obscure.
    • If I change the refresh mode to Complete, Application Designer does not create materialized view logs. 
    CREATE MATERIALIZED VIEW PS_DMK_JOB_CUR_MVW (EMPLID, EMPL_RCD,
    ACTION_DT, BUSINESS_UNIT, EMPL_STATUS, HR_STATUS, DEPTID, JOBCODE,
    LOCATION, POSITION_NBR, ACTION, ACTION_REASON, COMP_FREQUENCY,
    COMPRATE, CURRENCY_CD, SAL_ADMIN_PLAN, GRADE, COMPANY, PAY_SYSTEM_FLG
    , PAYGROUP, REG_TEMP, FULL_PART_TIME, SETID_DEPT, SETID_JOBCODE,
    SETID_LOCATION, PER_ORG) TABLESPACE PSMATVW BUILD IMMEDIATE REFRESH
    COMPLETE ON DEMAND AS SELECT A.EMPLID ,A.EMPL_RCD ,A.EFFDT
    ,A.BUSINESS_UNIT ,A.EMPL_STATUS ,A.HR_STATUS ,A.DEPTID ,A.JOBCODE
    ,A.LOCATION ,A.POSITION_NBR ,A.ACTION ,A.ACTION_REASON
    ,A.COMP_FREQUENCY ,A.COMPRATE ,A.CURRENCY_CD ,A.SAL_ADMIN_PLAN
    ,A.GRADE ,A.COMPANY ,A.PAY_SYSTEM_FLG ,A.PAYGROUP ,A.REG_TEMP
    ,A.FULL_PART_TIME ,A.SETID_DEPT ,A.SETID_JOBCODE ,A.SETID_LOCATION
    ,A.PER_ORG FROM PS_JOB A WHERE A.EFFDT = ( SELECT MAX (C.EFFDT) FROM
    PS_JOB C WHERE C.EMPLID = A.EMPLID AND C.EMPL_RCD = A.EMPL_RCD AND
    ((C.EFFDT <= TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD')) OR
    (A.EFFDT > TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD') AND
    TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD') < ( SELECT
    MIN(J2.EFFDT) FROM PS_JOB J2 WHERE J2.EMPLID = A.EMPLID AND
    J2.EMPL_RCD = A.EMPL_RCD) ) )) AND A.EFFSEQ = ( SELECT MAX(D.EFFSEQ)
    FROM PS_JOB D WHERE D.EMPLID = A.EMPLID AND D.EMPL_RCD = A.EMPL_RCD
    AND D.EFFDT = A.EFFDT)
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 1,
    (PTMAT_LASTREFRESH, PTMAT_STALENESS) = (SELECT LAST_REFRESH_DATE,
    STALENESS FROM USER_MVIEWS WHERE MVIEW_NAME = 'PS_DMK_JOB_CUR_MVW')
    WHERE RECNAME = 'DMK_JOB_CUR_MVW'
    /
• Also, it doesn't create a primary key constraint on either the underlying table or the materialized view, so this materialized view doesn't have any indexes.
Query Rewrite
One common use of complex materialized views is to allow the optimizer to rewrite a query to use the materialized view when it sees the same query as was used to create the materialized view.  Optionally, the optimizer will also check that the view is up to date.  I have added the enable query rewrite clause.
    DROP MATERIALIZED VIEW PS_DMK_PER_DEP_MVW
    /
    CREATE MATERIALIZED VIEW PS_DMK_PER_DEP_MVW (SETID_DEPT, DEPTID, EFFDT
    , DESCR) TABLESPACE PSMATVW
    BUILD IMMEDIATE REFRESH COMPLETE ON DEMAND
    enable query rewrite
    AS SELECT A.SETID ,A.DEPTID ,A.EFFDT ,A.DESCR FROM PS_DEPT_TBL A
    WHERE A.EFFDT= ( SELECT MAX(B.EFFDT) FROM PS_DEPT_TBL B WHERE A.SETID
    =B.SETID AND A.DEPTID= B.DEPTID AND B.EFFDT<=TO_DATE(TO_CHAR(SYSDATE
    ,'YYYY-MM-DD'),'YYYY-MM-DD'))
    /
However, expressions - in this case one generated to determine the current effective-dated department - are not supported for query rewrite.
    =B.SETID AND A.DEPTID= B.DEPTID AND B.EFFDT<=TO_DATE(TO_CHAR(SYSDATE
    *
    ERROR at line 7:
    ORA-30353: expression not supported for query rewrite
    This could make it very difficult to use the feature in PeopleSoft. If you want to use the materialized view you are likely to have to reference it explicitly in the code.
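To see exactly why the optimizer refuses to rewrite a particular statement, one option is the Oracle-supplied DBMS_MVIEW.EXPLAIN_REWRITE procedure; a minimal sketch, assuming the REWRITE_TABLE has been created in the current schema with utlxrw.sql (the query text is abbreviated here):

BEGIN
  DBMS_MVIEW.EXPLAIN_REWRITE(
    query => 'SELECT A.SETID, A.DEPTID, A.EFFDT, A.DESCR FROM PS_DEPT_TBL A WHERE ...',
    mv    => 'PS_DMK_PER_DEP_MVW');
END;
/
SELECT message FROM rewrite_table
/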
Refreshing Materialized Views
There is a new component to manage the refresh frequency of materialized views:
PeopleTools -> Utilities -> Administration -> Oracle Materialized Views -> Materialized View Maintenance
This component will schedule an Application Engine process called PTMATREFVW.  What this Application Engine actually does is specify the refresh frequency for the materialized view:
    &AlterSQL = "alter materialized view " | &mview_name 
    | " REFRESH NEXT SYSDATE + (" | &MatRecords.PTMAT_REFINT.Value | "/86400)";
So the command issued is just:
alter materialized view PS_DMK REFRESH NEXT SYSDATE + (4242/86400)
    Effectively, for each materialized view, this creates a refresh group and a database job that refreshes it.
    SELECT rname, next_date, interval FROM user_refresh
    /

RNAME      NEXT_DATE INTERVAL
---------- --------- -------------------------
PS_DMK     24-JAN-15 SYSDATE + (4242/86400)

    SELECT name, type, rname, job, next_date, interval FROM user_refresh_children
    /

NAME       TYPE       RNAME             JOB NEXT_DATE INTERVAL
---------- ---------- ---------- ---------- --------- -------------------------
PS_DMK     SNAPSHOT   PS_DMK             21 24-JAN-15 SYSDATE + (4242/86400)

    SELECT job, next_date, next_Sec, interval, what FROM dba_jobs
    /

JOB        NEXT_DATE NEXT_SEC INTERVAL
---------- --------- -------- -------------------------
WHAT
--------------------------------------------------
        21 24-JAN-15 11:48:52 SYSDATE + (4242/86400)
dbms_refresh.refresh('"SYSADM"."PS_DMK"');
• But I might want to group related materialized views together into a single refresh group (see the sketch after this list).
• I might want to refresh the view at a particular time, which can be done with a more sophisticated function in the interval.
• I might prefer to refresh a materialized view at a particular point in a batch schedule.  So I might prefer to code that into an Application Engine, or have the Application Engine submit a job that fires only once and does not resubmit (depending on whether I want to wait for the refresh).
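Outside PeopleTools you could build such a refresh group yourself with the Oracle-supplied DBMS_REFRESH package; a minimal sketch (the group name is illustrative; the two materialized views are the ones from the examples above):

BEGIN
  DBMS_REFRESH.MAKE(
    name      => 'SYSADM.HR_MVIEW_GROUP',
    list      => 'SYSADM.PS_DMK, SYSADM.PS_DMK_PER_DEP_MVW',
    next_date => TRUNC(SYSDATE)+1+2/24,     -- first refresh at 02:00 tomorrow
    interval  => 'TRUNC(SYSDATE)+1+2/24');  -- then daily at 02:00
END;
/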
    My Recommendations
    • Good Things
  • The removal of the descending indexes and the creation of the primary key is a good thing.
  • It is useful to be able to define the materialized view query in PeopleTools along with the rest of the application.
      • The use of EXPLAIN_MVIEW to test the validity of the materialized view and to populate MV_CAPABILITIES_TABLE is clever, but the messages are obscure and should be better documented.
    • Bad Things
  • There is no check that a source table in the local database has a unique key that will support a primary key.
  • I can't build indexes on a materialized view, although the primary key will be inherited automatically on single-table materialized views.  So you will have to handle any other indexes manually outside PeopleTools.
  • There is no support for rowid-based materialized views.
  • The materialized view logs created by Application Designer are totally overblown - there is far too much data being logged.  They should be either primary key or rowid (if primary key is not possible), but not both.  I cannot see the point of the additional columns.  I think they are a waste of resource.
  • The flip-flopping of the build script is confusing; you will never be completely sure what you have in the database.
  • The script drops the materialized view unnecessarily, which will drop any indexes that you have created manually!
    • I think some of the problems stem from trying to graft materialized views onto the existing view record type, instead of creating a new record type and building it into Application Designer properly and giving it the attributes of both a table and a view.
    • There is not enough control over when a materialized view is refreshed.  Just a time interval is not good enough.  In most systems, you need better control.
    • It is clearly going to be difficult getting database query rewrite to work with complex materialized views in PeopleSoft, especially if effective-date logic is required.  However, that is a reflection on the application code rather than the way support for materialized views has been implemented.
When you design materialized views into your application you need to make careful choices about:
    • Refresh method
      • Complete - pretty much mandatory for multi-table views, but there are some exceptions to this rule described in the Oracle database documentation.
      • Fast - only for single table - rarely used within a single database - more commonly used for moving data between databases.  In which case, you need database links and the materialized view log goes onto the source database and the materialized view is created on the target.
    • Refresh mode
      • On commit - this is potentially dangerous because it could happen too often and there won't be a saving.  Possible exception being when the underlying table is only ever updated by a batch process
      • On demand - manually issue refresh at a specific point in a batch
      • On schedule by a database job.
    • Build options
      • Immediate - the materialized view is populated when it is created
  • Deferred - the materialized view is not populated when it is created, and will have to be completely refreshed at some point in the future.
Choice of refresh method and mode is often a functional question: how stale can the materialized view be allowed to be, especially if you can't get query rewrite to work?  Thoughtful design is required.  I have seen systems severely degraded by the query and redo overhead of excessive refresh.
    PeopleTools support of materialized views certainly has some good things, but as it stands it is of limited use when it still leaves you with a lot of manual administration to do. 

In most systems it is the DBAs who will have to manage the materialized views.  They are generally resistant to using PeopleSoft-specific tools to do database administration.  That is going to be even more challenging when only a part of the job can be done in PeopleSoft.

©David Kurtz, Go-Faster Consultancy Ltd.

    Alert Function

    Denes Kubicek - Wed, 2015-02-18 10:51
In APEX you will probably use the JavaScript alert function in many cases by saying:

    alert ('some message');

    The popup box you get looks a bit strange and outdated. As an alternative to this you may use this function call:

function get_alert(p_message, p_id) {
  $("<div/>", { "html": p_message })
    .attr({ "title": "Error has occurred!" })
    .dialog({
      modal: true,
      buttons: {
        "Go to Error": function () {
          $(this).dialog("close");
          $(p_id).select();
        }
      }
    });
}


As you can see in this example, you can attach additional functions to this alert box in order to do something after the alert message is closed. You can choose to display the alert as a modal window as well.
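A usage example (the message text and the page item selector are hypothetical):

get_alert('Salary must be a number.', '#P1_SALARY');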



    Enjoy.
    Categories: Development

    Can you detect the malware on your machines?

    Chris Foot - Wed, 2015-02-18 08:43

    There's nothing worse than having a nefarious digital worm weaving in and out of your servers, stealing information from enterprise applications and databases under the guise of a legitimate file.

    An open source mentality
Unfortunately, malware is engineered much like open source software is: a community of developers (in this case, hackers) is given access to a machine where a program can be continuously updated and refined.

Some would argue that malware construction is a process capable of accessing more resources, as cybercriminals view such programs as their bread and butter. Sure, OSS developers make money, but the solutions they create aren't their direct sources of income.

    Bottom line: Malware is sophisticated. Thankfully, there are services and monitoring solutions capable of detecting such programs, providing experts with the insight to eradicate it.

    Why is malware so pesky?
There's a reason why malware has been the bane of many operations (see our post about malicious programs that target banks). According to InfoWorld contributor Roger Grimes, one of the reasons why common computer and server surveillance software fails to identify malware is that professional engineers in the hacker community write these viruses to self-update whenever they're detected. Grimes maintained that "no [antivirus] product is 100 percent accurate. No product is going to be superaccurate over the course of a year."

    With this perspective considered, Grimes highly advised systems administrators to ensure their OSs are fully patched. Given the limited resources and time sysadmins have, hiring a team of outside experts may be advisable. Being able to access the expertise of professionals who constantly stay in the know about new Linux and Windows server bugs is invaluable.

    A helpful resource
While not the only antimalware program available for Linux, Maldet is one that is particularly useful for detecting PHP backdoors, darkmailers and a list of other malicious files that can tarnish a website, according to Tecmint. The program can run on a number of Linux distros such as RHEL 7.0, CentOS and Fedora versions 12 through 21. There are plenty of self-help guides that can provide Linux users with the knowledge they need to install and configure Maldet based on their organization's needs. The antimalware solution enables system administrators to take advantage of email alerts, quarantine options, scan options, statistical analysis and monitoring capabilities.

Again, Maldet is just one of many antimalware programs. If you're unsure as to which one is a good fit for your operations, it may be best to consult a group of experts on the matter.

    About RDX
    Since its inception in 1994, RDX has helped hundreds of organizations lower database administration support costs while increasing performance and availability. RDX provides 100 percent US-based, 24×7 support for Oracle, Oracle EBS, SQL Server, PostgreSQL, MySQL, and DB2 databases as well as operating system support for all major Windows, Unix and Linux offerings.

    RDX's highest priority is to safeguard its customers' sensitive data stores, and its expert staff of highly-trained professionals is backed by a monitoring and support infrastructure that has been continuously improved and enhanced throughout its 20 year history.

    The post Can you detect the malware on your machines? appeared first on Remote DBA Experts.

    RHEL 7.1 gains Power8 processor support [VIDEO]

    Chris Foot - Wed, 2015-02-18 08:34

    Transcript

    Hi, welcome to RDX! Red Hat Enterprise Linux 7.1 may still be in beta, but some of its features are definitely worth noting.

    V3 noted that IBM has made the Power8 version of RHEL 7.1 available via its Power Development Platform. Support for Power8 was implemented through the little endian instruction format. System administrators can download 7.1 for testing purposes.

    The shift to little endian mode is essential, because it opens IBM’s Power platform to a wider Linux ecosystem. This move ultimately adds value to IBM’s Power Systems and z Systems mainframe businesses.
    Wondering how this technology could affect your systems? Consult a team of Linux experts to determine whether migration from the x86 architecture to Power Systems is right for you.

    Thanks for watching!

    The post RHEL 7.1 gains Power8 processor support [VIDEO] appeared first on Remote DBA Experts.

    New Community Rewards and Recognition Program: Building Better Content and Engagement

    Joshua Solomin - Wed, 2015-02-18 08:06
Community Rewards and Recognition in 15.1

[Image: The Leaders - click the image for more details]

    From a simple leaderboard written on a whiteboard to the sophisticated stats tracking of Oracle Fusion CRM, we are surrounded daily by "gamification" concepts.

    In competitive games and sports, comparing stats against opponents and peers is all part of the fun. Organized chess play has long had an intricate rankings systems based on match performance. And how many of you are right now slipping in a quick peek at Words With Friends or Clash of Clans? (Tip: don't answer that.)

[Image: Extra helpful in MOSC - click the image to learn more]

Gamification in Business

    "Gamification" has been something of a corporate buzzword for several years now. At its simplest it is a set of management tools designed to encourage employee and customer behaviors that add business value—but do it in a way that feels natural, intuitive, and fun.

It integrates the dynamics of games—scorekeeping, reward feedback, missions and goals—into an existing process or system by motivating member participation, engagement and loyalty.

    Oracle Community - 15.1 Rewards and Recognition Update

    The Oracle Community platform uses a gamification system designed to:

    • Broaden scope of knowledge (breadth and depth)
    • Encourage participation by rewarding users for completing mission-based goals and objectives
    • Recognize users when they add quality content
    • Make it easier for other participants to find and evaluate highly rated content
    The New Program

The 15.1 release enhanced the existing system by adding new user "levels," visual perks, badges, and achievements. It gives participants a more flexible, fun way to share knowledge and work within the community.


Great support communities derive the most value from the contributions of their users. The enhanced Rewards and Recognition program makes it easier to recognize quality contributions and increases the value of the community for all involved.

    If you're an Oracle customer or employee, we highly recommend checking out the new program.

Comments and Feedback

    We'd love to hear from you about the new program!

    If you're an Oracle customer, give us a heads up in the Community discussion thread.

    If you're an Oracle employee, make your voice heard in the MOS Community employee feedback site, with the category: Gamification.


    -The Oracle Community Team

    SQL Server monitoring with nagios: utilisation

    Yann Neuhaus - Wed, 2015-02-18 07:14

    We saw in my last blog how to install the SQL Server plugin for Nagios.
    In this new blog, I will explain:

• how to create a wrapper that takes care of unsetting the PERLLIB variable
• how to create and configure a configuration file per monitored instance
• how to create an SQL login for Nagios
• how to subscribe to an alert

As we saw in the first Nagios blog, to be able to run the check_mssql_health plugin we have to unset the PERLLIB variable and export the library path. We unset the PERLLIB variables because they are set for the Oracle client and there are conflicts: by unsetting the PERLLIB variables used by the check_oracle_health plugin, we force check_mssql_health to use the Perl libraries of the OS instead of those of the Oracle client.
We will do that in a wrapper script:

[Screenshot: the check_mssql_health_wrapped script]
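A minimal sketch of what such a wrapper might contain (the plugin path and library location are assumptions; adjust them to your installation):

#!/bin/bash
# Unset the Perl library variables set for the Oracle client,
# so the plugin picks up the OS Perl libraries instead
unset PERLLIB
unset PERL5LIB
# Make the FreeTDS libraries visible (assumed location)
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
# Call the real plugin, passing all arguments through
exec /usr/local/nagios/libexec/check_mssql_health "$@"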

Afterwards, we are able to test our wrapper successfully:

[Screenshot: successful test of the wrapper]

We have decided to create one configuration file per SQL Server instance to monitor.
This file is as follows:

[Screenshot: instance configuration file]
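A minimal sketch of such a file, based on the settings described below (the host name, IP address, password and contact group are placeholders):

define host {
  use          linux-server
  host_name    sqlsrv1
  address      192.168.1.10
  _mssql_sid   SQLSRV1               ; instance name from freetds.conf
  _mssql_usr   nagios_mssql_health   ; user for the SQL Server connection
}

define command {
  command_name check_mssql_cpu_busy
  command_line /usr/local/nagios/libexec/check_mssql_health_wrapped --server $_HOSTMSSQL_SID$ --username $_HOSTMSSQL_USR$ --password $ARG1$ --mode cpu-busy --warning 80 --critical 90
}

define service {
  use                    generic-service
  host_name              sqlsrv1
  service_description    MSSQL cpu-busy
  check_command          check_mssql_cpu_busy!secret
  normal_check_interval  10
  retry_check_interval   2
  max_check_attempts     3
  contact_groups         dba-group
  notification_interval  360   ; 6 hours
}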

The important things to see here are:

    In the host definition line:

    • mssql_sid with the name of the instance given in the freetds.conf file (see first nagios blog)
    • mssql_usr with the user name used for the SQL Server connection
    • address with the IP Address of the SQL Server host

    In the command line:

    • the call to the check_mssql_health_wrapped file
    • the counter used in this call, here cpu-busy
    • the warning threshold fixed at 80% of cpu used
    • the alert threshold fixed to 90% of cpu used

    In the service line:

• normal_check_interval, which defines the interval between checks of the service under normal conditions; here 10 minutes
• retry_check_interval, which determines the number of minutes to wait before scheduling a re-check when the service has changed to a non-OK state; here 2 minutes
• max_check_attempts: once the service has been retried max_check_attempts times without a change in its status, it reverts to being scheduled at the normal_check_interval rate and a mail is sent to the contact_groups; here 3 times
• contact_groups, which defines the groups that will receive alerts
• notification_interval, which determines how many minutes to wait before re-sending an alert

It means that the cpu-busy counter will be checked every 10 minutes by Nagios; in case of a non-OK state the next check will be scheduled after 2 minutes, and after 3 checks without change an alert will be sent to the contact group and another check will be scheduled 10 minutes later.  If the status stays non-OK, the next alert will be sent 6 hours later.

A lot of counters are available for this plugin; to see the list, please go here.

     

A dedicated SQL Server login can be created to connect to the SQL Server instance and databases, to avoid using the sa user for example.
    First create a windows domain account:

[Screenshot: creating the Windows domain account]

[Screenshot: Windows domain account properties]

Execute the following script in an SSMS query window to create the new login and grant permissions:

USE [master]
GO
CREATE LOGIN [DBITEST\nagios_mssql_health] FROM WINDOWS WITH DEFAULT_DATABASE=[master]
GO
GRANT VIEW SERVER STATE TO [DBITEST\nagios_mssql_health]

USE [model]
GO
CREATE ROLE db_nagios_mssql_health
GRANT EXECUTE TO db_nagios_mssql_health
GRANT VIEW DATABASE STATE TO db_nagios_mssql_health
GRANT VIEW DEFINITION TO db_nagios_mssql_health
CREATE USER [DBITEST\nagios_mssql_health] FOR LOGIN [DBITEST\nagios_mssql_health]
EXEC sp_addrolemember 'db_nagios_mssql_health', [DBITEST\nagios_mssql_health]

Afterwards, execute the following script to grant permissions at the database level. Take care: if a new database is created later, this script has to be run again for the new database.

execute sp_MSforeachdb 'use [?]
print ''?''
USE [?]
CREATE ROLE db_nagios_mssql_health
GRANT EXECUTE TO db_nagios_mssql_health
GRANT VIEW DATABASE STATE TO db_nagios_mssql_health
GRANT VIEW DEFINITION TO db_nagios_mssql_health
CREATE USER [DBITEST\nagios_mssql_health] FOR LOGIN [DBITEST\nagios_mssql_health]
EXEC sp_addrolemember ''db_nagios_mssql_health'', [DBITEST\nagios_mssql_health]'

     

To subscribe to an alert, new contact groups and contacts can be defined.
Those objects will be created in a contact configuration file, which is added to the config.cfg file of Nagios with this line:

    # dbi-services contacts and contactgroups
    cfg_file=/usr/local/nagios/etc/objects/DBI_contacts.cfg

The contact configuration file has the following structure:

[Screenshot: contact configuration file]
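
As above, the screenshot is not reproduced, so here is a hedged sketch of what DBI_contacts.cfg could contain (contact names, email addresses and timeperiod names are placeholders):

define contact {
    contact_name                   dba1
    alias                          First DBA
    email                          dba1@example.com
    host_notification_period       24x7
    service_notification_period    24x7
    host_notification_options      d,r
    service_notification_options   w,c,r
    host_notification_commands     notify-host-by-email
    service_notification_commands  notify-service-by-email
}

define contactgroup {
    contactgroup_name    dba-team
    alias                DBA team
    members              dba1
}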

I hope this information will help you configure the SQL Server plugin for Nagios and give you service status details like in the picture below:

[Screenshot: Nagios service status details for the monitored SQL Server instances]

    See You.

    Detecting Source of MySQL Queries with Comments

    Pythian Group - Tue, 2015-02-17 19:22

    As a MySQL DBA I already know the data changes that happen on my system. I have logs for that.

However, it’s a common problem that several years into the life of an application, the current developers won’t know where in the codebase queries come from. It’s often hard for them to find the location in the code when queries are formed dynamically, because the pattern I show them to optimize doesn’t match anything in the code.

    I stumbled on a trick a couple years ago that has been invaluable in tracking down these problematic queries: query comments.

    Here’s an example:

When a query shows up in the slow query log, it generally looks something like this:

    # Time: 150217 10:26:01
    # User@Host: comments[comments] @ localhost []  Id:    13
    # Query_time: 0.000231  Lock_time: 0.000108 Rows_sent: 3  Rows_examined: 3
    SET timestamp=1424186761;
    select * from cars;
    

    That logging shows me who executed the query (the comments user), the server it was executed from (localhost in this case), and what time it was executed (who, where, when).

What this doesn’t tell me is where in my codebase this query comes from. For example, if I want to change that select * to a more targeted select column1, column2, I may not know where to find it.

    This is where comments help.

    Comments can be any of three styles:

    select * from cars; # comment to end of line here
    select * from cars; -- comment to end of line here
    select * from /* inline comment here */ cars;
    

When you add a comment to your query, the comment is captured in the log along with it. Here’s an example.

    On the command line:

    mysql> select * from cars; # valerie - command line - testing comments
    

    In the slow log:

    # Time: 150217 11:25:24
    # User@Host: comments[comments] @ localhost []  Id:     3
    # Query_time: 0.000246  Lock_time: 0.000119 Rows_sent: 3  Rows_examined: 3
    SET timestamp=1424190324;
    select * from cars # valerie - command line - testing comments;
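
One caveat: depending on its version and configuration, the mysql command-line client strips comments from statements before sending them to the server, so they would never reach the log. If your markers do not show up, start the client with the --comments (-c) option to preserve them:

# preserve comments in statements sent to the server
mysql --comments -u comments -p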
    

    This is especially useful if you are forming queries dynamically and can add the name of the piece of code, function, user, etc. to the comment. Those comments would look something like this in the slow log:

    # Time: 150217 11:32:04
    # User@Host: comments[comments] @ localhost []  Id:     3
    # Query_time: 0.000225  Lock_time: 0.000104 Rows_sent: 3  Rows_examined: 3
    SET timestamp=1424190724;
    select * from cars # userid 7695424 - index.php?search=true - display function;
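
If your server logs slow queries to a table instead of a file, you can search for the markers with plain SQL. A sketch, using the 'index.php' marker from the example above (mysql.slow_log is only populated when log_output includes TABLE):

-- find slow queries tagged with a given comment marker
SELECT start_time, user_host, query_time, sql_text
FROM mysql.slow_log
WHERE sql_text LIKE '%index.php%'
ORDER BY start_time DESC
LIMIT 20;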
    

     

    Categories: DBA Blogs

    Case of monitoring snapshot replication from the subscriber side

    Yann Neuhaus - Tue, 2015-02-17 13:48

I don’t work often with SQL Server replication. The main reason is that the number of customers that use replication is pretty low, so each time I have to deal with it, it is a good opportunity to improve my skills in this area. A couple of months ago I had to face an interesting issue with snapshot replication (yes, I was lucky ... this is not the most complex replication mechanism you have to deal with). My customer had a lot of databases (approximately 250) to replicate between two datacenters on one continent.

The global architecture includes two servers that act as publishers, with a total of 250 articles (one table in one database each), and a central subscriber that gathers these 250 articles (250 databases with one table in each database). All articles are covered by push subscriptions, and all replication jobs run on the publishers. This replication process is part of a wider ETL process which delivers financial metric reports to the business users.

Before computing the financial metrics, we need to ensure that all database replications have completed correctly. Unfortunately, because of the customer's security context, we are stuck: we have no access to the different publishers. So the main question that remains is: how can we ensure, from the subscriber side only, that we don't start the calculation of the financial metrics while database replication processes are still running?

After some reflection, I found a solution, but it is not perfect. I will explain why later in this blog post. My solution includes two things:

1- Recording the last time the concerned table was created. As a reminder, snapshot replication recreates the concerned articles on the subscriber.

2- Tracking the bulk insert activity issued by the snapshot agent before performing the calculation of the financial metrics.

Here is my script:

     

SELECT
    DB_NAME() as database_name,
    tb.name as table_name,
    tb.create_date as last_creation_date,
    txt.text as current_statement,
    txt.program_name as interface,
    txt.request_mode,
    txt.request_owner_type,
    txt.request_status
FROM sys.tables as tb
OUTER APPLY (
    SELECT
        db_name(r.database_id) as database_name,
        t.text,
        s.program_name,
        l.request_mode,
        l.request_status,
        l.request_owner_type
    FROM sys.dm_exec_requests AS r
    JOIN sys.dm_exec_sessions as s
        on r.session_id = s.session_id
    JOIN sys.dm_tran_locks as l
        on l.request_session_id = s.session_id
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE t.text LIKE 'insert bulk %' + tb.name + '%TABLOCK%'
        AND r.session_id <> @@SPID
        AND l.resource_type = 'OBJECT'
        AND l.resource_database_id = r.database_id
        AND l.resource_associated_entity_id = tb.object_id
) AS txt
WHERE tb.name = 'mytable'

     

I finally created a stored procedure that fetches each concerned database and executes the above script in the current context of that database. For this customer I implemented an additional step which sends an email on a half-day basis. Notice also that I added a custom business rule that raises a replication issue if no replication has occurred during the last 6 hours. You can modify and adjust the script at your convenience.

     

SET NOCOUNT ON;

DECLARE @database_name SYSNAME;
DECLARE @sql NVARCHAR(MAX);

-- working table : alert_replication_monitoring
TRUNCATE TABLE msdb.dbo.alert_replication_monitoring;

-- for each concerned database we verify whether the t_replixxx table is updated
-- by snapshot replication
DECLARE C_DB CURSOR FAST_FORWARD FOR
    SELECT name
    FROM sys.databases
    WHERE name LIKE 'repli_%';

OPEN C_DB;

FETCH NEXT FROM C_DB INTO @database_name;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'USE ' + QUOTENAME(@database_name) + '
        IF EXISTS (SELECT 1 FROM sys.tables WHERE name LIKE N''t_repli%'')
        BEGIN
            INSERT INTO msdb.dbo.alert_replication_monitoring
            SELECT
                DB_NAME() as database_name,
                tb.name as table_name,
                tb.create_date as last_creation_date,
                txt.text as current_statement,
                txt.program_name as interface,
                txt.request_mode,
                txt.request_owner_type,
                txt.request_status
            FROM sys.tables as tb
            OUTER APPLY (
                SELECT
                    db_name(r.database_id) as database_name,
                    t.text,
                    s.program_name,
                    l.request_mode,
                    l.request_status,
                    l.request_owner_type
                FROM sys.dm_exec_requests AS r
                JOIN sys.dm_exec_sessions as s
                    on r.session_id = s.session_id
                JOIN sys.dm_tran_locks as l
                    on l.request_session_id = s.session_id
                CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
                WHERE t.text LIKE ''insert bulk %'' + tb.name + ''%TABLOCK%''
                    AND r.session_id <> @@SPID
                    AND l.resource_type = ''OBJECT''
                    AND l.resource_database_id = r.database_id
                    AND l.resource_associated_entity_id = tb.object_id
            ) AS txt
            WHERE tb.name LIKE ''t_repli%''
        END';

    EXEC sp_executesql @sql;

    FETCH NEXT FROM C_DB INTO @database_name;
END

CLOSE C_DB;
DEALLOCATE C_DB;

-- Check status of each database replication
SELECT *
FROM (
    SELECT
        database_name AS [database],
        table_name AS [table],
        last_creation_date AS [last synchro],
        DATEDIFF(MI, last_creation_date, CURRENT_TIMESTAMP) AS [time since the last update (minutes)],
        current_statement,
        CASE
            WHEN DATEDIFF(MI, last_creation_date, CURRENT_TIMESTAMP) <= 360
                 AND current_statement IS NOT NULL THEN '1 - Replication in progress.'
            WHEN DATEDIFF(MI, last_creation_date, CURRENT_TIMESTAMP) <= 360
                 AND current_statement IS NULL THEN '2 - Replication done.'
            WHEN DATEDIFF(MI, last_creation_date, CURRENT_TIMESTAMP) > 360
                 AND current_statement IS NOT NULL THEN '3 - Replication in progress and takes longer than expected (6 hours).'
            ELSE '4 - No replication has occurred during the last six hours.'
        END replication_status
    FROM msdb.dbo.alert_replication_monitoring
) AS repli
ORDER BY [database], replication_status DESC;

     

Here is a picture of the script result while the snapshot agent is running ...

     

[Screenshot: replication status monitor while the snapshot agent is running]

     

... and when there is no activity, where we can still see the last time the table was synchronized.

     

[Screenshot: replication status monitor showing the last synchronization time]

     

As I said earlier, this method has one main caveat:

We can only claim that a replication process is not running at a given time; we cannot know whether the replication process completed correctly or with errors.

    Remember that this is more a workaround than a perfect solution.

    Hope it helps! Please feel free to share your comments about this script!

    APEX Jobs

    Denes Kubicek - Tue, 2015-02-17 11:44
Anyone looking for a job working with Oracle APEX should take a look here:

Joel's Blog

There are also some job offers from Germany and Switzerland.



    Categories: Development

    dramatic differences of in memory scanning performance on range queries

    Karl Reitschuster - Tue, 2015-02-17 11:07
Given the following two identical tables, on which the same SQL is run, replicated in memory with the Oracle In-Memory option (one table created out of the other).
Each table covers 24m rows.
        
    Same structure ...

    Pythian’s Annual MySQL Community Dinner at Pedro’s

    Pythian Group - Tue, 2015-02-17 08:54

    Once again, Pythian is organizing an event that by now may be considered a tradition: The MySQL community dinner at Pedro’s! This dinner is open to all MySQL community members since many of you will be in town for Percona Live that week. Here are the details:

    What: The MySQL Community Dinner

    When: Monday April 13, 2015 –  7:00 PM at Pedro’s (You are welcome to show up later, too!)

    Where: Pedro’s Restaurant and Cantina – 3935 Freedom Circle, Santa Clara, CA 95054

Cost: Tickets are $35 USD, which includes a Mexican buffet, non-alcoholic drinks, taxes, and gratuities (Pedro’s Dinner Menu 2015)

    How: RSVP through Eventbrite

    Attendees:

    Laine Campbell

    Derek Downey

    Gillian Gunson

    Miklos Szel

    Marco Tusa

    Mark Filipi

    Alkin Tezuysal

    Brian Cain

    Brian Kelly

    Joe Muraski

    Patrick Pearson

    Looking forward to seeing you all at the event!

     

    Categories: DBA Blogs

    Database Flashback -- 4

    Hemant K Chitale - Tue, 2015-02-17 08:44
    Continuing my series on Oracle Database Flashback.  This post demonstrates a FLASHBACK DATABASE when a Tablespace is set to  FLASHBACK OFF.

UPDATE 20-Feb-15: When might I have FLASHBACK OFF for a Tablespace in a FLASHBACK ON Database? When the Tablespace contents are "throwaway-able". I can avoid the overheads of Flashback for DML in the Tablespace, and I am confident that on the rare occasion that I have to Flashback the Database, I can discard and rebuild the tablespace and its contents because they are not important for persistency.


    [oracle@localhost Hemant]$ sqlplus '/ as sysdba'

    SQL*Plus: Release 11.2.0.2.0 Production on Tue Feb 17 22:30:53 2015

    Copyright (c) 1982, 2010, Oracle. All rights reserved.


    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options

    SYS>select sysdate from dual;

    SYSDATE
    ---------
    17-FEB-15

    SYS>show parameter db_flashback_retention_target

    NAME TYPE VALUE
    ------------------------------------ ----------- ------------------------------
    db_flashback_retention_target integer 1440
    SYS>select * from v$flashback_database_log;

    OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
    -------------------- --------- ---------------- -------------- ------------------------
    14573520 15-FEB-15 1440 190971904 0

    SYS>

    After my last post on 15/16-Feb (midnight), I had generated some transactions. The Flashback Size has grown from 24.576MB to 190.972MB.

Question: Does Database Flashback always apply to the whole database? Can we exclude a selected tablespace?

    SYS>create tablespace TEST_NOFB;

    Tablespace created.

    SYS>alter tablespace TEST_NOFB flashback OFF;

    Tablespace altered.

    SYS>

So, it is possible to set FLASHBACK OFF for a tablespace! What are the implications? Oracle does NOT save Flashback information for this tablespace. If I need to Flashback the Database, this tablespace or all the datafiles in this tablespace must be taken OFFLINE or DROPped.
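
You can check which tablespaces currently have Flashback disabled by querying V$TABLESPACE, which exposes a FLASHBACK_ON column:

-- list tablespaces and whether flashback data is generated for them
select name, flashback_on
from   v$tablespace
order by name;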

    UPDATE : 18-FEB-15

    Continuing the case.
Note: Subsequent to last night's post I had some more activity on the database, and even a RESTORE and RESETLOGS. The v$flashback_database_log contents are different now.

    [oracle@localhost Hemant]$ sqlplus '/ as sysdba'

    SQL*Plus: Release 11.2.0.2.0 Production on Wed Feb 18 23:27:19 2015

    Copyright (c) 1982, 2010, Oracle. All rights reserved.


    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options

    SYS>select sysdate from dual;

    SYSDATE
    ---------
    18-FEB-15

    SYS>show parameter db_flashback_retention_target

    NAME TYPE VALUE
    ------------------------------------ ----------- ------------------------------
    db_flashback_retention_target integer 1440
    SYS>select * from v$flashback_database_log;

    OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
    -------------------- --------- ---------------- -------------- ------------------------
    14573526 17-FEB-15 1440 62783488 0

    SYS>

Now, let me create a RESTORE POINT and then generate some activity in the TEST_NOFB tablespace.

    SYS>create restore point FEB18_15;

    Restore point created.

    SYS>connect hemant/hemant
    Connected.
    HEMANT>create table test_nofb_tbl tablespace test_nofb
    2 as select * from dba_objects
    3 where 1=2;

    Table created.

    HEMANT>select tablespace_name
    2 from user_tables
    3 where table_name = 'TEST_NOFB_TBL'
    4 /

    TABLESPACE_NAME
    ------------------------------
    TEST_NOFB

    HEMANT>insert into test_nofb_tbl
    2 select * from dba_objects
    3 union all
    4 select * from dba_objects
    5 union all
    6 select * from dba_objects
    7 /

    225138 rows created.

    HEMANT>truncate table test_nofb_tbl;

    Table truncated.

    HEMANT>insert into test_nofb_tbl
    2 select * from dba_objects
    3 union all
    4 select * from dba_objects
    5 union all
    6 select * from dba_objects
    7 /

    225138 rows created.

    HEMANT>commit;

    Commit complete.

    HEMANT>connect / as sysdba
    Connected.
    SYS>select * from v$flashback_database_log;

    OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
    -------------------- --------- ---------------- -------------- ------------------------
    14573526 17-FEB-15 1440 62783488 646397952

    SYS>

    So, I have generated DML activity in the TEST_NOFB Tablespace.

    Let me define another Restore Point and generate more DML.

    SYS>create restore point FEB18_15_2nd;

    Restore point created.

    SYS>connect hemant/hemant
    Connected.
    HEMANT>insert into test_nofb_tbl
    2 select * from dba_objects
    3 union all
    4 select * from dba_objects
    5 union all
    6 select * from dba_objects
    7 /

    225138 rows created.

    HEMANT>commit;

    Commit complete.

    HEMANT>select count(*) from test_nofb_tbl;

    COUNT(*)
    ----------
    450276

    HEMANT>

    I now have two Restore Points and DML against the target table in a FLASHBACK OFF Tablespace between and after the Restore Points.

Let me try to Flashback to the 2nd Restore Point. What should I see? 225138 rows in the table? Or no rows in the table?

    HEMANT>connect / as sysdba
    Connected.
    SYS>shutdown immediate
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SYS>startup mount
    ORACLE instance started.

    Total System Global Area 456146944 bytes
    Fixed Size 1344840 bytes
    Variable Size 394267320 bytes
    Database Buffers 54525952 bytes
    Redo Buffers 6008832 bytes
    Database mounted.
    SYS>
    SYS>flashback database to restore point FEB18_15_2nd;
    flashback database to restore point FEB18_15_2nd
    *
    ERROR at line 1:
    ORA-38753: Cannot flashback data file 6; no flashback log data.
    ORA-01110: data file 6: '/home/oracle/app/oracle/oradata/orcl/ORCL/datafile/o1_mf_test_nof_bg9cq9bv_.dbf'


    SYS>

AAHA! It cannot flashback the datafile because no flashback log exists for it.

Let me quote again what I said yesterday: If I need to Flashback the Database, this tablespace or all the datafiles in this tablespace must be taken OFFLINE or DROPped.

So, I must take the necessary action:

    SYS>alter database datafile 6 offline;

    Database altered.

    SYS>flashback database to restore point FEB18_15_2nd;

    Flashback complete.

    SYS>alter database open read only;

    Database altered.

    SYS>
    SYS>alter database datafile 6 online;
    alter database datafile 6 online
    *
    ERROR at line 1:
    ORA-01113: file 6 needs media recovery
    ORA-01110: data file 6: '/home/oracle/app/oracle/oradata/orcl/ORCL/datafile/o1_mf_test_nof_bg9cq9bv_.dbf'


    SYS>recover datafile 6;
    ORA-00283: recovery session canceled due to errors
    ORA-38798: Cannot perform partial database recovery
    ORA-38797: Full database recovery required after a database has been flashed back


    SYS>

I *cannot* selectively recover that datafile. What options do I have?
In the meantime, let's look at the alert.log.

    Wed Feb 18 23:47:04 2015
    flashback database to restore point FEB18_15_2nd
    ORA-38753 signalled during: flashback database to restore point FEB18_15_2nd...
    Wed Feb 18 23:49:42 2015
    alter database datafile 6 offline
    Completed: alter database datafile 6 offline
    flashback database to restore point FEB18_15_2nd
    Flashback Restore Start
    Flashback Restore Complete
    Flashback Media Recovery Start
    Serial Media Recovery started
    Warning: Datafile 6 (/home/oracle/app/oracle/oradata/orcl/ORCL/datafile/o1_mf_test_nof_bg9cq9bv_.dbf) is offline during full database recovery and will not be recovered
    Recovery of Online Redo Log: Thread 1 Group 3 Seq 3 Reading mem 0
    Mem# 0: /home/oracle/app/oracle/oradata/orcl/redo03.log
    Recovery of Online Redo Log: Thread 1 Group 1 Seq 4 Reading mem 0
    Mem# 0: /home/oracle/app/oracle/oradata/orcl/redo01.log
    Incomplete Recovery applied until change 14577570 time 02/18/2015 23:40:29
    Flashback Media Recovery Complete
    Completed: flashback database to restore point FEB18_15_2nd
    Wed Feb 18 23:50:08 2015
    alter database open read only
    AUDIT_TRAIL initialization parameter is changed to OS, as DB is NOT compatible for database opened with read-only access
    ARCH: STARTING ARCH PROCESSES
    .....
    .....
    Completed: alter database open read only
    Wed Feb 18 23:50:10 2015
    .....
    .....
    Wed Feb 18 23:50:36 2015
    alter database datafile 6 online
    ORA-1113 signalled during: alter database datafile 6 online...
    Wed Feb 18 23:50:36 2015
    Signalling error 1152 for datafile 6!
    Checker run found 2 new persistent data failures
    ALTER DATABASE RECOVER datafile 6
    Media Recovery Start
    Serial Media Recovery started
    Media Recovery failed with error 38798
    ORA-283 signalled during: ALTER DATABASE RECOVER datafile 6 ...

    So, the datafile is unusable.
    I can only drop the tablespace.

    SYS>shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SYS>startup mount;
    ORACLE instance started.

    Total System Global Area 456146944 bytes
    Fixed Size 1344840 bytes
    Variable Size 394267320 bytes
    Database Buffers 54525952 bytes
    Redo Buffers 6008832 bytes
    Database mounted.
    SYS>
    SYS>recover datafile 6;
    ORA-00283: recovery session canceled due to errors
    ORA-38798: Cannot perform partial database recovery
    ORA-38797: Full database recovery required after a database has been flashed back


    SYS>alter database open;
    alter database open
    *
    ERROR at line 1:
    ORA-01589: must use RESETLOGS or NORESETLOGS option for database open


    SYS>alter database open resetlogs;

    Database altered.

    SYS>
    SYS>recover datafile 6;
    ORA-00283: recovery session canceled due to errors
    ORA-19909: datafile 6 belongs to an orphan incarnation
    ORA-01110: data file 6: '/home/oracle/app/oracle/oradata/orcl/ORCL/datafile/o1_mf_test_nof_bg9cq9bv_.dbf'


    SYS>select tablespace_name from dba_data_files
    2 where file_id=6;

    TABLESPACE_NAME
    ------------------------------
    TEST_NOFB

    SYS>drop tablespace test_nofb;
    drop tablespace test_nofb
    *
    ERROR at line 1:
    ORA-01549: tablespace not empty, use INCLUDING CONTENTS option


    SYS>drop tablespace test_nofb including contents and datafiles;

    Tablespace dropped.

    SYS>

There! A Tablespace that was in FLASHBACK OFF mode cannot be accessed after a Flashback Database, not even with respect to data that existed before the Restore Point that is used for the Flashback.
    .
    .
    .

    Categories: DBA Blogs

    Did your data survive Juno? [VIDEO]

    Chris Foot - Tue, 2015-02-17 08:09

    Transcript

    Hi, welcome to RDX! A couple of weeks ago, the Northeastern U.S. was hit by Juno, a blizzard that got us thinking about disaster recovery.

Take insanely cold weather, two feet of snow, and strong winds, and you’ve got yourself the makings of an outage. There’s a chance all those tree limbs hanging over power lines may force you to put a DR plan into action. Ensuring your data center’s generators are protected from the cold is an essential part of keeping systems online.

    But what if the worst occurs? This is where a database and server replication strategy comes into play. GoldenGate, a solution that supports log-based bidirectional data replication, can help you replicate applicable information and migrate it to online servers.

    Thanks for watching! If you want to learn about how RDX can help you develop a replication strategy, check out our services page in the transcript!

    The post Did your data survive Juno? [VIDEO] appeared first on Remote DBA Experts.

    Driving Productivity with Documents Cloud Service Powerful Mobile App

    WebCenter Team - Tue, 2015-02-17 08:04

    Author: Mark Paterson, Director, Oracle Documents Cloud Service Product Management

    More and more users are relying on their mobile smartphones and tablets to continue working outside of the office. We want to help businesses--large and small--achieve flexibility and agility in the way their users can work for maximum productivity and success, anywhere at any time.

    For users who rely on access to content this means giving them the means to browse all the content they have access to, ability to easily search for material, ability to discuss and act on content, and the ability to take content offline directly on their mobile devices.

    Oracle Documents Cloud Service (DOCS) offers mobile users apps for both iOS and Android.


    A Quick Look at Key Mobile Features:

    Oracle Documents Cloud Service allows you to:

    o Easily access, view and manage your files on your mobile device

    o Take content offline, ensuring that you always have access to your most important files

    o Upload pictures, documents and videos to quickly sync across devices and share with others

    o Share folders with other colleagues directly from your mobile device

    o Create public links that can be set to expire on a given date or protected with an access code

    o Reserve files you are editing to avoid conflicts

    o Open, review and edit files from other apps installed on your mobile device

    o Set a Passcode for your application

It is worth noting that all content is encrypted during transmission and when downloaded to your mobile device’s local storage.

Today’s post focuses on maximizing the protection of your enterprise’s content by ensuring you use application-level passcode protection.

    Setting a Passcode

Much like the security codes on our mobile devices, passcodes are four-digit numbers that you set to lock the app when you're not using it.

    If you leave your device unattended and don't return within a specified time, you must enter the passcode. This helps to prevent unauthorized access to the app if your device is left active where others could use it. After five unsuccessful attempts, any content you have stored locally will be wiped from the device.

    On iOS:


    1. Swipe the Browse screen to the right to show the navigation panel.

    2. Tap to access your settings.

    3. Tap Passcode Lock in the Security section.

    4. Tap Turn Passcode On.

    5. Enter the four digits to use as a passcode. Enter the digits again for confirmation.

    6. Tap Require Passcode and choose the amount of time that has to pass before you need to enter the passcode. For example, if you choose After 1 minute, then you leave the app and return after more than one minute, you have to enter the passcode to use the app again.

    On Android:

    1. Tap to open the navigation panel, then tap to access your settings.

    2. In the Security section, tap Enable Passcode.

    3. Enter the four digits to use as a passcode, then re-enter the digits to confirm.

    4. Tap Prompt for Passcode and choose the amount of time that has to pass before you need to enter the passcode.

Oracle Documents Cloud Service thus enables you to securely share and store your files from your mobile device, anywhere and at any time.

    Hope you enjoyed this post. We will continue to explore the myriad of things you can do directly from your mobile device using Oracle Documents Cloud Service. Meanwhile, if there is a particular topic you would be interested in, do let us know.
