Feed aggregator

OAM/WebGate troubleshooting : WebGate on Apache/OHS Unable to read the configuration file

Online Apps DBA - Thu, 2015-05-14 04:15
This post is from one of our customer engagements, where we implemented and now support a complete Oracle Identity & Access Management stack (Contact Us if you are looking for an Oracle Support or Implementation Partner). When you protect a resource with Oracle Access Manager (OAM), you configure a WebGate on the web server (OHS, Apache or IIS) acting as the Policy Enforcement Point (PEP). In OAM […] The post OAM/WebGate troubleshooting : WebGate on Apache/OHS...

This is a content summary only. Visit my website http://onlineAppsDBA.com for full links, other content, and more!
Categories: APPS Blogs

does impdp into a compressed table really compress data?

Yann Neuhaus - Thu, 2015-05-14 00:29

Today at a customer we discussed the following scenario: to refresh a test database, a Data Pump export and import was implemented. To save space on the test system, the idea came up to compress the data there. When we checked the documentation we came across the following statement:

Notes on analytic technology, May 13, 2015

DBMS2 - Wed, 2015-05-13 20:38

1. There are multiple ways in which analytics is inherently modular. For example:

  • Business intelligence tools can reasonably be viewed as application development tools. But the “applications” may be developed one report at a time.
  • The point of a predictive modeling exercise may be to develop a single scoring function that is then integrated into a pre-existing operational application.
  • Conversely, a recommendation-driven website may be developed a few pages — and hence also a few recommendations — at a time.

Also, analytics is inherently iterative.

  • Everything I just called “modular” can reasonably be called “iterative” as well.
  • So can any work process of the nature “OK, we got an insight. Let’s pursue it and get more accuracy.”

If I’m right that analytics is or at least should be modular and iterative, it’s easy to see why people hate multi-year data warehouse creation projects. Perhaps it’s also easy to see why I like the idea of schema-on-need.

2. In 2011, I wrote, in the context of agile predictive analytics, that

… the “business analyst” role should be expanded beyond BI and planning to include lightweight predictive analytics as well.

I gather that a similar point is at the heart of Gartner’s new term citizen data scientist. I am told that the term resonates with at least some enterprises. 

3. Speaking of Gartner, Mark Beyer tweeted

In data management’s future “hybrid” becomes a useless term. Data management is mutable, location agnostic and services oriented.

I replied

And that’s why I launched DBMS2 a decade ago, for “DataBase Management System SERVICES”. :)

A post earlier this year offers a strong clue as to why Mark’s tweet was at least directionally correct: The best structures for writing data are the worst for query, and vice-versa.

4. The foregoing notwithstanding, I continue to believe that there’s a large place in the world for “full-stack” analytics. Of course, some stacks are fuller than others, with SaaS (Software as a Service) offerings probably being the only true complete-stack products.

5. Speaking of full-stack vendors, some of the thoughts in this post were sparked by a recent conversation with Platfora. Platfora, of course, is full-stack except for the Hadoop underneath. They’ve taken to saying “data lake” instead of Hadoop, because they believe:

  • It’s a more benefits-oriented than geek-oriented term.
  • It seems to be more popular than the roughly equivalent terms “data hub” or “data reservoir”.

6. Platfora is coy about metrics, but does boast of high growth, and had >100 employees earlier this year. However, they are refreshingly precise about competition, saying they primarily see four competitors — Tableau, SAS Visual Analytics, Datameer (“sometimes”), and Oracle Data Discovery (who they view as flatteringly imitative of them).

Platfora seems to have a classic BI “land-and-expand” kind of model, with initial installations commonly being a few servers and a few terabytes. Applications cited were the usual suspects — customer analytics, clickstream, and compliance/governance. But they do have some big customer/big database stories as well, including:

  • 100s of terabytes or more (but with a “lens” typically being 5 TB or less).
  • 4-5 customers who pressed them to break a previous cap of 2 billion discrete values.

7. Another full-stack vendor, ScalingData, has been renamed to Rocana, for “root cause analysis”. I’m hearing broader support for their ideas about BI/predictive modeling integration. For example, Platfora has something similar on its roadmap.

Related links

  • I did a kind of analytics overview last month, which had a whole lot of links in it. This post is meant to be additive to that one.
Categories: Other

ORA-07445 [joet_create_root_thread_group()] when running adgendbc.sh

Vikram Das - Wed, 2015-05-13 17:11
Jim pinged me with this error today:
on ./adgendbc.sh i get:

Creating the DBC file...
java.sql.SQLRecoverableException: No more data to read from socket raised validating GUEST_USER_PWD
java.sql.SQLRecoverableException: No more data to read from socket
Updating Server Security Authentication
java.sql.SQLException: Invalid number format for port number
Database connection to jdbc:oracle:thin:@host_name:port_number:database failed

to this point, this is what i've tried: clean, autoconfig on db tier, autoconfig on cm, same results. bounced db and listener.. same thing.. nothing i've done has made a difference.

I noticed that while this error was occurring, the DB alert log was showing:

Wed May 13 18:50:51 2015
Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x8] [PC:0x10A2FFBC8, joet_create_root_thread_group()+136] [flags: 0x0, count: 1]
Errors in file /evnapsd1/admin/diag/rdbms/evnapsd1/evnapsd1/trace/evnapsd1_ora_14528.trc  (incident=1002115):
ORA-07445: exception encountered: core dump [joet_create_root_thread_group()+136] [SIGSEGV] [ADDR:0x8] [PC:0x10A2FFBC8] [Address not mapped to object] []
Incident details in: /evnapsd1/admin/diag/rdbms/evnapsd1/evnapsd1/incident/incdir_1002115/evnapsd1_ora_14528_i1002115.trc
Metalink search revealed this article:
Java Stored Procedure Fails With ORA-03113 And ORA-07445[JOET_CREATE_ROOT_THREAD_GROUP()+145] (Doc ID 1995261.1)
It seems that the post-patch steps for the OJVM PSU patch were not done. We completed the steps given in the note above, and adgendbc.sh completed successfully after that.

1. Set the following init parameters so that JIT and the job queue processes do not start.

If spfile is used:

SQL> alter system set java_jit_enabled = FALSE;
SQL> alter system set "_system_trig_enabled"=FALSE;
SQL> alter system set JOB_QUEUE_PROCESSES=0;

2. Start up the instance in restricted mode and run the post-installation steps.

SQL> startup restrict

3. Run the post-installation steps of the OJVM PSU (Step 3.3.2 from the readme).

Post-installation: the following steps load modified SQL files into the database. For an Oracle RAC environment, perform these steps on only one node.

Install the SQL portion of the patch by running the following commands. For an Oracle RAC environment, reload the packages on one of the nodes.

cd $ORACLE_HOME/sqlpatch/19282015
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> @postinstall.sql

After installing the SQL portion of the patch, some packages could become INVALID. These will get recompiled upon access, or you can run utlrp.sql to get them back into a VALID state.

cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> @utlrp.sql

4. Reset modified init parameters

SQL> alter system set java_jit_enabled = true;
SQL> alter system set "_system_trig_enabled"=TRUE;
SQL> alter system set JOB_QUEUE_PROCESSES=10;
        -- or original JOB_QUEUE_PROCESSES value

5. Restart the instance as normal.

6. Now execute the Java stored procedure.

Ran adgendbc.sh and it worked fine.
Categories: APPS Blogs

Ingest a Single Table from Microsoft SQL Server Data into Hadoop

Pythian Group - Wed, 2015-05-13 15:13
Introduction

This blog describes a best-practice approach to data ingestion from SQL Server into Hadoop. The case scenario is described below:

  • Single table ingestion (no joins)
  • No partitioning
  • Complete data ingestion (trash old and replace new)
  • Data stored in Parquet format
Pre-requisites

This example has been tested using the following versions:

  • Hadoop 2.5.0-cdh5.3.0
  • Hive 0.13.1-cdh5.3.0
  • Sqoop 1.4.5-cdh5.3.0
  • Oozie client build version: 4.0.0-cdh5.3.0
Process Flow Diagram

(process flow diagram image)

Configuration
  • Create the following directory/file structure (one per data ingestion process). For a new ingestion program please adjust the directory/file names as per requirements. Make sure to replace the <table_name> tag with your table name.

<table_name>_ingest
+ hive-<table_name>
    create-schema.hql
+ oozie-properties
    <table_name>.properties
+ oozie-<table_name>-ingest
    + lib
        kite-data-core.jar
        kite-data-mapreduce.jar
        sqljdbc4.jar
    coordinator.xml
    impala_metadata.sh
    workflow.xml
  • The ingestion process is invoked using an oozie workflow. The workflow invokes all steps necessary for data ingestion including pre-processing, ingestion using sqoop and post-processing.
oozie-<table_name>-ingest
This directory stores all files that are required by the oozie workflow engine. These files should be stored in HDFS for proper functioning of oozie.

oozie-properties
This directory stores the <table_name>.properties file. This file stores the oozie variables such as database users, name node details etc. used by the oozie process at runtime.

hive-<table_name>
This directory stores a file called create-schema.hql which contains the schema definition of the HIVE tables. This file needs to be run in HIVE only once.
  • Configure files under oozie-<table_name>-ingest
1.   Download kite-data-core.jar and kite-data-mapreduce.jar files from http://mvnrepository.com/artifact/org.kitesdk
2.  Download sqljdbc4.jar from https://msdn.microsoft.com/en-us/sqlserver/aa937724.aspx

3.  Configure coordinator.xml. Copy and paste the following XML.

<coordinator-app name="<table_name>-ingest-coordinator" frequency="${freq}" start="${startTime}" end="${endTime}" timezone="UTC" xmlns="uri:oozie:coordinator:0.2">
  <action>
    <workflow>
      <app-path>${workflowRoot}/workflow.xml</app-path>
      <configuration>
        <property>
          <name>partition_name</name>
          <value>${coord:formatTime(coord:nominalTime(), 'YYYY-MM-dd')}</value>
        </property>
      </configuration>
    </workflow>
  </action>
</coordinator-app>

4.  Configure workflow.xml. This workflow has three actions:

a) mv-data-to-old – Deletes old data before refreshing new
b) sqoop-ingest-<table_name> – Sqoop action to fetch table from SQL Server
c) invalidate-impala-metadata – Revalidate Impala data after each refresh

Copy and paste the following XML.

<workflow-app name="<table_name>-ingest" xmlns="uri:oozie:workflow:0.2">
  <start to="mv-data-to-old" />
  <action name="mv-data-to-old">
    <fs>
      <delete path='${sqoop_directory}/<table_name>/*.parquet' />
      <delete path='${sqoop_directory}/<table_name>/.metadata' />
    </fs>
    <ok to="sqoop-ingest-<table_name>"/>
    <error to="kill"/>
  </action>
  <action name="sqoop-ingest-<table_name>">
    <sqoop xmlns="uri:oozie:sqoop-action:0.3">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <prepare>
        <delete path="${nameNode}/user/${wf:user()}/_sqoop/*" />
      </prepare>
      <configuration>
        <property>
          <name>mapred.job.queue.name</name>
          <value>${queueName}</value>
        </property>
      </configuration>
      <arg>import</arg>
      <arg>--connect</arg>
      <arg>${db_string}</arg>
      <arg>--table</arg>
      <arg>${db_table}</arg>
      <arg>--columns</arg>
      <arg>${db_columns}</arg>
      <arg>--username</arg>
      <arg>${db_username}</arg>
      <arg>--password</arg>
      <arg>${db_password}</arg>
      <arg>--split-by</arg>
      <arg>${db_table_pk}</arg>
      <arg>--target-dir</arg>
      <arg>${sqoop_directory}/<table_name></arg>
      <arg>--as-parquetfile</arg>
      <arg>--compress</arg>
      <arg>--compression-codec</arg>
      <arg>org.apache.hadoop.io.compress.SnappyCodec</arg>
    </sqoop>
    <ok to="invalidate-impala-metadata"/>
    <error to="kill"/>
  </action>
  <action name="invalidate-impala-metadata">
    <shell xmlns="uri:oozie:shell-action:0.1">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <configuration>
        <property>
          <name>mapred.job.queue.name</name>
          <value>${queueName}</value>
        </property>
      </configuration>
      <exec>${impalaFileName}</exec>
      <file>${impalaFilePath}</file>
    </shell>
    <ok to="fini"/>
    <error to="kill"/>
  </action>
  <kill name="kill">
    <message>Workflow failed with error message ${wf:errorMessage(wf:lastErrorNode())}</message>
  </kill>
  <end name="fini" />
</workflow-app>

5. Configure impala_metadata.sh. This file will execute commands to revalidate impala metadata after each restore. Copy and paste the following data.

#!/bin/bash
export PYTHON_EGG_CACHE=./myeggs
impala-shell -i <hive_server> -q "invalidate metadata <hive_db_name>.<hive_table_name>"
  • Configure files under oozie-properties. Create file oozie.properties with contents as under. Edit the parameters as per requirements.
# Coordinator schedulings
freq=480
startTime=2015-04-28T14:00Z
endTime=2029-03-05T06:00Z

jobTracker=<jobtracker>
nameNode=hdfs://<namenode>
queueName=<queue_name>

rootDir=${nameNode}/user//oozie
workflowRoot=${rootDir}/<table_name>-ingest

oozie.use.system.libpath=true
oozie.coord.application.path=${workflowRoot}/coordinator.xml

# Sqoop settings
sqoop_directory=${nameNode}/data/sqoop

# Hive/Impala Settings
hive_db_name=<hive_db_name>
impalaFileName=impala_metadata.sh
impalaFilePath=/user/oozie/<table_name>-ingest/impala_metadata.sh

# MS SQL Server settings
db_string=jdbc:sqlserver://;databaseName=<sql_server_db_name>
db_username=<sql_server_username>
db_password=<sql_server_password>
db_table=<table_name>
db_columns=<columns>
  • Configure files under hive-<table_name>. Create a new file create-schema.hql with contents as under.
DROP TABLE IF EXISTS <table_name>;
CREATE EXTERNAL TABLE <table_name> (<columns>)
STORED AS PARQUET
LOCATION 'hdfs:///data/sqoop/<table_name>';

Deployment
  • Create new directory in HDFS and copy files
$ hadoop fs -mkdir /user/<user>/oozie/<table_name>-ingest
$ hadoop fs -copyFromLocal <directory>/<table_name>/oozie-<table_name>-ingest/lib /user/<user>/oozie/<table_name>-ingest
$ hadoop fs -copyFromLocal <directory>/<table_name>/oozie-<table_name>-ingest/coordinator.xml /user/<user>/oozie/<table_name>-ingest
$ hadoop fs -copyFromLocal <directory>/<table_name>/oozie-<table_name>-ingest/impala_metadata.sh /user/<user>/oozie/<table_name>-ingest
$ hadoop fs -copyFromLocal <directory>/<table_name>/oozie-<table_name>-ingest/workflow.xml /user/<user>/oozie/<table_name>-ingest
  • Create new directory in HDFS for storing data files
$ hadoop fs -mkdir /user/SA.HadoopPipeline/oozie/<table_name>-ingest
$ hadoop fs -mkdir /data/sqoop/<table_name>
  • Now we are ready to select data in HIVE. Go to URL http://<hive_server>:8888/beeswax/#query.
a. Choose existing database on left or create new.
b. Paste contents of create-schema.hql in Query window and click Execute.
c. You should now have an external table in HIVE pointing to data in hdfs://<namenode>/data/sqoop/<table_name>
  • Create Oozie job
$ oozie job -run -config /home/<user>/<directory>/<table_name>/oozie-properties/oozie.properties

Validation and Error Handling
  • At this point an oozie job should be created. To validate the oozie job creation open URL http://<hue_server>:8888/oozie/list_oozie_coordinators. Expected output as under. In case of error please review the logs for recent runs.
 (screenshot: Oozie coordinators list)
  • To validate the oozie job is running open URL http://<hue_server>:8888/oozie/list_oozie_workflows/ . Expected output as under. In case of error please review the logs for recent runs.
 (screenshot: Oozie workflows list)
  • To validate data in HDFS execute the following command. You should see a file with *.metadata extension and a number of files with *.parquet extension.
$ hadoop fs -ls /data/sqoop/<table_name>/
  • Now we are ready to select data in HIVE or Impala.
    For HIVE go to URL http://<hue_server>:8888/beeswax/#query
    For Impala go to URL http://<hue_server>:8888/impala
    Choose the newly created database on left and execute the following SQL – select * from <hive_table_name> limit 10
    You should see the data from the newly ingested table being returned.
Categories: DBA Blogs

The Ping of Mild Annoyance Attack and other Linux Adventures

The Anti-Kyte - Wed, 2015-05-13 14:46

Sometimes, it’s the simple questions that are the most difficult to answer.
For example, how many votes does it take to get an MP elected to the UK Parliament ?
The answer actually ranges from around 20,000 to several million depending on which party said MP is standing for.
Yes, our singular electoral system has had another outing. As usual, one of the main parties has managed to win a majority of seats despite getting rather less than half of the votes cast ( in this case 37%).

Also, as has become traditional, they have claimed to have “a clear instruction from the British People”.
Whenever I hear this, I can’t help feeling that the “instruction” is something along the lines of “don’t let the door hit you on the way out”.

Offering some respite from the mind-bending mathematics that is a UK General Election, I’ve recently had to ask a couple of – apparently – simple questions with regard to Linux…

How do I list the contents of a zip file on Linux ?

More precisely, how do I do this on the command line ?

Let’s start with a couple of csv files. First, questions.csv :

question number, text
1,How many pings before it gets annoying ?
2,Where am I ?
3,How late is my train ?
4,What's in the zip ?
5,Fancy a game of Patience ?

Now answers.csv :

answer number, answer
1,6
2,Try hostname
3,Somewhere between a bit and very
4,Depends what type of zip
5,No!

Now we add these into a zip archive :

zip wisdom.zip questions.csv answers.csv
  adding: questions.csv (deflated 21%)
  adding: answers.csv (deflated 10%)

If you now want to check the contents of wisdom.zip, rather than finding the appropriate switch for the zip command, you actually need to use unzip….

unzip -l wisdom.zip
Archive:  wisdom.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
      156  04-29-2015 19:21   questions.csv
      109  04-29-2015 19:23   answers.csv
---------                     -------
      265                     2 files

If you want to go further and actually view the contents of one of the files in the zip….

unzip -c wisdom.zip answers.csv
Archive:  wisdom.zip
  inflating: answers.csv             
answer number, answer
1,6
2,Try hostname
3,Somewhere between a bit and very
4,Depends what type of zip
5,No!
The thing about PING

Say you have a script that checks that another server on the network is available, as a prelude to transferring files to it.
On Solaris, it may well do this via the simple expedient of…

ping

Now, whilst ping has been around for decades and is implemented on all major operating systems, the implementations differ in certain subtle ways.
Running it with no arguments on Solaris will simply issue a single ping to check if the target machine is up.
On Windows, it will attempt to send and receive 4 packets and report the round-trip time for each.
On Linux however….

ping localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.032 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.087 ms
64 bytes from localhost (127.0.0.1): icmp_seq=3 ttl=64 time=0.088 ms
64 bytes from localhost (127.0.0.1): icmp_seq=4 ttl=64 time=0.098 ms
64 bytes from localhost (127.0.0.1): icmp_seq=5 ttl=64 time=0.096 ms
64 bytes from localhost (127.0.0.1): icmp_seq=6 ttl=64 time=0.097 ms
64 bytes from localhost (127.0.0.1): icmp_seq=7 ttl=64 time=0.095 ms
64 bytes from localhost (127.0.0.1): icmp_seq=8 ttl=64 time=0.099 ms
64 bytes from localhost (127.0.0.1): icmp_seq=9 ttl=64 time=0.096 ms
64 bytes from localhost (127.0.0.1): icmp_seq=10 ttl=64 time=0.100 ms
64 bytes from localhost (127.0.0.1): icmp_seq=11 ttl=64 time=0.066 ms
^C
--- localhost ping statistics ---
11 packets transmitted, 11 received, 0% packet loss, time 9997ms
rtt min/avg/max/mdev = 0.032/0.086/0.100/0.022 ms

Yep, it’ll just keep going until you cancel it.

If you want to avoid initiating what could be considered a very half-hearted Denial of Service attack on your own server, then it’s worth remembering that you can specify the number of packets that ping will send.
So…

ping -c1 localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.079 ms

--- localhost ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms

…is probably more what you’re after. This will exit with 0 if the target is up, as can be demonstrated using the script below (called you_up.sh)…

#!/bin/sh
ping -c1 localhost >/dev/null
if [ $? -ne 0 ]; then
    echo 'Something has gone horribly wrong'
else
    echo 'All OK'
fi
exit 0

Run this and we get…

sh you_up.sh
All OK

The long-suffering British electorate isn’t getting too much of a break. We now have the prospect of a Referendum on the UK’s EU membership to look forward to. On the plus side, it should be a bit easier to work out which side wins.


Filed under: Linux, Shell Scripting Tagged: ping -c, reading contents of a zip archive, specify number of packets to send using ping, unzip -c, unzip -l, zip

Simple C program for testing disk performance

Bobby Durrett's DBA Blog - Wed, 2015-05-13 13:48

I dug up a simple C program that I wrote years ago to test disk performance.  I hesitated to publish it because it is rough and limited in scope and other more capable tools exist. But, I have made good use of it so why not share it with others?  It takes a file name and the size of the file in megabytes.  It sequentially writes the file in 64 kilobyte chunks.  It opens the file in synchronous mode so it must write the data to disk before returning to the program. It outputs the rate in bytes/second that the program wrote to disk.

Here is a zip of the code: zip

There is no error checking so if you put in an invalid file name you get no message.
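
The zip contains the actual program; purely as an illustration of the approach described above (sequential 64 KB writes through a synchronously-opened file, no error checking), a minimal sketch could look like the following. The structure and names are my own assumptions, not the original source.

/* createfile.c - rough sketch of a sequential write test (illustration only).
 * Usage: ./createfile <filename> <size_in_MB>
 * Writes the file in 64 KB chunks with O_SYNC, so each write must reach disk
 * before the call returns, then prints the write rate in bytes per second. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define CHUNK (64 * 1024)

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s filename megabytes\n", argv[0]);
        return 1;
    }

    long chunks = (atol(argv[2]) * 1024L * 1024L) / CHUNK;

    /* O_SYNC forces synchronous writes; swap in O_DSYNC to compare. */
    int fd = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC | O_SYNC, 0644);

    char *buf = malloc(CHUNK);
    memset(buf, 'x', CHUNK);

    struct timeval start, end;
    gettimeofday(&start, NULL);

    for (long i = 0; i < chunks; i++)
        write(fd, buf, CHUNK);          /* no error checking, as noted above */

    gettimeofday(&end, NULL);
    close(fd);

    double secs = (end.tv_sec - start.tv_sec) +
                  (end.tv_usec - start.tv_usec) / 1000000.0;
    printf("Bytes per second written = %.0f\n", ((double)chunks * CHUNK) / secs);
    free(buf);
    return 0;
}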

Here is how I ran it in my HP-UX and Linux performance comparison tests:

HP-UX:

$ time ./createfile /var/opt/oracle/db01/bobby/test 1024
Bytes per second written = 107374182

real 0m10.36s
user 0m0.01s
sys 0m1.79s

Linux:

$ time ./createfile /oracle/db01/bobby/test 1024
Bytes per second written = 23860929

real 0m45.166s
user 0m0.011s
sys 0m2.472s

It makes me think that my Linux system’s write I/O is slower.  I found a set of arguments to the utility dd that seems to do the same thing on Linux:

$ dd if=/dev/zero bs=65536 count=16384 of=test oflag=dsync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 38.423 s, 27.9 MB/s

But I couldn’t find an option like dsync on the HP-UX version of dd.  In any case, it was nice to have the C code so I could experiment with various options to open().  I used tusc on HP-UX and strace on Linux to trace some activity against the system tablespace.  By grepping for open I found the options Oracle uses:

hp trace

open("/var/opt/oracle/db01/HPDB/dbf/system01.dbf", O_RDWR|0x800|O_DSYNC, 030) = 8

linux trace

open("/oracle/db01/LINUXDB/dbf/system01.dbf", O_RDWR|O_DSYNC) = 8

So, I modified my program to use the O_DSYNC flag and it was the same as using O_SYNC.  But, the point is that having a simple C program lets you change these options to open() directly.

I hope this program will be useful to others as it has to me.

– Bobby

p.s. Similar program for sequentially reading through file, but with 256 K buffers: zip
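
Again, only the zip has the real code; a comparable sketch for the read side, assuming the same style with 256 KB buffers, might be:

/* readfile.c - rough sketch of a sequential read test (illustration only).
 * Usage: ./readfile <filename>
 * Reads the file in 256 KB chunks and prints the read rate in bytes per second. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

#define CHUNK (256 * 1024)

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s filename\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    char *buf = malloc(CHUNK);
    long long total = 0;
    ssize_t n;

    struct timeval start, end;
    gettimeofday(&start, NULL);

    while ((n = read(fd, buf, CHUNK)) > 0)   /* sequential read until EOF */
        total += n;

    gettimeofday(&end, NULL);
    close(fd);

    double secs = (end.tv_sec - start.tv_sec) +
                  (end.tv_usec - start.tv_usec) / 1000000.0;
    printf("Bytes per second read = %.0f\n", total / secs);
    free(buf);
    return 0;
}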

Categories: DBA Blogs

Contribution by Angela Golla

Oracle Infogram - Wed, 2015-05-13 12:54
Contribution by Angela Golla, Infogram Deputy Editor

Managing Millennials - The Shifting Workforce
A significant portion of US workers will be leaving the workforce. At the same time, a new generation is entering the workforce with an entirely different set of expectations. According to a PwC report, millennials already form 25% of the workforce in the US. By 2020, millennials will form 50% of the global workforce. Read Mark Hurd's interesting take on this major shift on LinkedIn.

Four Weeks and a Day with the Jawbone UP24

Oracle AppsLab - Wed, 2015-05-13 12:46

After three weeks with the Nike+ Fuelband and four weeks with the Basis Peak, I moved on to the Jawbone UP24.

The UP24 has been out for quite a while now. Back in January 2014, Noel (@noelportugal) and Luis (@lsgaleana) did a cursory evaluation, and not much has changed in the Jawbone lineup since then.

At least, not until recently when the new hotness arrived, the UP2, UP3 and soon, the UP4, pushing the venerable UP24 into retirement. Honestly, I would have bought one of the new ones (because shiny objects), but they had yet to be released when I embarked on this journey of wearables discovery.

After starting out with a fitness band and moving to a super watch, going back to the comparatively feature-poor UP24 was a bit shocking initially. I had just become accustomed to having the time on my wrist and all that other stuff.

However, what it lacks in features, the UP24 more than makes up for in comfort. Makes sense, fewer features, smaller form factor, but even compared to the other fitness bands I’ve worn (the Fuelband and Misfit Shine), the rubbery industrial design makes it nice to wear.

Aside from comfort, surprisingly, one feature that made the UP24 sticky and enjoyable was the Smart Coach, which I expected to dislike. Jawbone has a very usable mobile app companion that all its devices share, and inevitably, that is what retains users, not the hardware on the wrist.

Overall, despite its relative age, I enjoyed wearing the UP24. I even decided to wear it a bit longer, hence the extra day.

Here are my observations.

The band

Yes, there’s yet another initial software install required to configure the UP24 for use the first time. Yes, that still annoys me, but I get why it’s needed.

As I’ve said, the band is comfortable to wear, mainly because of its flexible, rubber material. Smart Coach reminded me a few times to be gentle with the band, saying something about there being a bunch of electronics packed in there.

I’m not sure if this was a regular reminder or if the band somehow detected that I was being too rough, hoping for the former. The Coach also reminded me that the band isn’t waterproof. While I did get it wet, I wasn’t brave enough to submerge it.

These reminders made me curious about the sensors Jawbone packed inside the UP24, and while looking for a teardown, I found this cool X-ray of the band.

Image from Creative Electron

Impressive industrial design. One minor correction, the audio plug is 2.5 mm, not the standard 3.5 mm, something Noel and Luis found out quickly. From my use, it didn’t really matter, since the UP24 comes with a custom USB-2.5 mm audio adapter for charging.

The UP24 uses a button to set specific modes, like Stopwatch (for exercise) and Sleep. These took a bit of learning, like anything new. I expected to have push-sequence failure, i.e. using the wrong push and hold combination, but no.

Aside from being red, which seemed to fade to orange, the band is unobtrusive. I found myself wearing it upside down to allow for scratch-free typing, a very nice plus.

The fit did seem to loosen over time, probably just the rubber losing some of its elasticity. Not a big deal for a month, but not a good long-term sign.

The battery life was nice, about nine days initially, but the app seems to misrepresent the remaining charge. One night, it reported five days charge left, and overnight, the band died. Same thing happened a week later when the app reported seven days of charge.

Because the UP24 isn’t constantly connected to Bluetooth, to save battery, I guess maybe the charge wasn’t reported accurately. Although when the app opens, the band connects and dumps its data right away.

Bit of a mystery, but happily, I didn’t lose my sleep data, which tells me the band still had some charge. The sleep data it collected on those nights wasn’t as detailed as the other nights. Maybe the band has some intelligence to preserve its battery.

Sleep data from a low battery vs. sleep data from a charged battery.

The UP24 didn’t attract the same amount of curious attention that the Basis Peak did, thank you Apple Watch, but a few people did ask what Fitbit I had, which tells me a lot about their brand recognition.

Is Fitbit the Kleenex of facial tissue? The Reynolds wrap of aluminum foil?

The app and data

Jawbone only provides the data collected by its bands and the Smart Coach through its mobile apps. Their web app only manages account information, which is fine, and bonus, you can download your device data in csv format from the web app.

There are, however, several different Jawbone UP mobile apps, so finding the right one was key.

The app is quite nice, both visually and informationally. I really like the stream approach (vs. a dashboard), and again, Smart Coach is nice. Each day, I checked my sleep data and read the tips provided, and yeah, some were interesting.

The stream is easily understood at a glance, so kudos to the UX. Orange shows activity, purple sleep. There are other things you can add, weight, mood, etc. I did those for a few days, but that didn’t last, too lazy.

Each item in the stream can be tapped for details.

Unlike the Fuelband and the Peak, the UP24 uses very minimal game mechanics. The Smart Coach did congratulate me on specific milestones and encourage me to do more, but beyond that, the entire experience was free from gamified elements.

Did I mention I liked the Smart Coach? Yeah, I did.

In addition to the stream, the UP24 provides historic data as days and aggregated into months and years, which is equally nice and easy to understand.

Jawbone has an integration with IFTTT among many other apps, making its ecosystem attractive to developers. I didn’t find any IFTTT recipes that made sense for me, but I like having the option.

There’s social stuff too, but meh.

Data sync between the band and app was snappy. As I mentioned above, the band isn’t always connected to Bluetooth, or at least, you won’t see it in the Bluetooth settings. Maybe it’s connected but not listed, dunno, but Noel would.

Minor downsides I noticed: sleep tracking is an absolute mystery. The UP24 lists both light and deep sleep, but who knows how it can tell. Not that I really need to know, but looking at its guts above, what combination of sensor data would track that?

Speaking of sensors, nearly every run I completed on a treadmill showed a wide variance, e.g. the treadmill says 3.25 miles, whereas UP24 says 2.45 miles. I tried calibrating the band after each run, but that didn’t seem to help.

I saw the same variance with steps.

Not a big deal to me and definitely a difficult nut to crack, but some people care deeply about the accuracy of these devices, like this guy who filed a lawsuit against Fitbit for overestimating sleep.

What I’m finding through personal experience and stories like that is that these little guys are very personal devices, much more so than a simple watch. I actually felt a little sad to take off my UP24.

I wonder why. Thoughts?

Find the comments.

Instance stats

Jonathan Lewis - Wed, 2015-05-13 12:31

While reading a posting by Martin Bach on a new buffering option for 12c I was prompted to take a look at another of his posts on the instance activity stats, which reminded me that the class column on v$statname is a bit flag, which we can dissect using the bitand() function to pick out the statistics that belong to multiple classes. I’ve got 2 or 3 little scripts that do this; one, for example, picks out all the statistics relating to RAC, another is just a cross-tab of the class values used and their breakdown by class.  Originally this latter script used the “diagonal” method of decode() then sum() – but when the 11g pivot() option appeared I used it as an experiment in pivoting.

This is the script as it now stands, with the output from 12.1.0.2




select
        *
from    (
        select
                st.class,
                pwr.class_id,
                case bitand(st.class, pwr.expn)
                        when 0 then to_number(null)
                               else 1
                end     class_flag
        from
                v$statname      st,
                (select
                        level                   class_id,
                        power(2,level - 1)      expn
                from
                        dual
                connect by level <= 8
                )       pwr
        where
                bitand(class,pwr.expn) = pwr.expn
        )
pivot   (
                sum(class_flag)
        for     class_id in (
                        1 as EndUser,
                        2 as Redo,
                        3 as Enqueue,
                        4 as Cache,
                        5 as OS,
                        6 as RAC,
                        7 as SQL,
                        8 as Debug
                )
        )
order by
        class
;


     CLASS    ENDUSER       REDO    ENQUEUE      CACHE         OS        RAC        SQL      DEBUG
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
         1        130
         2                    68
         4                                9
         8                                         151
        16                                                     16
        32                                                                35
        33          3                                                      3
        34                     1                                           1
        40                                          53                    53
        64                                                                          130
        72                                          15                               15
       128                                                                                     565
       192                                                                            2          2

13 rows selected.

The titles given to the columns come from Martin’s blog, but the definitive set is in the Oracle documentation in the reference manual for v$statname. (I’ve changed the first class from “User” to “EndUser” because of a reserved word problem, and I abbreviated the “RAC” class for tidiness.) It’s interesting to note how many of the RAC statistics are also about the Cache layer.


Ed Tech World on Notice: Miami U disability discrimination lawsuit could have major effect

Michael Feldstein - Wed, 2015-05-13 11:53

By Phil Hill

This week the US Department of Justice, citing Title II of ADA, decided to intervene in a private lawsuit filed against Miami University of Ohio regarding disability discrimination based on ed tech usage. Call this a major escalation and just ask the for-profit industry how big an effect DOJ intervention can be. From the complaint:

Miami University uses technologies in its curricular and co-curricular programs, services, and activities that are inaccessible to qualified individuals with disabilities, including current and former students who have vision, hearing, or learning disabilities. Miami University has failed to make these technologies accessible to such individuals and has otherwise failed to ensure that individuals with disabilities can interact with Miami University’s websites and access course assignments, textbooks, and other curricular and co-curricular materials on an equal basis with non-disabled students. These failures have deprived current and former students and others with disabilities a full and equal opportunity to participate in and benefit from all of Miami University’s educational opportunities.

The complaint then calls out the nature of assistive technologies that should be available, including screen readers, Braille display, audio descriptions, captioning, and keyboard navigation. The complaint specifies that Miami U uses many technologies and content that is incompatible with these assistive technologies.

The complaint is very specific about which platforms and tools are incompatible:

  • The main website www.miamioh.edu
  • Vimeo and YouTube
  • Google Docs
  • TurnItIn
  • LearnSmart
  • WebAssign
  • MyStatLab
  • Vista Higher Learning
  • Sapling

Update: It is worth noting the usage of the phrase “as implemented by Miami University” in most of these examples.

Despite the complaint listing the last 6 examples as LMSs, it is notable that the complaint does not call out the school’s previous LMS (Sakai) nor its current LMS (Canvas). Canvas was selected last year to replace Sakai, and I believe both are in use. Does this mean that Sakai and Canvas pass ADA muster? That’s my guess, but I’m not 100% sure.

The complaint is also quite specific about the Miami U services that are at fault. For example:

When Miami University has converted physical books and documents into digital formats for students who require such conversion because of their disabilities, it has repeatedly failed to do so in a timely manner. And Miami University has repeatedly provided these students with digitally-converted materials that are inaccessible when used with assistive technologies. This has made the books and documents either completely unusable, or very difficult to use, for the students with these disabilities.

Miami University has a policy or practice by which it converts physical texts and documents into electronic formats only if students can prove they purchased (rather than borrowed) the physical texts or documents. Miami University will not convert into digital formats any physical texts or documents from its library collections and it will not seek to obtain from other libraries existing copies of digitally-converted materials. This has rendered many of the materials that Miami University provides throughout its library system and which it makes available to its students unavailable to students who require that materials be converted into digital formats because of a disability.

The complaint also specifies the required use of clickers and content within PowerPoint.

This one seems to be a very big deal by nature of the DOJ intervention and the specifics of multiple technologies and services.

Thanks to Jim Julius for alerting me on this one.

.@PhilOnEdTech have you seen the Miami of Ohio accessibility complaint? This is going to generate shock waves. http://t.co/STA6Rw6nrR

— Jim Julius (@jjulius) May 13, 2015

The post Ed Tech World on Notice: Miami U disability discrimination lawsuit could have major effect appeared first on e-Literate.

Matching SQL Plan Directives and extended stats

Yann Neuhaus - Wed, 2015-05-13 11:33

This year is the year of migration to 12c. Each Oracle version has had its CBO feature that makes it challenging. The most famous was bind variable peeking in 9iR2. Cardinality feedback in 11g also came with some surprises. 12c comes with SPD in any edition, accompanied by Adaptive Dynamic Sampling. If you want to know more about them, the next date is in Switzerland: http://www.soug.ch/events/sig-150521-agenda.html

SQL Plan Directives in USABLE/MISSING_STATS state can create column groups and extended stats on them at the next dbms_stats gathering. When the next usage of the SPD validates that static statistics are sufficient to get good cardinality estimates, the SPD goes into the SUPERSEDED/HAS_STATS state. If an execution still sees misestimates on them, the state goes to SUPERSEDED/PERMANENT and dynamic sampling will be used forever. Note that a disabled SPD can still trigger the creation of extended statistics, but not dynamic sampling.

Query

If you want to match the directives (from SQL_PLAN_DIRECTIVES) with the extended statistics (from DBA_STATS_EXTENSION) there is no direct link. Both list the columns, but not in the same order and not in the same format:

SQL> select extract(notes,'/spd_note/spd_text/text()').getStringVal() from dba_sql_plan_directives where directive_id in ('11620983915867293627','16006171197187894917');

EXTRACT(NOTES,'/SPD_NOTE/SPD_TEXT/TEXT()').GETSTRINGVAL()
--------------------------------------------------------------------------------
{ECJ(STOPSYS.EDGE)[CHILDID, CHILDTYPE, EDGETYPE]}
{EC(STOPSYS.EDGE)[CHILDID, CHILDTYPE, EDGETYPE]}

Those SPDs have been responsible for the creation of the following column groups:
SQL> select owner,table_name,extension from dba_stat_extensions where extension_name='SYS_STSDXN5VXXKAWUPN9AEO8$$W$J';

OWNER    TABLE_NA EXTENSION
-------- -------- ------------------------------------------------------------
STOPSYS  EDGE     ("CHILDTYPE","CHILDID","EDGETYPE")

So I've made the following query to match both:

SQL> column owner format a8
SQL> column table_name format a30
SQL> column columns format a40 trunc
SQL> column extension_name format a20
SQL> column internal_state format a9
SQL>
SQL> select * from (
    select owner,table_name,listagg(column_name,',')within group(order by column_name) columns
     , extension_name
    from dba_tab_columns join dba_stat_extensions using(owner,table_name)
    where extension like '%"'||column_name||'"%'
    group by owner,table_name,extension_name
    order by owner,table_name,columns
    ) full outer join (
    select owner,object_name table_name,listagg(subobject_name,',')within group(order by subobject_name) columns
     , directive_id,max(extract(dba_sql_plan_directives.notes,'/spd_note/internal_state/text()').getStringVal()) internal_state
    from dba_sql_plan_dir_objects join dba_sql_plan_directives using(directive_id)
    where object_type='COLUMN' and directive_id in (
        select directive_id
        from dba_sql_plan_dir_objects
        where extract(notes,'/obj_note/equality_predicates_only/text()').getStringVal()='YES'
          and extract(notes,'/obj_note/simple_column_predicates_only/text()').getStringVal()='YES'
        and object_type='TABLE'
    )
    group by owner,object_name,directive_id
    ) using (owner,table_name,columns)
   order by owner,table_name,columns
  ;
This is just a first draft; I'll probably improve it when needed, and your comments on this blog post will help.

Example

Here is an example of the output:

OWNER  TABLE_NAME                COLUMNS             EXTENSION_ DIRECTIVE_ID INTERNAL_
------ ------------------------- ------------------- ---------- ------------ ---------
STE1SY AUTOMANAGE_STATS          TYPE                             1.7943E+18 NEW
STE1SY CHANGELOG                 NODEID,NODETYPE                  2.2440E+18 PERMANENT
...
SYS    AUX_STATS$                SNAME                            9.2865E+17 HAS_STATS
SYS    CDEF$                     OBJ#                             1.7472E+19 HAS_STATS
SYS    COL$                      NAME                             5.6834E+18 HAS_STATS
SYS    DBFS$_MOUNTS              S_MOUNT,S_OWNER     SYS_NC0000
SYS    ICOL$                     OBJ#                             6.1931E+18 HAS_STATS
SYS    METANAMETRANS$            NAME                             1.4285E+19 MISSING_S
SYS    OBJ$                      NAME,SPARE3                      1.4696E+19 NEW
SYS    OBJ$                      OBJ#                             1.6336E+19 HAS_STATS
SYS    OBJ$                      OWNER#                           6.3211E+18 PERMANENT
SYS    OBJ$                      TYPE#                            1.5774E+19 PERMANENT
SYS    PROFILE$                  PROFILE#                         1.7989E+19 HAS_STATS
SYS    SCHEDULER$_JOB            JOB_STATUS          SYS_NC0006
SYS    SCHEDULER$_JOB            NEXT_RUN_DATE       SYS_NC0005
SYS    SCHEDULER$_WINDOW         NEXT_START_DATE     SYS_NC0002
SYS    SYN$                      OBJ#                             1.4900E+19 HAS_STATS
SYS    SYN$                      OWNER                            1.5782E+18 HAS_STATS
SYS    SYSAUTH$                  GRANTEE#                         8.1545E+18 PERMANENT
SYS    TRIGGER$                  BASEOBJECT                       6.0759E+18 HAS_STATS
SYS    USER$                     NAME                             1.1100E+19 HAS_STATS
SYS    WRI$_ADV_EXECUTIONS       TASK_ID                          1.5494E+18 PERMANENT
SYS    WRI$_ADV_FINDINGS         TYPE                             1.4982E+19 HAS_STATS
SYS    WRI$_OPTSTAT_AUX_HISTORY  SAVTIME             SYS_NC0001
SYS    WRI$_OPTSTAT_HISTGRM_HIST SAVTIME             SYS_NC0001

Conclusion

Because SPD are quite new, I'll conclude with a list of questions:

  • Do you still need extended stats when a SPD is in PERMANENT state?
  • Do you send to developers the list of extended stats for which SPD is in HAS_STATS, so that they integrate them in their data model? Then, do you drop the SPD when new version is released or wait for retention?
  • When you disable a SPD and an extended statistic is created, do you re-enable the SPD in order to have it in HAS_STAT?
  • Having too many extended statistics has an overhead during statistics gathering (especially when having histograms on them). But it helps to have better estimations. Do you think that having a lot of HAS_STATS is a good thing or not?
  • Having too many usable (MISSING_STATS or PERMANENT) SPDs has an overhead during optimization (dynamic sampling). But it helps to have better estimations. Do you think that having a lot of PERMANENT is a good thing or not?
  • Do you think that only bad data models have a lot of SPD? Then why SYS (the oldest data model optimized at each release) is the schema with most SPD?
  • Do you keep your SQL Profiles when upgrading, or do you think that SPD can replace most of them?

Don't ignore them. SQL Plan Directives are a great feature, but you have to manage them.

Monitoring BRM Host Processes using Metric Extension in EM12c

Arun Bavera - Wed, 2015-05-13 10:01

#!/bin/sh
# Count the running BRM processes owned by $CURRENT_USER and print "name | count" pairs.
export CURRENT_USER=brm
#echo 'PROCESS_NAME'  'COUNT'
for p in dm_oracle cm dm_aq dm_ifw_sync wirelessRealtime.reg
do
CNT=`ps -ef | grep ${CURRENT_USER} | grep ${p} | grep -v grep | wc -l`
echo ${p} '|' ${CNT}
done

Categories: Development

Using VSS snapshots with SQL Server - part I

Yann Neuhaus - Wed, 2015-05-13 09:55

 

This is probably the first of a series of blog posts about some thoughts concerning VSS snapshots with database servers. Let’s begin with this first story:

Some time ago, I implemented a backup strategy at one of my customers based on FULL / DIFF and log backups. There were no issues for a long time, but one day my customer called to tell me that for some days the differential backup had not been working anymore, failing with the following error message:

 

Msg 3035, Level 16, State 1, Line 1
Cannot perform a differential backup for database "demo", because a current database backup does not exist. Perform a full database backup by reissuing BACKUP DATABASE, omitting the WITH DIFFERENTIAL option.
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.

 

After looking at the SQL Server error log message I was able to find out some characteristic entries:

 

I/O is frozen on database demo. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.

...

I/O was resumed on database demo. No user action is required.

 

Just in case: have you implemented snapshots of your database server? And effectively the problem came from the implementation of the Veeam backup software for bare-metal recovery purposes. In fact, after checking the Veeam backup software user guide, I noticed that my customer had forgotten to switch the transaction log option to the "perform backup only" value of the application-aware image processing method.

This is a little detail that makes the difference here. Indeed, in this case the Veeam backup software relies on the VSS framework, and using the process-transaction-logs option doesn’t preserve the chain of full/differential backup files and transaction logs. For those who like internals, you can interact with the VSS writers by specifying some options during the initialization of the backup dialog. The requestor may configure the VSS_BACKUP_TYPE option by using the IVssBackupComponents interface and the SetBackupState method.

In this case, configuring "perform backup only" means that the Veeam backup software will tell the SQL writer to use the VSS_BT_COPY option rather than VSS_BT_FULL, in order to preserve the log chain of the databases. There are probably other tools that work in the same way, so you will have to check out each related user guide.

Let’s demonstrate the kind of issue you may face in this case.

First let’s perform a full database backup as follows:

 

BACKUP DATABASE demo TO DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\demo.bak' WITH INIT, STATS = 10;

 

Next, let’s take a snapshot. If you take a look at the SQL Server error log you will find the related entries that concern the I/O freeze and I/O resume operations for your databases.

Moreover, there is another way to retrieve snapshot events. Let’s have a look at the msdb.dbo.backupset table. You can identify a snapshot by referring to the is_snapshot column value.

 

USE msdb;
GO

SELECT
    backup_start_date,
    backup_finish_date,
    database_name,
    backup_set_id,
    type,
    database_backup_lsn,
    differential_base_lsn,
    checkpoint_lsn,
    first_lsn,
    last_lsn,
    is_snapshot
FROM msdb.dbo.backupset
WHERE database_name = 'demo'
ORDER BY backup_start_date DESC;

 

(screenshot: backup history output)

 

… and this time the differential backup failed with the following error message:

 

Msg 3035, Level 16, State 1, Line 1
Cannot perform a differential backup for database "demo", because a current database backup does not exist. Perform a full database backup by reissuing BACKUP DATABASE, omitting the WITH DIFFERENTIAL option.
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.

 

In fact, the differential database backup relies on the last full database backup (the one with the most recent database_backup_lsn value), which in this case is the snapshot and not a valid base backup.

Probably the best advice I can give here is to double-check for potential conflicts between your existing backup processes and additional mechanisms like VSS snapshots. The good thing is that one of my other customers that uses the Veeam backup software was aware of this potential issue, but we had to deal with another interesting issue. I will discuss it in the next blog post dedicated to VSS snapshots.

PeopleTools CPU analysis and supported versions of PeopleTools (update for April 2015 CPU)

PeopleSoft Technology Blog - Wed, 2015-05-13 09:30

Questions often arise on the PeopleTools versions for which Critical Patch Updates have been published, or if a particular PeopleTools version is supported. 

The attached page shows the patch number for PeopleTools versions associated with a particular CPU publication. This information will help you decide which CPU to apply and when to consider upgrading to a more current release.

The link in "CPU Date" goes to the landing page for CPU advisories; the link in the individual date, e.g. Apr-10, goes to the advisory for that date.

The page also shows the CVEs addressed in the CPU, a synopsis of the issue, and the Common Vulnerability Scoring System (CVSS) value.

To find more details on any CVE, simply replace the CVE number in the sample URL below.

http://www.cvedetails.com/cve/CVE-2010-2377

Common Vulnerability Scoring System Version 2 Calculator

http://nvd.nist.gov/cvss.cfm?calculator&adv&version=2

This page shows the components of the CVSS score

Example CVSS response policy http://www.first.org/_assets/cvss/cvss-based-patch-policy.pdf

All the details in this page are available on My Oracle Support and public sites.

The RED column indicates the last patch for a PeopleTools version and effectively the last support date for that version.

Applications Unlimited support does NOT apply to PeopleTools versions.

Expand swap using SSM

Darwin IT - Wed, 2015-05-13 06:24
The sole reason I dug into SSM yeasterday was that I wanted to install Oracle Database 12c.

(Did you know 'yesterday' came from the word 'yeast'? So actually 'yeasterday': one used the yeast of the day before to bake the bread of today. In Dutch, too, the word for yeast, 'gist', still sounds in the word for yesterday, 'gisteren'.)

However, I ran into the prerequisite check on swap space: it was only 2GB because of my default OL7 install, while the Universal Installer required at least 8GB. So I needed to expand it. There are several ways to do that, but since I was already into SSM, it was good practice to use it, and it turns out to be very simple. It also shows how easy it is to add a new device to a pool and to an existing volume.
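As an aside, one of those other ways, which I did not use here, is a plain swap file. A minimal sketch on OL7 (the file location and size are just examples) would be:

[root@darlin-vce-db ~]# dd if=/dev/zero of=/swapfile bs=1M count=8192   # create an 8GB file (dd is the safe way to allocate a swap file)
[root@darlin-vce-db ~]# chmod 600 /swapfile
[root@darlin-vce-db ~]# mkswap /swapfile
[root@darlin-vce-db ~]# swapon /swapfile
[root@darlin-vce-db ~]# echo '/swapfile swap swap defaults 0 0' >> /etc/fstab

But since the goal was to play with SSM, let's grow the existing swap volume instead.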

So I added a new 8GB disk to my VM (I only need 8GB of swap, but I thought I'd simply add it to the existing 2GB, to be certain of having enough with 10GB in total).


So, after booting up, verify the existence of the unassigned device (/dev/sdc):
[root@darlin-vce-db ~]# ssm list
--------------------------------------------------------------
Device Free Used Total Pool Mount point
--------------------------------------------------------------
/dev/sda 20.00 GB PARTITIONED
/dev/sda1 500.00 MB /boot
/dev/sda2 40.00 MB 19.47 GB 19.51 GB ol
/dev/sdb 0.00 KB 100.00 GB 100.00 GB pool01
/dev/sdc 8.00 GB
--------------------------------------------------------------
-----------------------------------------------------
Pool Type Devices Free Used Total
-----------------------------------------------------
ol lvm 1 40.00 MB 19.47 GB 19.51 GB
pool01 lvm 1 0.00 KB 100.00 GB 100.00 GB
-----------------------------------------------------
---------------------------------------------------------------------------------------
Volume Pool Volume size FS FS size Free Type Mount point
---------------------------------------------------------------------------------------
/dev/ol/root ol 17.47 GB xfs 17.46 GB 12.29 GB linear /
/dev/ol/swap ol 2.00 GB linear
/dev/pool01/disk01 pool01 100.00 GB xfs 99.95 GB 99.95 GB linear /u01
/dev/sda1 500.00 MB xfs 496.67 MB 305.97 MB part /boot
---------------------------------------------------------------------------------------

Then add the device to the 'ol'-pool:
[root@darlin-vce-db ~]# ssm add -p ol /dev/sdc
Physical volume "/dev/sdc" successfully created
Volume group "ol" successfully extended

And verify again:
[root@darlin-vce-db ~]# ssm list
--------------------------------------------------------------
Device Free Used Total Pool Mount point
--------------------------------------------------------------
/dev/sda 20.00 GB PARTITIONED
/dev/sda1 500.00 MB /boot
/dev/sda2 40.00 MB 19.47 GB 19.51 GB ol
/dev/sdb 0.00 KB 100.00 GB 100.00 GB pool01
/dev/sdc 8.00 GB 0.00 KB 8.00 GB ol
--------------------------------------------------------------
----------------------------------------------------
Pool Type Devices Free Used Total
----------------------------------------------------
ol lvm 2 8.04 GB 19.47 GB 27.50 GB
pool01 lvm 1 0.00 KB 100.00 GB 100.00 GB
----------------------------------------------------
---------------------------------------------------------------------------------------
Volume Pool Volume size FS FS size Free Type Mount point
---------------------------------------------------------------------------------------
/dev/ol/root ol 17.47 GB xfs 17.46 GB 12.29 GB linear /
/dev/ol/swap ol 2.00 GB linear
/dev/pool01/disk01 pool01 100.00 GB xfs 99.95 GB 99.95 GB linear /u01
/dev/sda1 500.00 MB xfs 496.67 MB 305.97 MB part /boot
---------------------------------------------------------------------------------------

Now resize the swap volume:
[root@darlin-vce-db ~]# ssm resize -s+8GB /dev/ol/swap
Size of logical volume ol/swap changed from 2.00 GiB (512 extents) to 10.00 GiB (2560 extents).
Logical volume swap successfully resized

And, again, verify:
[root@darlin-vce-db ~]# ssm list
--------------------------------------------------------------
Device Free Used Total Pool Mount point
--------------------------------------------------------------
/dev/sda 20.00 GB PARTITIONED
/dev/sda1 500.00 MB /boot
/dev/sda2 0.00 KB 19.51 GB 19.51 GB ol
/dev/sdb 0.00 KB 100.00 GB 100.00 GB pool01
/dev/sdc 36.00 MB 7.96 GB 8.00 GB ol
--------------------------------------------------------------
-----------------------------------------------------
Pool Type Devices Free Used Total
-----------------------------------------------------
ol lvm 2 36.00 MB 27.47 GB 27.50 GB
pool01 lvm 1 0.00 KB 100.00 GB 100.00 GB
-----------------------------------------------------
---------------------------------------------------------------------------------------
Volume Pool Volume size FS FS size Free Type Mount point
---------------------------------------------------------------------------------------
/dev/ol/root ol 17.47 GB xfs 17.46 GB 12.29 GB linear /
/dev/ol/swap ol 10.00 GB linear
/dev/pool01/disk01 pool01 100.00 GB xfs 99.95 GB 99.95 GB linear /u01
/dev/sda1 500.00 MB xfs 496.67 MB 305.97 MB part /boot
---------------------------------------------------------------------------------------

Now check the swap space:
[root@darlin-vce-db ~]# swapon -s
Filename Type Size Used Priority
/dev/dm-1 partition 2097148 0 -1

Hey, it's still 2GB! Resizing the logical volume only grows the underlying block device; the swap area inside it still has to be recreated at the new size.

Let's check fstab to get the swap mount-definitions: 
[root@darlin-vce-db ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon May 11 20:20:14 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/ol-root / xfs defaults 0 0
UUID=7a285d9f-1812-4d72-9bd2-12e50eddc855 /boot xfs defaults 0 0
/dev/mapper/ol-swap swap swap defaults 0 0
/dev/mapper/pool01-disk01 /u01 xfs defaults 0 0


Turn off swap:
[root@darlin-vce-db ~]# swapoff /dev/mapper/ol-swap

And (re-)create new swap:
[root@darlin-vce-db ~]# mkswap -c /dev/mapper/ol-swap
0 bad pages
mkswap: /dev/mapper/ol-swap: warning: wiping old swap signature.
Setting up swapspace version 1, size = 10485756 KiB
no label, UUID=843463de-7552-4a73-84a6-761f261d9e9f

Then enable swap again (note that /etc/fstab references /dev/mapper/ol-swap rather than a UUID, so the new UUID generated by mkswap does not require an fstab change):
[root@darlin-vce-db ~]# swapon /dev/mapper/ol-swap

And check swap again:
[root@darlin-vce-db ~]# swapon -s
Filename Type Size Used Priority
/dev/dm-1 partition 10485756 0 -1

Yes!!! That did the job. Easy does it...

APEX 5.0: Upgrade to the newest FontAwesome Icon Library

Patrick Wolf - Wed, 2015-05-13 03:39
Oracle APEX 5.0 ships with FontAwesome version 4.2.0 which will automatically be loaded if your application is using the Universal Theme. This makes it super easy to add nice looking icons to your buttons, lists and regions. But how can you integrate the most … Continue reading →
Categories: Development

News SharePoint 2016: new alternative to InfoPath Forms

Yann Neuhaus - Wed, 2015-05-13 01:26

Microsoft announced in January 2015 that it was the END OF INFOPATH and that the 2013 version would be the last one. However, Microsoft has since confirmed that the InfoPath 2013 client will work with SharePoint Server 2016.
Given new user needs, Microsoft decided InfoPath was no longer suited for the job; that is why it won't release a new version but will instead propose alternatives.

 

 

 

What is InfoPath Forms?

InfoPath is used to create forms to capture information and save the contents as a file on a PC or on a web server when hosted on SharePoint. InfoPath forms can submit to SharePoint lists and libraries, and submitted instances can be opened from SharePoint using InfoPath Filler or third-party products.

 

 

InfoPath provides several capabilities:

  • Rules
  • Data validation
  • Conditional Formatting
  • XPath Expression and Functions
  • Connection to external Datasources: SQL, Access, SharePoint
  • Coding languages: C#, Visual Basic, Jscript, HTML
  • User Roles
InfoPath History

Microsoft InfoPath is an application for designing, distributing, filling and submitting electronic forms containing structured data.
Microsoft initially released InfoPath as part of Microsoft Office 2003 family.

  • InfoPath 2003 - included in Microsoft Office 2003 Professional and Professional Enterprise - released November 19, 2003
  • InfoPath 2007 - included in Microsoft Office 2007 Ultimate, Professional Plus and Enterprise - released January 27, 2007
  • InfoPath 2010 - included in Microsoft Office 2010 Professional Plus; Office 365 - released July 15, 2010
  • InfoPath 2013 - included in Microsoft Office 2013 Professional Plus; Office 365 - released January 29, 2013

 

In other words, an InfoPath form helps you define design, rules, data, connections, and so on.

What will happen with SharePoint 2016? Which alternatives?

Because of users' new expectations about their needs: design, deployment, and intelligence, all integrated across servers, services and clients.
Microsoft would like to offer tools available on mobiles, tablets and PCs, driven by SharePoint Online, Windows 8 (soon 10), Windows Phone and Office 365.

 

 

Solutions:

Customized forms in SharePoint using a .NET language: straightforward with Visual Studio, but a developer or SharePoint developer is needed.

Nintex Forms: users can easily build custom forms and publish them to a SharePoint environment.

 

What is Nintex Forms

Nintex Forms is a web-based designer that enables forms to be created within SharePoint quickly and easily. Forms can then be consumed on most common mobile devices over the internet, anywhere and at any time. Nintex Forms integrates seamlessly with Nintex Workflow to automate business processes and deliver rich SharePoint applications.

Learn more about nintex: http://www.nintex.com/

CONCLUSION

Let’s see what will be announced, but I think Nintex will establish itself as a great alternative to InfoPath:

  • No specific knowledge (HTML or JavaScript) is needed to build forms
  • No client application needed
  • Nintex is completely web-based
  • Works on mobile devices


Using JVMD with Oracle Utilities Applications - Part 1 Online

Anthony Shorten - Tue, 2015-05-12 17:44

One of the major advantages of the Oracle WebLogic Server Management Pack Enterprise Edition is the JVM Diagnostics (JVMD) engine. This tool allows Java internals from JVMs to be sent to Oracle Enterprise Manager for analysis. It has a lot of advantages:

  • It provides class-level diagnostics for all executed classes, including base and custom classes.
  • It provides end-to-end diagnostics when the engine is deployed with both the application and the database.
  • It has minimal impact on performance, as the engine uses the JVM monitoring APIs in memory.

It is possible to use JVMD with Oracle Utilities Application Framework in a number of ways:

  • It is possible to deploy JVMD agent to the WebLogic servers used for the Online and Web Services tiers.
  • It is possible to deploy the JVMD database agent to the database to capture the code execution against the database.
  • It is possible to use standalone JVMD agent within threadpoolworkers to gather diagnostics for batch.

This article will outline the general process for deploying JVMD on the online servers. The other techniques will be discussed in future articles.

The architecture of JVMD can be summarized as follows:

  • JVMD Manager - A coordination and collection node that collates JVM diagnostic information sent by JVMD Agents attached to JVMs. The manager exposes this information to Oracle Enterprise Manager. It can be installed within an OMS or standalone, and multiple JVMD Managers are supported for large networks of agents.
  • JVMD Agents - A small Java-based agent deployed within the JVM it is monitoring; it collects Java diagnostics (primarily from memory, to minimize the performance impact of collection) and sends them to a JVMD Manager. Each agent is hardwired to a particular JVMD Manager. JVMD Agents can be deployed to J2EE containers, standalone JVMs and the database.

The diagram below illustrates this architecture:

Before starting the process, ensure that the Oracle WebLogic Server Management Pack Enterprise Edition is licensed and installed (manually or via Self Update).

  • Install the JVMD Manager - Typically the JVMD Manager is deployed to the OMS server, but it can also be deployed standalone, and multiple JVMD Managers can be installed for larger numbers of targets to manage. There is a video from the Oracle Learning Library on YouTube explaining how to do this step.
  • Deploy the JVMD Agent to the Oracle WebLogic Server housing the product's online component, using the Middleware Management function within Oracle Enterprise Manager and the Application Performance Management option. This will add the agent to your installation. There is a process for deploying the agent automatically to a running WebLogic Server; again, there is a YouTube video describing this technique.

Once the agent is installed, it will start sending diagnostics of the Java code running within that JVM to Oracle Enterprise Manager.

Customers using the Oracle Application Management Pack for Oracle Utilities will see the JVMD link from their Oracle Utilities targets (it is also available from the Oracle WebLogic targets). For example:

JVMD accessible from Oracle Utilities Targets

Once you select the Java Virtual Machine Pool for the server, you get access to the full diagnostics information.

JVMD Home Page

This includes historical analysis:

JVMD Historical Analysis

JVMD is a useful tool for identifying bottlenecks in code and in the architecture. In future articles I will add database diagnostics and batch diagnostics to get a full end-to-end picture of diagnostics.