
Feed aggregator

Oracle Fusion Middleware (FMW) 11.1.1.9 now available : Documentation & Download

Online Apps DBA - Fri, 2015-05-15 13:19
Oracle Fusion Middleware (FMW) 11.1.1.9 is now available (released on 13th May 2015). Documentation for Oracle FMW 11.1.1.9 is available here. You can download Oracle Fusion Middleware 11.1.1.9 from the respective component pages. The following Fusion Middleware components are released as part of the Fusion Middleware 11.1.1.9 update: Oracle JDeveloper & ADF, Oracle Business Intelligence Enterprise Edition (OBIEE) […] The post Oracle Fusion Middleware (FMW)...

This is a content summary only. Visit my website http://onlineAppsDBA.com for full links, other content, and more!
Categories: APPS Blogs

Migration of QlikView Services

Yann Neuhaus - Fri, 2015-05-15 12:00


QlikView is a product of the software company Qlik (previously known as QlikTech), designed to provide business intelligence & visualization capabilities through reports and dashboards. Just like a Documentum environment (Content Server, WebServer, JMS, and so on), the QlikView solution is composed of several components. QlikView has three main components:

  • QlikView Server: the core of the QlikView solution, which provides users with access to the QlikView Documents
  • QlikView Publisher: an entity used to automatically reload and more efficiently manage the QlikView Documents
  • QlikView Desktop: a development tool that can be used to build your reports/dashboards


Based on these descriptions, the only element needed in all environments is the QlikView Server. Of course, at some point in your project you will also need QlikView Desktop to build your reports/dashboards, but once they are built you don't really need it anymore. The QlikView Publisher isn't strictly necessary, but it will definitely make your life easier.

 

I. Considerations


To be more precise, QlikView doesn't just provide components, it provides Services. Each Service is represented by a Windows Service and can be seen as a specific role with dedicated features. In the QlikView world, an upgrade and a migration are almost the same thing; the main difference is that some elements may change between two major releases of QlikView: the path of a folder, the .NET Framework used, and so on. So if you plan to upgrade or migrate your installation (or a part of it), the most important thing to understand is probably that you need to take care of each Service, one Service at a time.


To improve performance, the QlikView Server is designed to mainly use RAM to store QlikView Documents, because access to memory is much faster than access to the hard drive. For example, when a user opens a 1 GB QlikView Document (using a browser), 4 GB of RAM are used to store the document in memory. Each additional user accessing this QlikView Document increases this amount by 40% of the document's size (+400 MB of RAM per user). The QlikView Publisher, on the other hand, is designed to use CPUs for its calculations.


When using QlikView in a small company or with a small number of users, installing all QlikView Services on one Windows Server is often sufficient. A Windows Server with 64, 128 or 256 GB of RAM and 16, 32 or 64 CPUs is quite "normal" for QlikView. However, if your QlikView environment starts to show weaknesses (job failures, locked tasks, QMC not responding, and so on), then it's probably time to do something... Because the QlikView Server and Publisher handle memory and CPU consumption in very different ways, the best practice is to separate them, although for small companies it may not be necessary.


II. Migration of Services - Theory


Because of this last consideration, in the two remaining parts of this post I will explain how to separate the QlikView Services as recommended. So what are the Services provided by QlikView?

  • QlikView Server
  • QlikView WebServer (when using QlikView's own WebServer)
  • QlikView Settings Service (when using IIS)
  • QlikView Distribution Service
  • QlikView Management Service
  • QlikView Directory Service Connector


You may also have additional Services depending on what has been installed in the QlikView environment, such as the QlikView Offline Service (offline access via a mobile/tablet) or the QlikView SAP Network Server. The best practice is generally the following:

Server 1 - focus on RAM

  • QlikView Server
  • QlikView WebServer


Server 2 - focus on CPU

  • QlikView Management Service
  • QlikView Directory Service Connector
  • QlikView Distribution Service


III. Migration of Services


The general procedure to migrate a QlikView Service from Server 1 to Server 2 is always the same, but some steps differ slightly for specific Services. Remember that the best approach is always to handle one Service at a time and to check that QlikView is still working properly between each migration. An overview of this procedure would be:

  1. Installation of the QlikView Service on Server 2
  2. Configuration
  3. Uninstallation of the QlikView Service on Server 1



Installation of the QlikView Service on Server 2


The installation of a Service on a separate server during a migration is quite a simple step:

  1. Stop the QlikView Service on Server 1
  2. Run the QlikView Server installer: QlikViewServer_x64Setup.exe (64-bit Windows Server 2008) or QlikViewServer_Win2012andUp.exe (64-bit Windows Server 2012)
  3. On the screen where you specify what should be installed, always choose the "Custom" option and then select the QlikView Service(s) that should be installed.
  4. Reboot Server 2



Configuration


The configuration part is quite simple since there is only one task to execute: changing a URL. This task is the same for all QlikView Services except one: the QlikView Management Service. For all other QlikView Services, here is what should be done:

  1. Open the QlikView Management Console using a browser. The default URL is something like: http://##Server_1_Hostname##:4780/qmc/SystemSetup.htm
  2. Expand the folder that corresponds to the QlikView Service (e.g. "Distribution Services" for the QlikView Distribution Service)
  3. Click on the element inside this folder (e.g. "QDS@##Server_1_Hostname##" for the QlikView Distribution Service)
  4. Click on the "General" tab
  5. Change the URL value, replacing ##Server_1_Hostname## with ##Server_2_Hostname##
  6. Click on "Apply" to save your changes


And that should be sufficient for QlikView to know that you installed the QlikView Service on another server.



As said above, the configuration part is different for the QlikView Management Service. This Service is the one that takes care of the configuration in the QlikView Management Console, which is why it doesn't make much sense to change something in a QlikView Management Console that was just installed on another server (the installer already knows on which server it was executed)...


So what should be done in case this QlikView Service has been installed on Server 2? Well, it's also quite simple: almost all configurations of QlikView are stored in something called the "QVPR Database" (QVPR for QlikView Repository). By default this QVPR Database is just an XML Repository, but it can also be a SQL Server database. There is a setting in the QlikView Management Console (System > Setup > Management Service > Repository tab) that controls whether and how often the QVPR Database should be backed up (never, daily, every X minutes...), and there is even a button to "Backup Now" the configuration. The location of these backups is defined on this page too, but if you want to see the actual location of the XML Repository, take a look at "C:/ProgramData/QlikTech/ManagementService/QVPR/". So the configuration part for the QlikView Management Service consists of:

  1. Stop the QlikView Management Service on Server 2
  2. Copy the QVPR backup from Server 1 to Server 2
  3. Restore the QVPR backup on Server 2 (replace the existing files with the ones from the backup)
  4. Start the QlikView Management Service on Server 2
  5. Check if all configurations are OK



Uninstallation of the QlikView Service on Server 1


The uninstallation step is quite simple too... Everything is simple with QlikView, isn't it? ;)

  1. Run the QlikView Server installer: QlikViewServer_x64Setup.exe (64-bit Windows Server 2008) or QlikViewServer_Win2012andUp.exe (64-bit Windows Server 2012)
  2. QlikView will detect that some components are already installed
  3. On the second screen, select "Modify"
  4. On the next screen, click on the QlikView Service and disable it ("This feature will not be available")
  5. Complete the "installation" process to uninstall the QlikView Service

 

Once the three QlikView Services have been migrated from Server 1 to Server 2, you should see an improvement in performance, or at least fewer issues with the QlikView Server & Publisher! ;)

 

 

ASP.NET 5 IBM Bluemix Demo

Pas Apicella - Fri, 2015-05-15 05:55
The following demo uses the same code IBM Bluemix creates when using the experimental ASP.NET 5 runtime. This can be done using the CF CLI, after cloning the project from the JazzHub git repository.



Steps

1. Clone a sample project as follows

pas@Pass-MacBook-Pro:~/bluemix-apps/DOTNET$ git clone https://hub.jazz.net/git/pasapples/pas-donet-demo.git
Cloning into 'pas-donet-demo'...
remote: Counting objects: 24, done
remote: Finding sources: 100% (24/24)
remote: Total 24 (delta 0), reused 24 (delta 0)
Unpacking objects: 100% (24/24), done.
Checking connectivity... done.

2. cd pas-donet-demo

3. Edit manifest.yml to use a unique host name

applications:
- disk_quota: 1024M
  host: pas-donet-demo
  name: pas-donet-demo
  path: .
  domain: mybluemix.net
  instances: 1
  memory: 256M

4. Deploy as follows

pas@Pass-MacBook-Pro:~/bluemix-apps/DOTNET/pas-donet-demo$ cf push -f manifest.yml
Using manifest file manifest.yml

Creating app pas-donet-demo in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

Creating route pas-donet-demo.mybluemix.net...
OK

Binding pas-donet-demo.mybluemix.net to pas-donet-demo...
OK

Uploading pas-donet-demo...
Uploading app files from: .
Uploading 19.9K, 15 files
Done uploading
OK

Starting app pas-donet-demo in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
-----> Downloaded app package (20K)

  ************************************************************************
  * WARNING: This is an experimental buildpack. It is not supported.     *
  *          Do not expect it to work reliably. Please, do not           *
  *          contact support about issues with this buildpack.           *
  ************************************************************************
.
-----> Extracting mono
Using mono mono-lucid64-3.12.1.tar.gz
       OK
-----> Adding Nowin.vNext
       Copied 3 files from /var/vcap/data/dea_next/admin_buildpacks/2b638599-b3da-44e9-86f0-6b2f513daa4f_87e6c7503171fc3d6db9055873938657ca3ea6c6/resources/Nowin.vNext to /tmp/staged/app/src
       OK

....

       Total time 361ms
       OK
-----> Moving files in to place
       Copied 1865 files from /app/mono to /tmp/staged/app
       OK
-----> Saving to buildpack cache
       Copied 628 files from /tmp/staged/app/.k to /tmp/cache
       OK
-----> Writing Release YML
       OK

-----> Uploading droplet (136M)

0 of 1 instances running, 1 starting
1 of 1 instances running

App started


OK

App pas-donet-demo was started using this command `cd src/samplemvc; sleep 999999 | k cf-web`

Showing health and status for app pas-donet-demo in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

requested state: started
instances: 1/1
usage: 256M x 1 instances
urls: pas-donet-demo.mybluemix.net
last uploaded: Fri May 15 11:38:29 UTC 2015

     state     since                    cpu    memory           disk           details
#0   running   2015-05-15 09:40:30 PM   0.1%   168.9M of 256M   369.8M of 1G


5. Finally access the application using the route as shown below.

eg: http://pas-donet-demo.mybluemix.net
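If the app doesn't come up, or you just want to re-read the staging output, the CF CLI can show the recent logs and the current status (a quick check, using the same app name as in manifest.yml above):

# tail the recent staging/runtime logs and show the app status
cf logs pas-donet-demo --recent
cf app pas-donet-demo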



http://feeds.feedburner.com/TheBlasFromPas
Categories: Fusion Middleware

tmux - an alternative to screen

Yann Neuhaus - Fri, 2015-05-15 00:37

You may already use screen for multiplexing a terminal. This is especially useful when you want to start long running processes on a server and do not want to lose the connection because of an ssh connection timeout, firewall settings or other reasons. With screen the session keeps running even if you get disconnected somehow, and you may re-attach to the screen session at any later point.
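As a quick taste of the screen-like workflow in tmux (a minimal sketch; the session name is arbitrary):

# start a named session and run your long running process inside it
tmux new -s longjob
# detach with Ctrl-b d, log off, come back later...
# list the sessions and re-attach, even from a brand new ssh connection
tmux ls
tmux attach -t longjob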

Matching SQL Plan Directives and queries using it

Yann Neuhaus - Thu, 2015-05-14 13:45

This is another blog post I'm writing while reviewing the presentation I'm doing next week for SOUG and next month for DOAG. You have SQL Plan Directives that are used by several queries with the same kind of predicates, and queries that can use several SPDs. Can you match them?

When a query uses an SPD (meaning that the SPD is in a usable state - NEW, MISSING_STATS or PERMANENT internal state), the execution plan shows it as:

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement
but you don't have any information about which directive(s) were used.

Unfortunately, that information is not stored in V$SQL_PLAN. There are two ways to get it:

  • Parse it and trace it with set events 'trace [SQL_Plan_Directive.*]', but that's for another post.
  • Do an EXPLAIN PLAN; the information is in PLAN_TABLE.OTHER_XML

example

Here are the SQL Plan Directives I have:

SQL> select directive_id,type,state,reason,notes,created,last_modified,last_used
     from dba_sql_plan_directives
     where directive_id in (select directive_id from dba_sql_plan_dir_objects where owner='DEMO')
     order by created;

           DIRECTIVE_ID TYPE             STATE      REASON
----------------------- ---------------- ---------- ------------------------------------
NOTES                                                                                      CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
   11092019653200552215 DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE
.spd_note.                                                                                 21:21:58
  .internal_state.HAS_STATS./internal_state.
  .redundant.NO./redundant.
  .spd_text.{EC(DEMO.DEMO_TABLE)[A, B, C, D]}./spd_text.
./spd_note.
21:30:09 21:30:09

    9695481911885124390 DYNAMIC_SAMPLING USABLE     SINGLE TABLE CARDINALITY MISESTIMATE
.spd_note.                                                                                 21:35:45
  .internal_state.NEW./internal_state.
  .redundant.NO./redundant.
  .spd_text.{E(DEMO.DEMO_TABLE)[A, B, C, D]}./spd_text.
./spd_note.
(I changed the xml tag because our current blog platform is a bit creative with them... fortunately we are migrating soon to wordpress)

+metrics

So in order to have more information, you have to re-parse the statement with EXPLAIN PLAN FOR... and show it with DBMS_XPLAN.DISPLAY with the format '+METRICS'.

SQL> explain plan for select * from DEMO_TABLE where a+b=c+d;

Explained.
This query will use the {E(DEMO.DEMO_TABLE)[A, B, C, D]} directive but not the {EC(DEMO.DEMO_TABLE)[A, B, C, D]} one, because the predicate is not on simple columns.
Let's get the execution plan from PLAN_TABLE with the +METRICS format:
SQL> select * from table(dbms_xplan.display(null,null,'+metrics'));

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 4063024151

--------------------------------------------------------------------------------
| Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |            |  1000 | 12000 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| DEMO_TABLE |  1000 | 12000 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("A"+"B"="C"+"D")

Sql Plan Directive information:
-------------------------------

  Used directive ids:
    9695481911885124390

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement

As you can see, in addition to the number of SPDs used, you have the DIRECTIVE_ID.

conclusion

It's not easy to match all queries that can use a SQL Plan Directive, but you can do it the other way around: do an explain plan for each query you suspect and check the notes. If you are ready to parse a lot of queries, you can also do it automatically, as sketched below.
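As a rough sketch of such an automation, the following shell loop reads candidate statements from a file, runs EXPLAIN PLAN with the '+METRICS' format and keeps only the directive ids (assumptions: one statement per line without a trailing semicolon in queries.txt, and a demo/demo connection; adapt both to your environment):

# for each candidate statement, explain it and extract the used directive ids
while read -r stmt ; do
  echo "=== $stmt"
  sqlplus -s demo/demo <<EOF | grep -A 2 "Used directive ids"
set pagesize 0 feedback off
explain plan for $stmt;
select * from table(dbms_xplan.display(null,null,'+metrics'));
EOF
done < queries.txt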

Kerberos configuration for DFS 6.7 SP1

Yann Neuhaus - Thu, 2015-05-14 13:30


In my last post, I explained how to configure Kerberos for a CS 6.7 SP1. Unfortunately, if you only configure the Content Server, it will almost be useless... If you want this configuration to be useful, then you will also have to configure Kerberos SSO for the Documentum Foundation Services (DFS). That's why in this blog post I will describe step by step what needs to be done for that purpose.


So what are the prerequisites to set up Kerberos SSO for the Documentum Foundation Services? Well, of course you will need an application server for your DFS, a Content Server that is already installed and an Active Directory to generate the keytab(s). Just to let you know, I used (for the DFS) a Tomcat application server that is on the Content Server's machine and an Active Directory on a Windows Server 2008 R2. Let's define the following properties:

  • Active Directory - user = dfskrb
  • Active Directory - password = ##dfskrb_pwd##
  • Active Directory - domain = DOMAIN.COM
  • Active Directory - hostname1 = adsrv1.domain.com
  • Active Directory - hostname2 = adsrv2.domain.com
  • Alias of the DFS' host = csdfs.domain.com (can be a Load Balancer alias)
  • $CATALINA_HOME = /opt/tomcat


I. Active Directory prerequisites


As always when working with Kerberos on an Active Directory, the first thing to do is to create a user. So let's create this user with the following properties:

  • User name: dfskrb
  • Support AES 128 bits encryption
  • This account MUST NOT support AES 256 bits encryption. I set it that way because the Content Server doesn't support AES 256 bits encryption so I disabled it for the DFS part too.
  • Trust for Delegation to any service (Kerberos Only)
  • Password never expires
  • Account never expires
  • Account not locked


Once the user has been created, you can proceed with the keytab creation using the command prompt on the Active Directory host:

[Image: dfs_keytab.png – keytab generation command executed on the Active Directory host]


For the Content Server part, the name of the "princ" (SPN) has to be "CS/##repository_name##". For the DFS part, the EMC documentation asks you to generate a keytab with an SPN of "DFS/##dfs_url##:##dfs_port##". In fact, if you are only going to use one DFS URL/port, then you don't need to add the port to the SPN of the DFS.


Regarding the name of the keytab, for the Content Server part, it has to be "##repository_name##.keytab" for the Content Server to be able to automatically recognize it during the server re-initialization. For the DFS part, the name of the keytab isn't important because you will have to configure it manually.


II. Configuration of the Documentum Foundation Services side


So let's start the configuration of the Kerberos SSO for the DFS. The first thing to do is of course to transfer the keytab created previously (dfs.keytab) from the Active Directory to the host of the DFS (a Linux host in my case). There is no specific location for this keytab, so you just have to put it somewhere and remember the location. For this example, I will create a folder that will contain all required elements. Please make sure that the keytab belongs to the Documentum installation owner (user and group) on the file system, with the appropriate permissions (640).

[dmadmin ~]$ echo $CATALINA_HOME
/opt/tomcat
[dmadmin ~]$ mkdir /opt/kerberos
[dmadmin ~]$ mv ~/dfs.keytab /opt/kerberos
[dmadmin ~]$ chmod 640 /opt/kerberos/dfs.keytab
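At this point you can optionally verify that the keytab contains the expected SPN (klist is part of the standard Kerberos client packages; the exact output depends on your environment):

[dmadmin ~]$ klist -k /opt/kerberos/dfs.keytab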


Create the file "/opt/kerberos/jaasDfs.conf" with the following content:

[dmadmin ~]$ cat /opt/kerberos/jaasDfs.conf
DFS-csdfs-domain-com {
  com.sun.security.auth.module.Krb5LoginModule required
  debug=true
  principal="DFS/csdfs.domain.com@DOMAIN.COM"
  realm="DOMAIN.COM"
  refreshKrb5Config=true
  noTGT=true
  useKeyTab=true
  storeKey=true
  doNotPrompt=true
  useTicketCache=false
  isInitiator=false
  keyTab="/opt/kerberos/dfs.keytab";
};


The first line of this file jaasDfs.conf is the name of the "module". This name is derived from the SPN (or principal) of the DFS: take the SPN, keep the uppercase/lowercase characters, remove the REALM (everything from the at-sign onwards) and replace all special characters (slash, backslash, dot, colon, and so on) with a simple dash "-".
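To illustrate that naming rule, a small one-liner (just a sketch) that derives the module name from the principal used above:

[dmadmin ~]$ echo "DFS/csdfs.domain.com@DOMAIN.COM" | sed -e 's/@.*//' -e 's/[^A-Za-z0-9]/-/g'
DFS-csdfs-domain-com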


The next thing to do is to modify the DFS war file. So let's create a backup of this file and prepare its modification:

[dmadmin ~]$ cd $CATALINA_HOME/webapps/
[dmadmin ~]$ cp emc-dfs.war emc-dfs.war.bck_$(date +%Y%m%d)
[dmadmin ~]$ cp emc-dfs.war /tmp/
[dmadmin ~]$ cd /tmp
[dmadmin ~]$ unzip emc-dfs.war -d /tmp/emc-dfs


The setup of the Kerberos SSO requires some jar files that aren't necessarily present in a default installation. For that purpose, you can copy these jar files from the Content Server into the exploded DFS web application:

[dmadmin ~]$ cp $DOCUMENTUM/product/dfc6.7/dfc/jcifs-krb5-1.3.1.jar /tmp/emc-dfs/WEB-INF/lib/
[dmadmin ~]$ cp $DOCUMENTUM/product/dfc6.7/dfc/krbutil.jar /tmp/emc-dfs/WEB-INF/lib/
[dmadmin ~]$ cp $DOCUMENTUM/product/dfc6.7/dfc/vsj-license.jar /tmp/emc-dfs/WEB-INF/lib/
[dmadmin ~]$ cp $DOCUMENTUM/product/dfc6.7/dfc/vsj-standard-3.3.jar /tmp/emc-dfs/WEB-INF/lib/
[dmadmin ~]$ cp $DOCUMENTUM/product/dfc6.7/dfc/questFixForJDK7.jar /tmp/emc-dfs/WEB-INF/lib/


Once done, a Kerberos handler must be added to the DFS. For that purpose, open the file authorized-service-handler-chain.xml, locate the XML comment that starts with "Any handler using ContextFactory" and add the following lines just before this comment:

[Image: authorization_chain.png – Kerberos handler added to authorized-service-handler-chain.xml]


Then, some Kerberos-specific configuration must be added to the web.xml file. For that purpose, open this file and add the following lines at the end, just before the web-app end tag (before the last line):

[Image: web_xml.png – Kerberos-specific entries added to web.xml]


In the above configuration, only the "env-entry-value" for each "env-entry" section should be changed to match your environment. As you can see, the krb5.conf file referenced here is in /opt/kerberos. You can use the same krb5.conf file as the one used for the Content Server or you can specify a separate file. As this file can be the same for the Content Server and the DFS, I will not define it here; just check my last post to get more information about it. A minimal example is sketched below.
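For reference only, a minimal krb5.conf along those lines could look like the following sketch (the realm and KDC values are the ones defined at the beginning of this post; encryption types and any other settings should match what was configured on the Content Server side):

[dmadmin ~]$ cat /opt/kerberos/krb5.conf
[libdefaults]
  default_realm = DOMAIN.COM
  default_tkt_enctypes = aes128-cts-hmac-sha1-96
  default_tgs_enctypes = aes128-cts-hmac-sha1-96

[realms]
  DOMAIN.COM = {
    kdc = adsrv1.domain.com
    kdc = adsrv2.domain.com
  }

[domain_realm]
  .domain.com = DOMAIN.COM
  domain.com = DOMAIN.COM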


So the configuration is now over and you can just repackage and re-deploy the new DFS:

[dmadmin ~]$ cd /tmp/emc-dfs/
[dmadmin ~]$ jar -cvf emc-dfs.war *
[dmadmin ~]$ $CATALINA_HOME/bin/shutdown.sh
[dmadmin ~]$ mv emc-dfs.war $CATALINA_HOME/webapps/
[dmadmin ~]$ cd $CATALINA_HOME/webapps
[dmadmin ~]$ rm -Rf emc-dfs/
[dmadmin ~]$ $CATALINA_HOME/bin/startup.sh


Once done, the Tomcat application server should have been started and the new version of the DFS WAR file should have been deployed. If the Content Server and the DFS are properly set up to use Kerberos SSO, then you should be able to execute .NET or Java code to get a Kerberos ticket for the DFS and work with the DFS features.


What’s New in OBIEE 11.1.1.9 for Systems Administrators and Developers

Rittman Mead Consulting - Thu, 2015-05-14 11:42

More than two years after the last major release of OBIEE, Oracle released version 11.1.1.9 in May 2015. You can find the installers here and documentation here. 11.1.1.9 is termed the “terminal release” of the 11g line, and the 12c version is already out in closed beta. We’d expect to see patchsets for 11g to continue for some time covering bugs and any security issues, but for new functionality in 11g I would hazard a guess that this is pretty much it as Oracle concentrate their development efforts on OBIEE 12c and BICS, particularly Visual Analyser.

For both the end user and backend administrator/developer, OBIEE 11.1.1.9 has brought with it some nice little touches, none of which are going to revolutionise the OBIEE world but many of which are going to make life with the tool just that little bit smoother. In this article we take a look at what 11.1.1.9 brings for the sysadmin & developer.

BI Server Query Instrumentation and Usage Tracking

There are some notable developments here:

  1. Millisecond precision when logging events from the BI Server
  2. Usage Tracking now includes the physical query hash, which is what is also visible in the database, enabling end-to-end tracing
  3. User sessions can be tracked and summarised more precisely because session ID is now included in Usage Tracking.
  4. The execution of initialisation blocks is now also recorded, in a new Usage Tracking table called S_NQ_INITBLOCK.

Millisecond precision in BI Server logs

OBIEE 11.1.1.9 writes the nqquery.log with millisecond precision for both the timestamp of each entry, and also the summary timings for a query execution (at last!). It also calls out explicitly “Total time in BI Server” which is a welcome addition from a time profiling/performance analysis point of view:

[2016-07-31T02:11:48.231-04:00 [...] Sending query to database named X0 - Airlines Demo Dbs (ORCL) (id: <<221516>>), connection pool named Aggr Connection, logical request hash 544131ec, physical request hash 5018e5db: [[  
[2016-07-31T02:12:04.31-04:00 [...] Query Status: Successful Completion  
[2016-07-31T02:12:04.31-04:00 [...] Rows 2, bytes 32 retrieved from database query id: <<221516>>  
[2016-07-31T02:12:04.31-04:00 [...] Physical query response time 2.394 (seconds), id <<221516>>  
[2016-07-31T02:12:04.31-04:00 [...] Physical Query Summary Stats: Number of physical queries 1, Cumulative time 2.394, DB-connect time 0.002 (seconds)  
[2016-07-31T02:12:04.31-04:00 [...] Rows returned to Client 2  
[2016-07-31T02:12:04.31-04:00 [...] Logical Query Summary Stats: Elapsed time 16.564, Total time in BI Server 16.555, Response time 16.564, Compilation time 0.768 (seconds), Logical hash 544131ec

One thing to notice here is the subsecond timestamp precision seems to vary between 2 and 3 digits, which may or may not be a bug.

Being able to see this additional level of precision is really important. Previously OBIEE recorded information by the second, which was fine if you were looking at query executions taking dozens of seconds or minutes – but hopefully our aspirations for systems performance are actually closer to the realms of seconds or subsecond. At this scale the level of precision in the timings really matters. On the assumption that OBIEE was rounding values to the nearest whole number, you’d see “0 seconds” for a Logical SQL compile (for example) that was maybe 0.499 seconds. Per query this is not so significant, but if those queries run frequently then cumulatively that time stacks up and would be useful to be properly aware of and target with optimisation if needed.
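To pull these new summary timing lines out of nqquery.log for a quick profile, a simple grep is enough (a sketch; the path shown is the usual 11g location for the first BI Server instance, so adjust it to your system):

# extract the per-query "Total time in BI Server" lines for profiling
grep "Total time in BI Server" \
  $FMW_HOME/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1/nqquery.log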

Usage Tracking changes

Usage Tracking has five new columns for each logical query recorded in S_NQ_ACCT:

  • ECID
  • TENANT_ID
  • SERVICE_NAME
  • SESSION_ID
  • HASH_ID

The presence of SESSION_ID is very useful, because it means that user behaviour can be more accurately analysed. For example, within a session, how many reports does a user run? What is the median duration of a session? Note that the session here is the session as seen by the BI Server, rather than Presentation Services.
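As a flavour of the kind of analysis this enables, a quick count of logical queries per session could look like this (a sketch against the BIPLATFORM schema used later in this post; replace the credentials with your own):

sqlplus -s BIEE_BIPLATFORM/password <<'EOF'
-- how many logical queries did each BI Server session run?
select session_id, user_name, count(*) as logical_queries
from   s_nq_acct
group  by session_id, user_name
order  by logical_queries desc;
EOF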


ECID is also very useful for being able to link data in Usage Tracking back to more detailed entries in nqquery.log. Note that an ECID is multipart and concatenated with an RID, so you won’t necessarily get a direct hit on the ECID you find in Usage Tracking with that in nqquery.log, but rather a substring of it. In this example the root ECID is 11d1def534ea1be0:20f8da5c:14d4441f7e9:–8000–0000000000001891,0:1:103 and the varying component of the relationship (RID) is 1 and 3 respectively:

Usage Tracking:

select ecid,session_id,start_dt,start_hour_min ,saw_src_path from biee_biplatform.s_nq_acct

[Screenshot: sa50208 – Usage Tracking query results]

nqquery.log:

[2015-05-12T08:58:38.704-04:00] [...] [ecid: 11d1def534ea1be0:20f8da5c:14d4441f7e9:-8000-0000000000001891,0:1:103:3] [...]  
-------------------- SQL Request, logical request hash:  
3fabea2b  
SET VARIABLE QUERY_SRC_CD='Report',SAW_DASHBOARD='/shared/02. Visualizations/_portal/2.11 Table Designs',SAW_DASHBOARD_PG='Conditional Format',SAW_SRC_PATH='/shared/02. Visualizations/Configured Visuals/Conditional Formats/CF based on a hidden column',PREFERRED_CURRENCY='USD';SELECT^M  
   0 s_0,^M  
[...]

In the above example note how the absence of a timezone in the Usage Tracking data is an impediment to accurate interpretation of the results, compared to nqquery.log which has a fully qualified timezone offset.

Usage Tracking changes – Physical Hash ID

As well as additions to the logical query table, there are two new columns for each physical query logged in S_NQ_DB_ACCT:

  • HASH_ID
  • PHYSICAL_HASH_ID

The implications of this are important – there is now native support in OBIEE for tracing OBIEE workloads directly down to the database (as discussed for OBIEE < 11.1.1.9 here), because the PHYSICAL_HASH_ID is what OBIEE sets as the ACTION field when it connects to the database, and it is available in Oracle through AWR, V$ views, and DBMS_MONITOR. For example, in V$SESSION the ACTION field is set to the physical hash:

SQL> select username,program,action 
  from v$session where lower(program) like 'nqs%';

USERNAME PROGRAM                                          ACTION  
-------- ------------------------------------------------ ---------  
BISAMPLE nqsserver@demo.us.oracle.com (TNS V1-V3)         5065e891  
BISAMPLE nqsserver@demo.us.oracle.com (TNS V1-V3)         2b6148b2  
BISAMPLE nqsserver@demo.us.oracle.com (TNS V1-V3)  
BISAMPLE nqsserver@demo.us.oracle.com (TNS V1-V3)         8802f14e  
BISAMPLE nqsserver@demo.us.oracle.com (TNS V1-V3)         206c8d54  
BISAMPLE nqsserver@demo.us.oracle.com (TNS V1-V3)         c1c121a7

The ACTION is also available in many EM screens such as this one:

[Screenshot: sa50210 – Enterprise Manager screen showing the ACTION field]
Now with OBIEE 11.1.1.9 the physical hash – which was previously only available in the nqquery.log file – is available in S_NQ_DB_ACCT which can in turn be joined to S_NQ_ACCT to find out the logical request related to the physical query seen on the database. Cool huh!

SELECT PHYSICAL_HASH_ID,  
       USER_NAME,  
       SAW_SRC_PATH,  
       SAW_DASHBOARD,  
       SAW_DASHBOARD_PG  
FROM   BIEE_BIPLATFORM.S_NQ_DB_ACCT PHYS  
       INNER JOIN BIEE_BIPLATFORM.S_NQ_ACCT LOGL  
               ON LOGL.ID = PHYS.LOGICAL_QUERY_ID  
WHERE  PHYS.PHYSICAL_HASH_ID = '5065e891'

[Screenshot: sa50207 – results of joining S_NQ_DB_ACCT to S_NQ_ACCT]

This can be extended even further to associate AWR workload reports with specific OBIEE requests:

[Screenshot: sa50209 – AWR workload report associated with a specific OBIEE request]

One little grumble (no pleasing some people…) – it would have been nice if Usage Tracking also stored:

  • Timings at millisecond precision as well
  • The number of bytes (rather than just row count)
  • A proper TIMESTAMP WITH TIME ZONE (rather than the weird triplet of TS/DT/HOUR_MIN)
  • “Total time in BI Server”

Who knows, maybe in 12c?…

Footnote – START_TS in Usage Tracking in 11.1.1.9

As a note for others who may hit this issue, my testing has shown that Usage Tracking in 11.1.1.9 appears to have introduced a bug with START_TS (on both S_NQ_ACCT and S_NQ_DB_ACCT), in that it stores only the date, not date + time as it did in previous versions. For example:

  • 11.1.1.7:
    SELECT TO_CHAR(START_TS, 'YYYY-MM-DD HH24:MI:SS') AS START_TS, 
           TO_CHAR(START_DT, 'YYYY-MM-DD HH24:MI:SS') AS START_DT, 
           START_HOUR_MIN 
    FROM   S_NQ_ACCT 
    WHERE  ROWNUM < 2 
    
    START_TS            START_DT            START_HOUR_MIN   
    ------------------- ------------------- -----  
    2015-03-19 15:32:23 2015-03-19 00:00:00 15:32
  • 11.1.1.9:
    SELECT TO_CHAR(START_TS, 'YYYY-MM-DD HH24:MI:SS') AS START_TS, 
           TO_CHAR(START_DT, 'YYYY-MM-DD HH24:MI:SS') AS START_DT, 
           START_HOUR_MIN 
    FROM   S_NQ_ACCT 
    WHERE  ROWNUM < 2 
    
    START_TS            START_DT            START_HOUR_MIN   
    ------------------- ------------------- -----  
    2015-01-27 00:00:00 2015-01-27 00:00:00 10:41

Initialisation Block information in Usage Tracking

A new table, S_NQ_INITBLOCK, has been added to BIPLATFORM and holds details of when an init block ran, for which user, and importantly, how long it took. From a performance analysis point of view this is really valuable data, and it’s good to see it being added to the diagnostic data captured to the database with Usage Tracking.

From a glance at the data it looks like there’s a bit of bonus logging going on, with user sign in/sign out also recorded (“SIGNNING ON/SIGNED ON/SIGNED OFF”).

[Screenshot: sample rows from S_NQ_INITBLOCK]

Note that there is no MBean for Init Block Usage Tracking, so regardless of how you configure the rest of Usage Tracking, you need to go to NQSConfig.ini to enable this one.

Presentation Services Cursor Cache

Oracle have added some additional Administration functionality for viewing and managing sessions and the cursor cache in Presentation Services. These let you track and trace user sessions more precisely.

From the Administration Page in OBIEE the new options are:


  1. Set dynamic log level per session from manage sessions

  2. Filter cursor cache based on specific user sessions

  3. Change sort order of cursor cache

  4. Show Presentation Services diagnostics per cursor

  5. Download cursor cache list as CSV

Some of these are somewhat low-level and will not be used day-to-day, but the general move towards a more open diagnostics interface with OBIEE is really positive and I hope we see more of it in 12c… :-)

Command Line Aggregate Advisor

Only for use by those with an Exalytics licence, the Summary Advisor was previously available in the Windows Administration Tool only but can now be run from the command line:

[oracle@demo setup]$ nqaggradvisor -h

Usage:  
    nQAggrAdvisor -d <dataSource> -u <userName> -o <outputFile> -c <tupleInQuotes>  
                  [-p <password>] [-F <factFilter>] [-z <maxSizeAggr>] [-g <gainThreshold>]  
                  [-l <minQueryTime>] [-t <timeoutMinutes>] [-s <startDate>]  
                  [-e <endDate>] [-C <on/off>] [-M <on/off>] [-K <on/off>]

Options:  
    -d      : Data source name  
    -u      : User name  
    -o      : Output aggregate persistence script file name  
    -c      : Aggregate persistence target - tuple in quotes: Fully qualified Connection pool, fully qualified schema name, capacity in MB  
    -p      : Password  
    -F      : Fact filter file name  
    -z      : Max size of any single aggregate in MB  
    -g      : Summary advisor will run until performance improvement for new aggregates drops below this value, default = 1  
    -l      : The minimum amount of query time accumulated per LTS in seconds, before it is included for analysis, default = 0  
    -t      : Max run time in minutes - 0 for unlimited, default = 0  
    -s      : Statistics start date  
    -e      : Statistics end date  
    -C      : Prefer optimizer estimates - on/off, default = off  
    -M      : Only include measures used in queries - on/off, default = off  
    -K      : Use surrogate keys - on/off, default = on

Examples:  
    nQAggrAdvisor -d "AnalyticsWeb" -u "Administrator" -p "ADMIN" -o "C:\temp\aggr_advisor.out.txt"  
        -c "DW_Aggr"."Connection Pool","DW_Aggr".."AGGR",1000

    nQAggrAdvisor -d "AnalyticsWeb" -u "Administrator" -p "ADMIN" -o "C:\temp\aggr_advisor.out.txt" -F "C:\temp\fact_filter.txt" -g 10  
        -c "TimesTen_instance1"."Connection Pool","dbo",2000 -s "2011-05-02 08:00:00" -e "2011-05-07 18:30:00"  -C on -M on -K off

Note that in the BIPLATFORM schema S_NQ_SUMMARY_STATISTICS is now called S_NQ_SUMMARY_ADVISOR.

HTML5 images

In previous versions of OBIEE, graph images were rendered in Flash by default, and PNG on mobile devices. You could force it to use PNG for all images but would lose the interactivity (tooltips etc). Now in OBIEE 11.1.1.9 you can change the default from Flash to HTML5. This removes the need for a Flash plugin and is generally the way that a lot of visualisations are done on the web nowadays. To my eye there’s no difference in appearance:


To use HTML5 graphs by default, edit instanceconfig.xml and under the <Charts> section add:

<DefaultWebImageType>html5</DefaultWebImageType>

Note that html5 is case-sensitive. The config file should look something like this:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>  
<WebConfig xmlns="oracle.bi.presentation.services/config/v1.1">  
   <ServerInstance>  
   [...]  
        <Views>  
        [...]  
            <Charts>  
                <DefaultWebImageType>html5</DefaultWebImageType>  
            [...]  
            </Charts>  
        [...]  
        </Views>  
    [...]  
   </ServerInstance>  
</WebConfig>

If Presentation Services doesn’t come back up when you restart it after making this change then check the stdout logfile console~coreapplication_obips1~1.log as well as the standard sawlog.log file, both of which you’ll find in $FMW_HOME/instances/instance1/diagnostics/logs/OracleBIPresentationServicesComponent/. The reason to check the console log file is that Presentation Services will refuse to start if the configuration supplied is invalid, and you’ll see an error message stating this here.

NQS ODBC functions

One for the Neos amongst you, a quick call of NQSGetSQLProcedures (as seen in SampleApp dashboard 7.90 NQS ODBC Procedures) and comparison with 11.1.1.7.150120 shows the following new & changed NQS ODBC calls. If this means nothing to you then it probably doesn’t need to, but if you’re interested in exploiting OBIEE functionality from all angles, documented or not, then these might be of interest. It goes without saying, these are entirely undocumented and unsupported, completely liable to change or be removed at any time by Oracle.

  • Added
    • NQSGetUserDefinedFunctions
    • NQSPAFIntegration
    • NQSSearchPresentationObjects
    • NQS_GetAllCacheEntries
    • NQS_GetOverallCacheInfo
    • NQS_GetRepositories
    • NQS_LoadNewBaseRP
    • NQS_LoadNewRPVersion
    • NQS_LockSessionAgainstAutoRPSwitchOver
    • NQS_SetRPDReadOnlyMode
    • NQS_SwitchOverThisSessionToNewRP
    • SAPurgeCacheBySubjectArea
    • SAPurgeCacheEntryByIDVector
    • SAPurgeXSACache
    • SASeedXSACache
  • Modified
    • NQSGetQueryLogExcerpt (additional parameter)
    • SAPurgeInternalCache (additional enum)
  • Removed
    • NQSChangeSelfPassword

Web Services

Web Services are one of the best ways to integrate with OBIEE programmatically. You don’t need to be building heavy Java apps just to use them – you can create and send the necessary SOAP messages from Python, or even just send them from bash with curl, as sketched below.
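As a flavour of how little is needed, here is a hedged sketch of calling the logon method of the session service with curl. The endpoint URL, port and namespace version shown here are assumptions for a typical 11g install; take the real values from the WSDL of your own system (for example the v8/v9/v10 WSDLs mentioned below) before relying on this.

# illustrative only: verify the endpoint and namespace against your own WSDL first
curl -s -H "Content-Type: text/xml" \
  -d '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
        xmlns:v8="urn://oracle.bi.webservices/v8">
        <soapenv:Body>
          <v8:logon>
            <v8:name>weblogic</v8:name>
            <v8:password>Password01</v8:password>
          </v8:logon>
        </soapenv:Body>
      </soapenv:Envelope>' \
  "http://biserver:9704/analytics-ws/saw.dll?SoapImpl=nQSessionService"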

There are 2.5 new WSDLs – two new ones (v9, v10) plus v8 which has changed. The new services are:

  • KPIAssessmentService
  • ScorecardAssessmentService
  • ScorecardMetadataService
  • UserPersonalizationService

You’ll find documentation for the Web Services in the Integrator’s Guide.

User Image Upload

Users can now upload their own images for use in Title views, conditional formats, etc. From an administration point of view this means you’ll want to keep an eye on /root/shared/custom/images/ in the Presentation Catalog, either on disk or through the OBIEE Catalog View (switch to Admin and enable “Show Hidden Items”):

QUERY_LIMIT_WARNING_INSTEAD_OF_ERROR

This new setting in NQSConfig.ini will warn users when they’re breaching defined query limits, but it won’t abort the query.

Pointless hacks

If you’re a geek like me, part of the fun of a new tool is simply poking around and seeing what’s new – not necessarily what’s useful. There’s plenty of great new stuff in 11.1.1.9, but let’s take a look under the hood, just Because Geek.

It was John Minkjan who first blogged several years ago about the xsd configuration schema files, and it is from these that we can find all the things that Presentation Services might be able to do – not just what it definitely can do, and not just what Oracle have documented that it can do. I wrote about some of these options a while back, and there are a few new ones in 11.1.1.9.

ALL OF THESE ARE COMPLETELY UNDOCUMENTED AND UNSUPPORTED. DO NOT USE THEM.

  • EnableCloudBIEEHome sets the home page of OBIEE to be as it would be on BI Cloud Service (BICS). This is completely pointless since all the interesting stuff (Load Data, Model, Manage) is non-existent, even if it does give us a clue which application deployments are going to be supplying them (bimodeler and biserviceadministration respectively)

  • GridViews/ShowDataModels outputs a bunch of debug data in Answers Table or Pivot Views:

  • VirusScannerConfiguration – When a user uploads a custom image, this command will be called with it. For example, this simple script writes to a file the time and name of the file passed to it:

    #!/bin/bash
    echo '---' >> /tmp/log.txt
    date >> /tmp/log.txt
    echo $1 >> /tmp/log.txt

    If I save this as /tmp/test-script.sh and add it to instanceconfig.xml:

    <VirusScannerConfiguration>  
       <ScannerInvocationCommandLine>/tmp/test-script.sh</ScannerInvocationCommandLine>  
    </VirusScannerConfiguration>

    When I upload an image I get a row written to my log file. That in itself isn’t useful, but it could be a handy hook maybe from an auditing point of view, or indeed, virus scanning:

    [oracle@demo tmp]$ cat /tmp/log.txt  
    ---  
    Wed May 20 16:01:47 EDT 2015  
    /app/oracle/biee/instances/instance1/tmp/OracleBIPresentationServicesComponent/coreapplication_obips1/defaultpool/sawserver_8673_5553759a_2-1.tmp
Categories: BI & Warehousing

Oracle Priority Support Infogram for 14-MAY-2015

Oracle Infogram - Thu, 2015-05-14 11:14

SQL Developer
SQL Developer Data Modeler, Pending Changes & Versioning, from that JEFF SMITH.
Exalogic
From the Oracle Exalogic blog: Resizing vCPU and Memory Resources Allocated to Exalogic Guest vServers
From Exadata Partner Community – EMEA: Capgemini whitepaper: Oracle Exalogic-based Big Data strategy
ZFS
From Jim Kremer’s Blog: Oracle ZFS Storage Appliance RESTful API Examples
Oracle Virtual Compute Appliance
Oracle Virtual Compute Appliance backup white paper, from Virtually All The Time.
Oracle Utilities
From The Shorten Spot (@theshortenspot): Using JVMD with Oracle Utilities Applications - Part 1 Online
Java
From The Java Source: Reactive Java EE
BI
Oracle Business Intelligence Enterprise Edition 11.1.1.9.0, from Business Analytics - Proactive Support.
SOA
From the SOA & BPM Partner Community Blog: Additional new content SOA & BPM Partner Community
WebCenter
From Proactive Support - WebCenter Content: WebCenter 11.1.1.9.0 is Here!
ICS
From the Oracle Partner Hub: ISV Migration Center Team: Webcast - Oracle Integration Cloud Services (ICS): Technical Overview
JSON
The new SQL/JSON Query operators (Part4: JSON_TABLE), from the JSON in the Oracle Database blog.
Demantra
From the Oracle Demantra blog: TABLE_REORG Causing ORA-42012: error occurred while completing the redefinition and ORA-00600
EBS
From the Oracle E-Business Suite Support blog:
New and improved Order Management Analyzer!
Webcast: Utilizing API For Pick Release And Ship Confirm
Webcast: Basic Troubleshooting Information for Duplicate Sales Order Transactions Stuck in Inventory Tables
Data Solutions For Your Inventory Items
Webcast: Understanding the Output of the Discrete Job Value Report
Webcast: Demantra Certification. Are you attempting to get certified? Let's walk through the process!
From the Oracle E-Business Suite Technology blog:
EBS 12.2: New Getting Started Flowchart and Go-Live Readiness Checklist

EBS 12.1 Certified on Oracle Linux 7 and Red Hat Enterprise Linux 7

fsfreeze in Linux

Pythian Group - Thu, 2015-05-14 10:17

The fsfreeze command is used to suspend and resume access to a file system. This allows consistent snapshots to be taken of the filesystem. fsfreeze supports Ext3/4, ReiserFS, JFS and XFS.

A filesystem can be frozen using following command:

# /sbin/fsfreeze -f /data

Now if you are writing to this filesystem, the process/command will be stuck. For example, the following command will be stuck in the D (UNINTERRUPTIBLE_SLEEP) state:

# echo "testing" > /data/file

Only after the filesystem is unfrozen using the following command can it continue:

# /sbin/fsfreeze -u /data
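A typical use is to quiesce the filesystem only for the duration of a snapshot. A minimal sketch of that pattern is shown below; take_snapshot is a placeholder for whatever snapshot tool you actually use (LVM, storage array or cloud snapshot), and the trap makes sure the filesystem gets unfrozen even if the snapshot step fails:

#!/bin/bash
# freeze /data, take the snapshot, then always unfreeze on exit
/sbin/fsfreeze -f /data
trap '/sbin/fsfreeze -u /data' EXIT
take_snapshot /data   # placeholder command, not a real binary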

As per the fsfreeze man page, "fsfreeze is unnecessary for device-mapper devices. The device-mapper (and LVM) automatically freezes filesystem on the device when a snapshot creation is requested."

fsfreeze is provided by the util-linux package in RHEL systems. Along with userspace support, fsfreeze also requires kernel support.

For example, in the following case, fsfreeze was used on the ext4 filesystem of an AWS CentOS node:

# fsfreeze -f /mysql
fsfreeze: /mysql: freeze failed: Operation not supported

From strace we found that ioctl is returning EOPNOTSUPP:

fstat(3, {st_dev=makedev(253, 0), st_ino=2, st_mode=S_IFDIR|0755, st_nlink=4, st_uid=3076, st_gid=1119, st_blksize=4096, st_blocks=8, st_size=4096, st_atime=2014/05/20-10:58:56, st_mtime=2014/11/17-01:39:36, st_ctime=2014/11/17-01:39:36}) = 0
ioctl(3, 0xc0045877, 0) = -1 EOPNOTSUPP (Operation not supported)

From latest upstream kernel source:

static int ioctl_fsfreeze(struct file *filp)
{
        struct super_block *sb = file_inode(filp)->i_sb;

        if (!capable(CAP_SYS_ADMIN))
                return -EPERM;

        /* If filesystem doesn't support freeze feature, return. */
        if (sb->s_op->freeze_fs == NULL)
                return -EOPNOTSUPP;

        /* Freeze */
        return freeze_super(sb);
}

EOPNOTSUPP is returned when a filesystem does not support the feature.

On testing freezing ext4 on CentOS with an AWS community AMI, fsfreeze worked fine.

This means that the issue was specific to the kernel of the system. It was found that the AMI used to build the system had a customized kernel without fsfreeze support.

Categories: DBA Blogs

About Those D2L Claims of LMS Usage Increasing Retention Rates

Michael Feldstein - Thu, 2015-05-14 09:43

By Phil HillMore Posts (316)

In my post last week on the IMS Global Consortium conference #LILI15, I suggested that LMS usage in aggregate has not improved academic performance and noted that John Baker from D2L disagreed.

John Baker from D2L disagreed on this subject, and he listed off internal data of 25% or more (I can’t remember detail) improved retention when clients “pick the right LMS”. John clarified after the panel the whole correlation / causation issue, but I’d love to see that data backing up this and other claims.

After the conference I did some checking based on prompts from some helpful readers, and I’m fairly certain that John’s comments referred to Lone Star College – University Park (LSC-UP) and its 24% increase in retention. D2L has been pushing this story recently, first in a blog post and then in a paid webinar hosted by Inside Higher Ed. From the blog post titled “Can an LMS improve retention?” [footnotes and emphasis in original]:

Can an LMS help schools go beyond simply managing learning to actually improving it?

Pioneering institutions like Lone Star College-University Park and Oral Roberts University are using the Brightspace platform to leverage learner performance data in ways that help guide instruction. Now, they’re able to provide students with more personalized opportunities to master content and build self-confidence. The results of their student-centered approach have been nothing short of amazing: For students coming in with zero credits, Lone Star estimates that persistence rates increased 19% between spring 2014 and fall 2014[3] and Oral Roberts University estimates a persistence rate of 75.5% for online programs, which is an all-time high.[4]

Then in the subsequent IHE webinar page [emphasis added]:

The results have been nothing short of amazing. Lone Star has experienced a 19% increase in persistence and Oral Roberts University has achieved a 75.5% persistence rate for online programs—an all-time high. Foundational to these impressive results is Brightspace by D2L—the world’s first Integrated Learning Platform (ILP)— which has moved far beyond the traditional LMS that, for years, has been focused on simply managing learning instead of improving it.

Then from page 68 of the webinar slides, as presented by LSC-UP president Shah Ardalan:

[Slide: LSC-UP results, from the IHE webinar slides]

By partnering with D2L, using the nationally acclaimed ECPS, the Bill & Melinda Gates Foundation, and students who want to innovate, LSC-UP increased retention by 24% after the pilot of 2,000 students was complete.

ECPS and the Pilot

For now let’s ignore the difference between 19%, 24% and my mistake on 25%. I’d take any of those results as institutional evidence of (the right) LMS usage “moving the needle” and improving results[1]. This description of ECPS got my attention, so I did some more research on ECPS:

The Education and Career Positioning System is a suite of leading web and mobile applications that allow individuals to own, design, and create their education-to-career choices and pathways. The ability to own, design, and create a personal experience is accomplished by accessing, combining and aggregating lifelong personal info, educational records, career knowledge, and labor statistics …

I also called up the LSC-UP Invitation to Innovate program office to understand the pilot. ECPS is an advising and support system created by LSC-UP, and the pilot was partially funded by the Gates Foundation’s Integrated Planning and Advising Services (IPAS) program. The idea is that students do better by understanding their career choices and academic pathways up front rather than being faced with a broad set of options. LSC-UP integrated ECPS into a required course that all entering freshmen (not transfers) take. Students used ECPS to identify their skills, explore careers, see what these careers would require, etc. LSC-UP made this ECPS usage a part of the entry course. While there is no published report, between Spring 2014 and Fall 2014 LSC-UP reports an increase in term-to-term persistence of 19+%. Quite interesting and encouraging, and kudos to everyone involved. You can find more background on ECPS here.

In the meantime, Lone Star College (the entire system of 92,000+ students) selected D2L and is now using Brightspace as its LMS; however, the ECPS pilot had little to do with LMS usage. The primary intervention was an advising system and course redesign to focus students on understanding career options and related academic pathways.

The Problem Is Marketing, Not Product

To be fair, what if D2L enabled LSC-UP to do the pilot in the first place by some unique platform or integration capabilities? There are two problems with this possible explanation:

  • ECPS follows IMS standards (LTI), meaning that any major LMS could have integrated with it; and
  • ECPS was not even integrated with D2L during the pilot.

That’s right – D2L is taking a program where there is no evidence that LMS usage was a primary intervention and using the results to market and strongly suggest that using their LMS can “help schools go beyond simply managing learning to actually improving it”. There is no evidence presented[2] of D2L’s LMS being “foundational” – it happened to be the LMS during the pilot that centered on ECPS usage.

I should be clear that D2L should rightly be proud of their selection as the Lone Star LMS, and from all appearances the usage of D2L is working for the school. At the very least, D2L is not getting in the way of successful pilots. It’s great to see D2L highlight the excellent work by LSC-UP and their ECPS application as they recently did in another D2L blog post extensively quoting Shah Ardalan:

Lone Star College-University Park’s incoming students are now leveraging ECPS to understand their future career path. This broadens the students’ view, allows them to share and discuss with family and friends, and takes their conversation with the academic and career advisors to a whole new level. “Data analytics and this form of ‘intentional advising’ has become part of our culture,” says Ardalan. “Because the students who really need our help aren’t necessarily the ones who call, this empowers them to make better decisions” he adds.

LSC-UP is also planning to start using D2L’s analytics package Insights, and they may eventually get to the point where they can take credit for improving performance.

The problem is in misleading marketing. I say misleading because D2L and LSC-UP never come out and say “D2L usage increased retention”. They achieve their goal through clever marketing: the topic is framed as whether D2L and their LMS can increase performance, and then they share the LSC success story. The reader or listener has to read the fine print or do additional research to understand the details, and most people will not do so.

The higher ed market deserves better.

I Maintain My Position From Conference Panel

After doing this research, I still back up my statement at the IMS panel and from my blog post.

I answered another question by saying that the LMS, with multiple billions invested over 17+ years, has not “moved the needle” on improving educational results. I see the value in providing a necessary academic infrastructure that can enable real gains in select programs or with new tools (e.g. adaptive software for remedial math, competency-based education for working adults), but the best the LMS itself can do is get out of the way – do its job quietly, freeing up faculty time, giving students anytime access to course materials and feedback. In aggregate, I have not seen real academic improvements directly tied to the LMS.

I’m still open to looking at programs that contradict my view, but the D2L claim from Lone Star doesn’t work.

  1. Although my comments refer to improvements in aggregate, going beyond pilots at individual schools, this claim would nonetheless be impressive.
  2. Evidence is based on blog posts, webinar, and articles as well as interview of LSC-UP staff; if D2L can produce evidence supporting their claim I will share it here.

The post About Those D2L Claims of LMS Usage Increasing Retention Rates appeared first on e-Literate.

APEX 5.0: There are issues with the configuration of the Static Files in your environment

Patrick Wolf - Thu, 2015-05-14 08:13
After installing Oracle APEX 5.0, do you get the alert “There are issues with the configuration of the Static Files in your environment. Please consult the “Configuring Static File Support” section in the Application Express Installation Guide.” when you try … Continue reading →
Categories: Development

OAM/WebGate troubleshooting : WebGate on Apache/OHS Unable to read the configuration file

Online Apps DBA - Thu, 2015-05-14 04:15
This post is from one of our customer engagements where we implemented and now support the complete Oracle Identity & Access Management stack (Contact Us if you are looking for an Oracle Support or Implementation Partner). When you protect a resource with Oracle Access Manager (OAM), you configure a WebGate on the WebServer (OHS, Apache or IIS) acting as the Policy Enforcement Point (PEP). In OAM […] The post OAM/WebGate troubleshooting : WebGate on Apache/OHS...

This is a content summary only. Visit my website http://onlineAppsDBA.com for full links, other content, and more!
Categories: APPS Blogs

does impdp into a compressed table really compress data?

Yann Neuhaus - Thu, 2015-05-14 00:29

Today at a customer we discussed the following scenario: to refresh a test database, a Data Pump export and import was implemented. To save space, the idea came up to compress the data on the test system. When we checked the documentation we came across the following statement:

Notes on analytic technology, May 13, 2015

DBMS2 - Wed, 2015-05-13 20:38

1. There are multiple ways in which analytics is inherently modular. For example:

  • Business intelligence tools can reasonably be viewed as application development tools. But the “applications” may be developed one report at a time.
  • The point of a predictive modeling exercise may be to develop a single scoring function that is then integrated into a pre-existing operational application.
  • Conversely, a recommendation-driven website may be developed a few pages — and hence also a few recommendations — at a time.

Also, analytics is inherently iterative.

  • Everything I just called “modular” can reasonably be called “iterative” as well.
  • So can any work process of the nature “OK, we got an insight. Let’s pursue it and get more accuracy.”

If I’m right that analytics is or at least should be modular and iterative, it’s easy to see why people hate multi-year data warehouse creation projects. Perhaps it’s also easy to see why I like the idea of schema-on-need.

2. In 2011, I wrote, in the context of agile predictive analytics, that

… the “business analyst” role should be expanded beyond BI and planning to include lightweight predictive analytics as well.

I gather that a similar point is at the heart of Gartner’s new term citizen data scientist. I am told that the term resonates with at least some enterprises. 

3. Speaking of Gartner, Mark Beyer tweeted

In data management’s future “hybrid” becomes a useless term. Data management is mutable, location agnostic and services oriented.

I replied

And that’s why I launched DBMS2 a decade ago, for “DataBase Management System SERVICES”. :)

A post earlier this year offers a strong clue as to why Mark’s tweet was at least directionally correct: The best structures for writing data are the worst for query, and vice-versa.

4. The foregoing notwithstanding, I continue to believe that there’s a large place in the world for “full-stack” analytics. Of course, some stacks are fuller than others, with SaaS (Software as a Service) offerings probably being the only true complete-stack products.

5. Speaking of full-stack vendors, some of the thoughts in this post were sparked by a recent conversation with Platfora. Platfora, of course, is full-stack except for the Hadoop underneath. They’ve taken to saying “data lake” instead of Hadoop, because they believe:

  • It’s a more benefits-oriented than geek-oriented term.
  • It seems to be more popular than the roughly equivalent terms “data hub” or “data reservoir”.

6. Platfora is coy about metrics, but does boast of high growth, and had >100 employees earlier this year. However, they are refreshingly precise about competition, saying they primarily see four competitors — Tableau, SAS Visual Analytics, Datameer (“sometimes”), and Oracle Data Discovery (who they view as flatteringly imitative of them).

Platfora seems to have a classic BI “land-and-expand” kind of model, with initial installations commonly being a few servers and a few terabytes. Applications cited were the usual suspects — customer analytics, clickstream, and compliance/governance. But they do have some big customer/big database stories as well, including:

  • 100s of terabytes or more (but with a “lens” typically being 5 TB or less).
  • 4-5 customers who pressed them to break a previous cap of 2 billion discrete values.

7. Another full-stack vendor, ScalingData, has been renamed to Rocana, for “root cause analysis”. I’m hearing broader support for their ideas about BI/predictive modeling integration. For example, Platfora has something similar on its roadmap.

Related links

  • I did a kind of analytics overview last month, which had a whole lot of links in it. This post is meant to be additive to that one.
Categories: Other

adgendbc.sh fails with java.sql.SQLRecoverableException: No more data to read from socket

Vikram Das - Wed, 2015-05-13 17:11
Jim pinged me with this error today:
on ./adgendbc.sh i get
4:19 PM  Creating the DBC file...
         java.sql.SQLRecoverableException: No more data to read from socket raised validating GUEST_USER_PWD
         java.sql.SQLRecoverableException: No more data to read from socket
4:19 PM  Updating Server Security Authentication
         java.sql.SQLException: Invalid number format for port number
         Database connection to jdbc:oracle:thin:@host_name:port_number:database failed
4:19 PM  to this point, this is what i've tried.
4:19 PM  clean, autoconfig on db tier, autoconfig on cm, same results
4:20 PM  bounced db and listener.. same thing.. nothing i've done has made a difference
I noticed that when this error was coming the DB alert log was showing:
Wed May 13 18:50:51 2015
Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x8] [PC:0x10A2FFBC8, joet_create_root_thread_group()+136] [flags: 0x0, count: 1]
Errors in file /evnapsd1/admin/diag/rdbms/evnapsd1/evnapsd1/trace/evnapsd1_ora_14528.trc  (incident=1002115):
ORA-07445: exception encountered: core dump [joet_create_root_thread_group()+136] [SIGSEGV] [ADDR:0x8] [PC:0x10A2FFBC8] [Address not mapped to object] []
Incident details in: /evnapsd1/admin/diag/rdbms/evnapsd1/evnapsd1/incident/incdir_1002115/evnapsd1_ora_14528_i1002115.trc
Metalink search revealed this article:
Java Stored Procedure Fails With ORA-03113 And ORA-07445[JOET_CREATE_ROOT_THREAD_GROUP()+145] (Doc ID 1995261.1)
It seems that the post-patch steps for the OJVM PSU patch had not been done. The steps given in the above note had not been completed, so we completed them, and adgendbc.sh ran successfully after that.

1. Set the following init parameters so that the JIT and job queue processes do not start.

If spfile is used:

SQL> alter system set java_jit_enabled = FALSE;
SQL> alter system set "_system_trig_enabled"=FALSE;
SQL> alter system set JOB_QUEUE_PROCESSES=0;

2. Startup instance in restricted mode and run postinstallation step.

SQL> startup restrict

3. Run the postinstallation steps of the OJVM PSU (Step 3.3.2 from the readme).

Postinstallation

The following steps load modified SQL files into the database. For an Oracle RAC environment, perform these steps on only one node.

  1. Install the SQL portion of the patch by running the following commands. For an Oracle RAC environment, reload the packages on one of the nodes.

     cd $ORACLE_HOME/sqlpatch/19282015
     sqlplus /nolog
     SQL> CONNECT / AS SYSDBA
     SQL> @postinstall.sql

  2. After installing the SQL portion of the patch, some packages could become INVALID. They will get recompiled upon access, or you can run utlrp.sql to get them back into a VALID state.

     cd $ORACLE_HOME/rdbms/admin
     sqlplus /nolog
     SQL> CONNECT / AS SYSDBA
     SQL> @utlrp.sql

4. Reset modified init parameters

SQL> alter system set java_jit_enabled = true;
SQL> alter system set "_system_trig_enabled"=TRUE;
SQL> alter system set JOB_QUEUE_PROCESSES=10;
        -- or original JOB_QUEUE_PROCESSES value

5. Restart the instance as normal.

6. Now execute the Java stored procedure.

Ran adgendbc.sh and it worked fine.
Categories: APPS Blogs

Ingest a Single Table from Microsoft SQL Server Data into Hadoop

Pythian Group - Wed, 2015-05-13 15:13
Introduction

This blog describes a best-practice approach to ingesting data from SQL Server into Hadoop. The case scenario is described as under:

  • Single table ingestion (no joins)
  • No partitioning
  • Complete data ingestion (trash old and replace new)
  • Data stored in Parquet format
Pre-requisites

This example has been tested using the following versions:

  • Hadoop 2.5.0-cdh5.3.0
  • Hive 0.13.1-cdh5.3.0
  • Sqoop 1.4.5-cdh5.3.0
  • Oozie client build version: 4.0.0-cdh5.3.0
Process Flow Diagram

[process_flow1: ingestion process flow diagram]

Configuration
  • Create the following directory/file structure (one per data ingestion process). For a new ingestion program please adjust the directory/file names as per requirements. Make sure to replace the <table_name> tag with your table name.

<table_name>_ingest
  + hive-<table_name>
      create-schema.hql
  + oozie-properties
      <table_name>.properties
  + oozie-<table_name>-ingest
      + lib
          kite-data-core.jar
          kite-data-mapreduce.jar
          sqljdbc4.jar
      coordinator.xml
      impala_metadata.sh
      workflow.xml
  • The ingestion process is invoked using an oozie workflow. The workflow invokes all steps necessary for data ingestion including pre-processing, ingestion using sqoop and post-processing.
oozie-<table_name>-ingest
This directory stores all files that are required by the oozie workflow engine. These files should be stored in HDFS for proper functioning of oozie.

oozie-properties
This directory stores the <table_name>.properties file. This file stores the oozie variables such as database users, name node details etc. used by the oozie process at runtime.

hive-<table_name>
This directory stores a file called create-schema.hql which contains the schema definition of the HIVE tables. This file is required to be run in HIVE only once.
  • Configure files under oozie-<table_name>-ingest
1.   Download kite-data-core.jar and kite-data-mapreduce.jar files from http://mvnrepository.com/artifact/org.kitesdk
2.  Download sqljdbc4.jar from https://msdn.microsoft.com/en-us/sqlserver/aa937724.aspx

3.  Configure coordinator.xml. Copy and paste the following XML.

<coordinator-app name="<table_name>-ingest-coordinator" frequency="${freq}" start="${startTime}" end="${endTime}" timezone="UTC" xmlns="uri:oozie:coordinator:0.2">
  <action>
    <workflow>
      <app-path>${workflowRoot}/workflow.xml</app-path>
      <configuration>
        <property>
          <name>partition_name</name>
          <value>${coord:formatTime(coord:nominalTime(), 'YYYY-MM-dd')}</value>
        </property>
      </configuration>
    </workflow>
  </action>
</coordinator-app>

4.  Configure workflow.xml. This workflow has three actions:

a) mv-data-to-old – Deletes old data before refreshing new
b) sqoop-ingest-<table_name> – Sqoop action to fetch table from SQL Server
c) invalidate-impala-metadata – Revalidate Impala data after each refresh

Copy and paste the following XML.

<workflow-app name="<table_name>-ingest" xmlns="uri:oozie:workflow:0.2">

  <start to="mv-data-to-old" />

  <action name="mv-data-to-old">
    <fs>
      <delete path='${sqoop_directory}/<table_name>/*.parquet' />
      <delete path='${sqoop_directory}/<table_name>/.metadata' />
    </fs>
    <ok to="sqoop-ingest-<table_name>"/>
    <error to="kill"/>
  </action>

  <action name="sqoop-ingest-<table_name>">
    <sqoop xmlns="uri:oozie:sqoop-action:0.3">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <prepare>
        <delete path="${nameNode}/user/${wf:user()}/_sqoop/*" />
      </prepare>
      <configuration>
        <property>
          <name>mapred.job.queue.name</name>
          <value>${queueName}</value>
        </property>
      </configuration>
      <arg>import</arg>
      <arg>--connect</arg>
      <arg>${db_string}</arg>
      <arg>--table</arg>
      <arg>${db_table}</arg>
      <arg>--columns</arg>
      <arg>${db_columns}</arg>
      <arg>--username</arg>
      <arg>${db_username}</arg>
      <arg>--password</arg>
      <arg>${db_password}</arg>
      <arg>--split-by</arg>
      <arg>${db_table_pk}</arg>
      <arg>--target-dir</arg>
      <arg>${sqoop_directory}/<table_name></arg>
      <arg>--as-parquetfile</arg>
      <arg>--compress</arg>
      <arg>--compression-codec</arg>
      <arg>org.apache.hadoop.io.compress.SnappyCodec</arg>
    </sqoop>
    <ok to="invalidate-impala-metadata"/>
    <error to="kill"/>
  </action>

  <action name="invalidate-impala-metadata">
    <shell xmlns="uri:oozie:shell-action:0.1">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <configuration>
        <property>
          <name>mapred.job.queue.name</name>
          <value>${queueName}</value>
        </property>
      </configuration>
      <exec>${impalaFileName}</exec>
      <file>${impalaFilePath}</file>
    </shell>
    <ok to="fini"/>
    <error to="kill"/>
  </action>

  <kill name="kill">
    <message>Workflow failed with error message ${wf:errorMessage(wf:lastErrorNode())}</message>
  </kill>

  <end name="fini" />

</workflow-app>

5. Configure impala_metadata.sh. This file will execute commands to revalidate impala metadata after each restore. Copy and paste the following data.

#!/bin/bash
export PYTHON_EGG_CACHE=./myeggs
impala-shell -i <hive_server> -q "invalidate metadata <hive_db_name>.<hive_table_name>"
  • Configure files under oozie-properties. Create file oozie.properties with contents as under. Edit the parameters as per requirements.
# Coordinator schedulings
freq=480
startTime=2015-04-28T14:00Z
endTime=2029-03-05T06:00Z

jobTracker=<jobtracker>
nameNode=hdfs://<namenode>
queueName=<queue_name>

rootDir=${nameNode}/user/<user>/oozie
workflowRoot=${rootDir}/<table_name>-ingest

oozie.use.system.libpath=true
oozie.coord.application.path=${workflowRoot}/coordinator.xml

# Sqoop settings
sqoop_directory=${nameNode}/data/sqoop

# Hive/Impala Settings
hive_db_name=<hive_db_name>
impalaFileName=impala_metadata.sh
impalaFilePath=/user/oozie/<table_name>-ingest/impala_metadata.sh

# MS SQL Server settings
db_string=jdbc:sqlserver://<sql_server_host>;databaseName=<sql_server_db_name>
db_username=<sql_server_username>
db_password=<sql_server_password>
db_table=<table_name>
db_columns=<columns>
  • Configure files under hive-<table_name>. Create a new file create-schema.hql with contents as under.
DROP TABLE IF EXISTS <hive_table_name>;

CREATE EXTERNAL TABLE <hive_table_name> (<columns>)
STORED AS PARQUET
LOCATION 'hdfs://<namenode>/data/sqoop/<table_name>';

Deployment
  • Create new directory in HDFS and copy files
$ hadoop fs -mkdir /user/<user>/oozie/<table_name>-ingest
$ hadoop fs -copyFromLocal <directory>/<table_name>/oozie-<table_name>-ingest/lib /user/<user>/oozie/<table_name>-ingest
$ hadoop fs -copyFromLocal <directory>/<table_name>/oozie-<table_name>-ingest/coordinator.xml /user/<user>/oozie/<table_name>-ingest
$ hadoop fs -copyFromLocal <directory>/<table_name>/oozie-<table_name>-ingest/impala_metadata.sh /user/<user>/oozie/<table_name>-ingest
$ hadoop fs -copyFromLocal <directory>/<table_name>/oozie-<table_name>-ingest/workflow.xml /user/<user>/oozie/<table_name>-ingest
  • Create new directory in HDFS for storing data files
$ hadoop fs -mkdir /user/SA.HadoopPipeline/oozie/<table_name>-ingest
$ hadoop fs -mkdir /data/sqoop/<table_name>
  • Now we are ready to create the HIVE schema. Go to URL http://<hive_server>:8888/beeswax/#query.
a. Choose existing database on left or create new.
b. Paste contents of create-schema.hql in Query window and click Execute.
c. You should now have an external table in HIVE pointing to data in hdfs://<namenode>/data/sqoop/<table_name>
  • Create Oozie job
a. Submit the coordinator job using the oozie command line.
$ oozie job -run -config /home/<user>/<directory>/<table_name>/oozie-properties/oozie.properties

Validation and Error Handling
  • At this point an oozie job should be created. To validate the oozie job creation open URL http://<hue_server>:8888/oozie/list_oozie_coordinators. Expected output as under. In case of error please review the logs for recent runs.
 [screenshot: oozie1 – Oozie coordinators list]
  • To validate the oozie job is running open URL http://<hue_server>:8888/oozie/list_oozie_workflows/ . Expected output as under. In case of error please review the logs for recent runs.
 [screenshot: oozie2 – Oozie workflows list]
  • To validate data in HDFS execute the following command. You should see a file with *.metadata extension and a number of files with *.parquet extension.
$ hadoop fs -ls /data/sqoop/<table_name>/
  • Now we are ready to select data in HIVE or Impala.
    For HIVE go to URL http://<hue_server>:8888/beeswax/#query
    For Impala go to URL http://<hue_server>:8888/impala
    Choose the newly created database on left and execute the following SQL – select * from <hive_table_name> limit 10
    You should see the data from the newly ingested table.
Categories: DBA Blogs

The Ping of Mild Annoyance Attack and other Linux Adventures

The Anti-Kyte - Wed, 2015-05-13 14:46

Sometimes, it’s the simple questions that are the most difficult to answer.
For example, how many votes does it take to get an MP elected to the UK Parliament ?
The answer actually ranges from around 20,000 to several million depending on which party said MP is standing for.
Yes, our singular electoral system has had another outing. As usual, one of the main parties has managed to win a majority of seats despite getting rather less than half of the votes cast ( in this case 37%).

Also, as has become traditional, they have claimed to have “a clear instruction from the British People”.
Whenever I hear this, can’t help feeling that the “instruction” is something along the lines of “don’t let the door hit you on the way out”.

Offering some respite from the mind-bending mathematics that is a UK General Election, I’ve recently had to ask a couple of – apparently – simple questions with regard to Linux…

How do I list the contents of a zip file on Linux ?

More precisely, how do I do this on the command line ?

Let’s start with a couple of csv files. First, questions.csv :

question number, text
1,How many pings before it gets annoying ?
2,Where am I ?
3,How late is my train ?
4,What's in the zip ?
5,Fancy a game of Patience ?

Now answers.csv :

answer number, answer
1,6
2,Try hostname
3,Somewhere between a bit and very
4,Depends what type of zip
5,No!

Now we add these into a zip archive :

zip wisdom.zip questions.csv answers.csv
  adding: questions.csv (deflated 21%)
  adding: answers.csv (deflated 10%)

If you now want to check the contents of wisdom.zip, rather than finding the appropriate switch for the zip command, you actually need to use unzip….

unzip -l wisdom.zip
Archive:  wisdom.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
      156  04-29-2015 19:21   questions.csv
      109  04-29-2015 19:23   answers.csv
---------                     -------
      265                     2 files

If you want to go further and actually view the contents of one of the files in the zip….

unzip -c wisdom.zip answers.csv
Archive:  wisdom.zip
  inflating: answers.csv             
answer number, answer
1,6
2,Try hostname
3,Somewhere between a bit and very
4,Depends what type of zip
5,No!
The thing about PING

Say you have a script that checks that another server on the network is available, as a prelude to transferring files to it.
On Solaris, it may well do this via the simple expedient of…

ping <servername>

Now, whilst ping has been around for decades and is implemented on all major operating systems, the implementations differ in certain subtle ways.
Running it with no arguments on Solaris will simply issue a single ping to check if the target machine is up.
On Windows, it will attempt to send and receive 4 packets and report the round-trip time for each.
On Linux however….

ping localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.032 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.087 ms
64 bytes from localhost (127.0.0.1): icmp_seq=3 ttl=64 time=0.088 ms
64 bytes from localhost (127.0.0.1): icmp_seq=4 ttl=64 time=0.098 ms
64 bytes from localhost (127.0.0.1): icmp_seq=5 ttl=64 time=0.096 ms
64 bytes from localhost (127.0.0.1): icmp_seq=6 ttl=64 time=0.097 ms
64 bytes from localhost (127.0.0.1): icmp_seq=7 ttl=64 time=0.095 ms
64 bytes from localhost (127.0.0.1): icmp_seq=8 ttl=64 time=0.099 ms
64 bytes from localhost (127.0.0.1): icmp_seq=9 ttl=64 time=0.096 ms
64 bytes from localhost (127.0.0.1): icmp_seq=10 ttl=64 time=0.100 ms
64 bytes from localhost (127.0.0.1): icmp_seq=11 ttl=64 time=0.066 ms
^C
--- localhost ping statistics ---
11 packets transmitted, 11 received, 0% packet loss, time 9997ms
rtt min/avg/max/mdev = 0.032/0.086/0.100/0.022 ms

Yep, it’ll just keep going until you cancel it.

If you want to avoid initiating what could be considered a very half-hearted Denial of Service attack on your own server, then it’s worth remembering that you can specify the number of packets that ping will send.
So…

ping -c1 localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.079 ms

--- localhost ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms

…is probably more what you’re after. This will exit with 0 if the target is up, as can be demonstrated using the script below (called you_up.sh)…

#!/bin/sh
ping -c1 localhost >/dev/null
if [ $? -ne 0 ]; then
    echo 'Something has gone horribly wrong'
else
    echo 'All OK'
fi
exit 0

Run this and we get…

sh you_up.sh
All OK

The long-suffering British electorate isn’t getting too much of a break. We now have the prospect of a Referendum on the UK’s EU membership to look forward to. On the plus side, it should be a bit easier to work out which side wins.


Filed under: Linux, Shell Scripting Tagged: ping -c, reading contents of a zip archive, specify number of packets to send using ping, unzip -c, unzip -l, zip

Simple C program for testing disk performance

Bobby Durrett's DBA Blog - Wed, 2015-05-13 13:48

I dug up a simple C program that I wrote years ago to test disk performance.  I hesitated to publish it because it is rough and limited in scope and other more capable tools exist. But, I have made good use of it so why not share it with others?  It takes a file name and the size of the file in megabytes.  It sequentially writes the file in 64 kilobyte chunks.  It opens the file in synchronous mode so it must write the data to disk before returning to the program. It outputs the rate in bytes/second that the program wrote to disk.

Here is a zip of the code: zip

There is no error checking so if you put in an invalid file name you get no message.
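
For readers who just want to see the shape of the program without downloading the zip, here is a minimal sketch along the same lines (my illustration, not the code in the zip): it opens the target file synchronously with O_SYNC, writes the requested number of megabytes in 64 kilobyte chunks, and prints the write rate in bytes per second. Swapping O_SYNC for O_DSYNC, as discussed further down, gives the same result for this test.

/* createfile_sketch.c - sketch of a synchronous sequential-write test.
 * Illustration only, not the program in the zip above.
 * Build: cc -o createfile_sketch createfile_sketch.c
 * Usage: ./createfile_sketch /path/to/testfile <size_in_MB>
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define CHUNK (64 * 1024)                 /* 64 kilobyte writes */

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s filename size_in_MB\n", argv[0]);
        return 1;
    }

    long total = atol(argv[2]) * 1024L * 1024L;
    char *buf = malloc(CHUNK);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }
    memset(buf, 'x', CHUNK);

    /* O_SYNC forces each write to reach disk before returning;
       O_DSYNC behaves the same for this test. */
    int fd = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC | O_SYNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    time_t start = time(NULL);
    long written = 0;
    while (written < total) {
        ssize_t n = write(fd, buf, CHUNK);
        if (n < 0) {
            perror("write");
            return 1;
        }
        written += n;
    }
    close(fd);

    long elapsed = (long)(time(NULL) - start);
    if (elapsed == 0)
        elapsed = 1;                      /* avoid divide-by-zero on tiny files */

    printf("Bytes per second written = %ld\n", written / elapsed);
    free(buf);
    return 0;
}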

Here is how I ran it in my HP-UX and Linux performance comparison tests:

HP-UX:

$ time ./createfile /var/opt/oracle/db01/bobby/test 1024
Bytes per second written = 107374182

real 0m10.36s
user 0m0.01s
sys 0m1.79s

Linux:

$ time ./createfile /oracle/db01/bobby/test 1024
Bytes per second written = 23860929

real 0m45.166s
user 0m0.011s
sys 0m2.472s

It makes me think that my Linux system’s write I/O is slower.  I found a set of arguments to the utility dd that seems to do the same thing on Linux:

$ dd if=/dev/zero bs=65536 count=16384 of=test oflag=dsync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 38.423 s, 27.9 MB/s

But I couldn’t find an option like dsync on the HP-UX version of dd.  In any case, it was nice to have the C code so I could experiment with various options to open().  I used tusc on HP-UX and strace on Linux to trace some database activity against the system tablespace.  By grepping for open I found the options Oracle uses:

hp trace

open("/var/opt/oracle/db01/HPDB/dbf/system01.dbf", O_RDWR|0x800|O_DSYNC, 030) = 8

linux trace

open("/oracle/db01/LINUXDB/dbf/system01.dbf", O_RDWR|O_DSYNC) = 8

So, I modified my program to use the O_DSYNC flag and it was the same as using O_SYNC.  But, the point is that having a simple C program lets you change these options to open() directly.

I hope this program will be useful to others as it has to me.

– Bobby

p.s. Similar program for sequentially reading through file, but with 256 K buffers: zip
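
Again as an illustration rather than the code in that zip, a sequential read test with 256 kilobyte buffers might look something like this:

/* readfile_sketch.c - sketch of a sequential read test with 256 K buffers.
 * Illustration only, not the program in the zip above.
 * Usage: ./readfile_sketch /path/to/testfile
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define CHUNK (256 * 1024)                /* 256 kilobyte reads */

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s filename\n", argv[0]);
        return 1;
    }

    char *buf = malloc(CHUNK);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    time_t start = time(NULL);
    long total = 0;
    ssize_t n;
    while ((n = read(fd, buf, CHUNK)) > 0)   /* read until end of file */
        total += n;
    close(fd);

    long elapsed = (long)(time(NULL) - start);
    if (elapsed == 0)
        elapsed = 1;

    printf("Bytes per second read = %ld\n", total / elapsed);
    free(buf);
    return 0;
}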

Categories: DBA Blogs