
Feed aggregator

Do you encounter problems with JD Edwards 9.1 Standalone DEMO Installation?

JD Edwards 9.1 Standalone DEMO gives you an insight into JD Edwards EnterpriseOne, an integrated applications suite of comprehensive enterprise resource planning software that combines business...

We share our skills to maximize your revenue!
Categories: DBA Blogs

My Weekend with Apple Watch

Oracle AppsLab - Mon, 2015-05-04 08:33

Editor’s note: Here’s our first Apple Watch review from Thao (@thaobnguyen), our head of research and design. Look for a companion review from Noel (@noelportugal) later this week.

Thanks, Apple! I got an Apple Watch (38mm) delivered on Friday, April 24.


The Watch packaging, quite different than the Watch Sport packaging.

Full disclosure, I’m a long time Apple user. My household runs on Apple so many would say I’m an Apple-fan girl. Even so, I’m amazed by the excitement Apple generates and surprised by my own excitement over the Watch.

I’ve used other smart watches for years, but why was I so eager to get my hands on an Apple Watch? Perhaps it was all the buzz about limited supply and high demand, and I could be among the “few” [thousands] to get the Watch first. Whatever the reason, I feel pretty lucky to be among the first customers to receive an Apple Watch.

After spending the weekend with the watch, I would say I like it, but I’m not in love with it. It hasn’t reached the status of being invaluable. For now, I view it as a pretty awesome iPhone accessory and wearable. I feel constantly in touch and reachable – I don’t think I will ever miss a text message or email again.


Many apps have been updated to support Apple Watch. The Watch apps have much simpler interactions than iPhone apps, which I’m starting to explore and get used to. When an app does this well, it feels cool to be able to do things right on the watch. When it doesn’t (for example, by not providing enough information), I’m sad I need to reach for my iPhone.

Lastly, Apple Watch consolidates the features of my wearables into one, so I am giving up my other wearables for now. I wore a smart watch that was primarily a golf range finder (and I look forward to trying Apple Watch on the golf course) and a separate fitness tracker.

I will just be wearing the one Apple Watch now. I’m curious to see how my behavior and device usage pattern changes over time.

Will I become dependent upon and attached to Apple Watch as I am with my iPhone?

Finally, let me answer a few common questions I’ve received so far:

  • The watch does last all day. I start the day with 100% battery and end the day between 20-40% battery. However, my iPhone battery seems to be taking a hit.
  • Yes, I can make and take a phone call on the watch. The sound quality is good and so is the mic. Caveat: I did not attempt to have a conversation in a loud setting like a busy restaurant.
  • No fat finger issues. The buttons and app icons are seemingly small, but I pretty much tap and select things on the watch without error.
  • Pairing between Apple Watch and iPhone was easy, and the range is good. I could be wearing the watch in one room of my house while the iPhone was in another room and had no problems with them being in range of each other.
  • Cool factor – surprisingly no! Only one person asked me if I was wearing an Apple Watch. Contrasted with other smart watches I have worn, where I would always be asked “what is that?” I’m guessing it is because the Apple Watch fits me and looks like a normal watch. It doesn’t draw attention as being oversized for my wrist.


Please leave comments as to your own use, or tips and tricks on getting the most out of smart watches.

Getting started with Sales Cloud (Updated)

Angelo Santagata - Mon, 2015-05-04 08:24
Hey all, I've just revised the Getting Started with Oracle Sales Cloud Integrations blog entry with a few more links.

EMC World 2015 - Day 1 at Momentum

Yann Neuhaus - Mon, 2015-05-04 08:07

This is the first day of my first EMC World conference, and especially of Momentum, which covers the Enterprise Content Division (ECD) products, solutions, strategies and so on. The start was great: being in Las Vegas, where you have the feeling you are on another planet, I had the same feeling during the General Session and the ECD Keynote; each time good explanations coupled with good shows.

The information I got was interesting, and some questions came to my mind. Questions that I hope can be answered in the next days.

InfoArchive

Before attending the General Session I went to another one which was about EMC InfoArchive. Today I work mainly with the Documentum Content Server and products around it like xPlore, ADTS, D2 and so on.

To be prepared for future customer requests and challenges, I wanted to see what is behind InfoArchive. Here are some points:

- One main goal of using InfoArchive is to reduce the cost of the storage and to keep the assets.

- Once legacy applications are shut down, you can archive their data into InfoArchive. You can also use it to archive data from active applications, where you can build rules to define which data will be moved to InfoArchive. And this can be done for flat and complex records as well as, of course, for document records.

- When the data is saved into InfoArchive, you can use XQuery and XForms to retrieve the data and display it in the way the user wants to see it.

That's it for the general overview. From a technical point of view, here is some information:

- The Archive Service is built using a Data Service (xDB data server) and/or a Content Server. In case you have to archive only metadata, the xDB service is sufficient.

- The storage to be used is obviously EMC storage, but other storage can also be used, meaning this solution can be implemented in more types of infrastructure.

- To the question of what is archived, the answer is a SIP (Submission Information Package). You have a SIP descriptor and SIP data (metadata and/or content).

- LWSO objects are stored to use less storage

- The search is done first against the AIP (Archive Info Package) and, once the object is found, against the AIU (Archive Info Unit). There is no full-text search available on the InfoArchive layer; the reason is that an archive system generally does not use it.

- RPS can be used to manage the retention.

Open questions

So that for the "facts", now there are some other open points which could be raised in case InforArchive will be used. You can save you data in normal XML formats but you can also define how the data are saved and how you want to search them. In this case who will manage that, the Record&Archive team or do you need first a business analyste? Can the defined model easily be changed for the current archived information? There are technical questions but I think the organization has first to be defined to have a successfull implementation of InfoArchive

Again, some questions come to my mind. And again, let's see if I can get some answers in ... the next days.

Get the main Administration Information from SQL Server with PowerShell

Yann Neuhaus - Mon, 2015-05-04 07:48

In my previous blog Automate SQL Server Administration with PowerShell – How to retrieve SQL Server services?, I presented you the first step of the SQL Server administration through an automated process.

This blog is a follow-up to the previous one, and it will focus on retrieving information about a SQL Server instance with PowerShell.

 

Disclaimer: I am not a developer but a SQL Server dba. If you find errors or some ways of improvement, I will be glad to read your comments!

 

List all SQL Server instances

To be able to proceed for all the instances, you can easily get all your instance names with this function:

[Screenshot: Get-SQLInstances function]
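
The function itself is only published as a screenshot, so here is a minimal sketch of what such a Get-SQLInstances function might look like, assuming the instance names are read from the standard registry key on the local host (the function name matches the screenshot, the body is my assumption):

# Hypothetical sketch - the original function is only available as a screenshot
function Get-SQLInstances
{
    # Every value name under this key is an installed SQL Server instance name
    $key = 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL'
    if (Test-Path $key)
    {
        (Get-Item $key).GetValueNames()
    }
}

Get-SQLInstances    # returns e.g. MSSQLSERVER, INST01, ...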

 

Retrieve SQL Server instance information

In my example, I execute my script on the machine hosting my SQL Server instance.

I use SMO objects to access the instance information. But you need the full instance name, as follows:

[Screenshot: full instance name]

I only give the instance name as a parameter because I execute my script on a local server; otherwise, I would also need to give the server name as a parameter.

 

First I initialize my SMO object of my instance like this:

[Screenshot: instance SMO object initialization]
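
The screenshots are not readable in this feed, so here is a hedged sketch of building the full instance name and creating the SMO Server object; the instance name 'INST01' and the variable names are hypothetical:

# Load the SMO assembly (LoadWithPartialName is old style but still works on SQL Server 2012/2014 hosts)
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo') | Out-Null

$instanceName = 'INST01'             # hypothetical named instance
$serverName   = $env:COMPUTERNAME    # local server, as in this post

# A default instance is addressed by the server name alone, a named instance as SERVER\INSTANCE
if ($instanceName -eq 'MSSQLSERVER') { $fullInstanceName = $serverName }
else { $fullInstanceName = "$serverName\$instanceName" }

# The SMO Server object holds the instance information used in the rest of this post
$smoServer = New-Object Microsoft.SqlServer.Management.Smo.Server $fullInstanceName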

 

This SMO object contains SQL Server instance main information. To list all properties and the object methods, proceed as follows:

[Screenshot: instance properties and methods]
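
Get-Member is the standard PowerShell way to produce such a listing; whether the screenshot uses exactly this command is an assumption (the $smoServer variable comes from the sketch above):

$smoServer | Get-Member -MemberType Property
$smoServer | Get-Member -MemberType Method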

 

To list the general information of the instance, you can proceed like this:

[Screenshot: instance general information]

To list the directory paths related to the instance, here is an example:

[Screenshot: instance directories]

To list important instance configuration, here is an example:

[Screenshot: instance configuration]
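
Since the three screenshots above cannot be read here, the following sketch shows the kind of properties they likely list; the property names are genuine SMO Server properties, but the exact selection is my assumption:

# General information
$smoServer | Select-Object NetName, InstanceName, Edition, VersionString, ProductLevel, Collation

# Directory paths related to the instance
$smoServer | Select-Object RootDirectory, DefaultFile, DefaultLog, BackupDirectory, ErrorLogPath

# Important configuration options live under the Configuration property
$smoServer.Configuration.MaxServerMemory.ConfigValue
$smoServer.Configuration.XPCmdShellEnabled.ConfigValue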

 

By formatting the information you retrieve in the instance SMO object, you can generate reports, audit your environment or whatever!

The following capture is an existing dashboard from our Database Management Kit (DMK).

[Screenshot: DMK instance information dashboard]

 

Next steps

The SMO object for the SQL Server instance has a limited number of properties and methods. Sometimes you need information which is not present in the object. In this case, you must use the "sqlcmd" command and retrieve your information by using T-SQL.

Here is the way to proceed:

[Screenshot: invoke sqlcmd command]
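
One common way to do this from PowerShell is the Invoke-Sqlcmd cmdlet shipped with the sqlps module; whether the screenshot uses exactly this cmdlet is an assumption, and the query below is only an illustration (it reuses the $fullInstanceName variable from the earlier sketch):

Import-Module sqlps -DisableNameChecking

$query = "SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
                 SERVERPROPERTY('IsClustered')    AS IsClustered"

Invoke-Sqlcmd -ServerInstance $fullInstanceName -Query $query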

To retrieve any SQL Server instance information, the "sqlcmd" command will always work. You can also use it to modify the instance configuration.

 

I hope this blog will help you in your work. In my next blog, I will show you how to access your database information with PowerShell.

APEX 5.0: Custom Favicon for Applications using Universal Theme

Patrick Wolf - Mon, 2015-05-04 06:08
For applications which are using Universal Theme you don’t have to modify the Page Template anymore if you want to replace the default favicon with a custom one. Instead follow these steps: Go to Shared Components Click Application Definition Attributes (in Application … Continue reading →
Categories: Development

I love Live Demos – how about you?

The Oracle Instructor - Mon, 2015-05-04 05:59

Tired of boring slide-shows? Join me for free to see Oracle core technology live in action!

Live demonstrations have always been a key part of my classes, because I consider them one of the best ways to teach.

This is your opportunity to have a glimpse into my classroom and watch a demo just as I have delivered it there.

Apparently, not many speakers are keen to do things live, so the term Demonar (Demonstration + Seminar) was waiting for me to invent it :-)

A positive effect on your attitude towards LVCs and Oracle University Streams with their live webinars is intended, since the setting and platform are very similar there.


Categories: DBA Blogs

Get SQL Server services with PowerShell

Yann Neuhaus - Mon, 2015-05-04 02:13
 

SQL Server Configuration Manager and SQL Server Management Studio are the main tools to administrate the components of SQL Server. They are very convenient to use and pretty complete.
But as soon as you want an automated process, these tools have their limitations. Nevertheless, there is a solution: PowerShell!

This blog introduces a first step towards an automation process of SQL Server administration. I will retrieve all SQL Server services related to a specific instance name.

The process will always be similar: it uses the SMO WMI server PowerShell object.

 

Disclaimer: I am not a developer but a SQL Server dba. If you find errors or some ways of improvement, I will be glad to read your comments!

 

SQL Engine

To retrieve the SQL Engine service for a specific instance name:

[Screenshot: Get-SQLEngine function]
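
The original function is only available as a screenshot, so here is a hedged sketch of how such a Get-SQLEngine function could be written with the SMO WMI ManagedComputer object mentioned above; the parameter name and the example instance are assumptions:

# Load the SMO WMI assembly
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SqlWmiManagement') | Out-Null

function Get-SQLEngine
{
    param([string]$InstanceName = 'MSSQLSERVER')

    $managedComputer = New-Object Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer
    # The engine service is named MSSQLSERVER for the default instance, MSSQL$<name> otherwise
    if ($InstanceName -eq 'MSSQLSERVER') { $serviceName = 'MSSQLSERVER' }
    else { $serviceName = "MSSQL`$$InstanceName" }

    $managedComputer.Services | Where-Object { $_.Name -eq $serviceName }
}

Get-SQLEngine -InstanceName 'INST01'    # hypothetical instance name

The other retrieval functions in this post follow the same pattern; only the Windows service name changes (for example SQLAgent$<instance> for the Agent of a named instance, or SQLBrowser for the Browser).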

SQL Agent

To retrieve the SQL Agent service for a specific instance name:

[Screenshot: Get-SQLAgent function]

SQL Full-text Filter

To retrieve the SQL Full-text Filter service for a specific instance name:

[Screenshot: Get-SQLFullTextFilter function]

SQL Browser

To retrieve the SQL Browser service:

[Screenshot: Get-SQLBrowser function]

SQL Analysis

To retrieve the SQL Analysis service for a specific instance name:

[Screenshot: Get-SQLAnalysis function]

SQL Reporting

To retrieve the SQL Reporting service for a specific instance name:

[Screenshot: Get-SQLReporting function]

SQL Integration

To retrieve the SQL Integration service:

[Screenshot: Get-SQLIntegration function]

Service Object

Each function returns an object with the following properties and methods:

[Screenshot: service object properties and methods]

You are able to start, restart or stop your service. But you can also retrieve specific information such as the Service Account or the Start Mode.
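
Building on the hypothetical Get-SQLEngine sketch above, typical usage looks like this:

$engine = Get-SQLEngine -InstanceName 'INST01'
$engine.ServiceAccount        # account running the service
$engine.StartMode             # Auto, Manual or Disabled
$engine.Stop();  $engine.Refresh()
$engine.Start(); $engine.Refresh()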

Next Step

If you do not want to proceed for just a specific instance, but for all instances, you can list all instance names this way:

[Screenshot: Get-SQLInstances function]

Then, with your list of instance names, you loop over them, calling each function. Do not forget to test whether the returned service exists (by checking whether it is null).
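
A sketch of such a loop, assuming a Get-SQLInstances function like the one in the screenshot above and the hypothetical Get-SQLEngine from earlier; only services that actually exist are kept:

$allServices = foreach ($instance in Get-SQLInstances)
{
    $service = Get-SQLEngine -InstanceName $instance
    if ($service -ne $null) { $service }
}
$allServices | Select-Object Name, ServiceState, ServiceAccount, StartMode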

 

To finish my article: all these functions are part of our Database Management Kit (DMK), developed by our team. We use it to access common and standard information faster, but also to automate processes.

For example, the DMK is able (in just one command!) to make a security audit of your SQL Server environment, by following the best practices from Microsoft and from our experts. A report is generated at the end of the audit to list all the security points to review.

Webcast - Oracle Database Backup Service

As Oracle continues the Oracle Cloud expansion, it helps organizations more rapidly adopt and utilize hybrid cloud solutions, which can securely and seamlessly integrate public cloud solutions...

We share our skills to maximize your revenue!
Categories: DBA Blogs

theshortenspot on twitter!

Anthony Shorten - Sun, 2015-05-03 19:02

If you want to be kept up to date on when a new article is published, I recommend that you subscribe to the twitter account for theshortenspot. Any time a new article is posted, a tweet is also added to announce the article (with links to the article).

The twitter account is https://twitter.com/theshortenspot. Use your favorite twitter client (or just the browser) to view the tweets...

Secure By Default in FW 4.3.0.0.1

Anthony Shorten - Sun, 2015-05-03 18:53

One of the new features of Oracle Utilities Application Framework V4.3.0.0.1 for Oracle WebLogic customers is that new installations of the product will use HTTPS rather than HTTP by default. In past releases it was always possible to use HTTPS instead of HTTP, but it was an opt-in decision. In this release, since HTTPS is provided as the default option, the decision is an opt-out if you do not want to use the HTTPS installation option.

Customers upgrading will not be affected as the configuration decision is retained across upgrades.

If you do use the default HTTPS setup you should be aware of the following:

  • By default, a demonstration development certificate is provided with Oracle WebLogic. This certificate is limited in its scope and is only provided to complete a basic HTTPS configuration within Oracle WebLogic. The certificate will be detected as not valid by your browser. This is not a bug but intentional behavior, as Oracle cannot issue production quality certificates in Oracle WebLogic as part of its base installation. If the default certificate is used, developers can accept the certificate according to their browser preferences (Mozilla Firefox will ask you to add an exception and Internet Explorer will ask you to confirm that it is OK to proceed). If you proceed, the browser will indicate visually on the address bar that you are using a digital certificate (this will vary from browser to browser).
  • It is HIGHLY recommended that customers who want to use the HTTPS functionality obtain a valid digital certificate from a valid certificate issuing authority and implement the certificate as per the Installation Guide or WebLogic documentation.
  • To find out the valid certificate issuing authorities supported by the Java version you have, use the following command:
keytool -list -v -keystore $JAVA_HOME/jre/lib/security/cacerts

The bottom line is that if you want to use HTTPS then get a valid certificate for your organization, otherwise you can opt-out and use HTTP if that is valid for your site. Typically, most installations are expected to use HTTP for non-production and HTTPS for production to minimize costs.

Updated Whats New in FW4 whitepaper

Anthony Shorten - Sun, 2015-05-03 18:24

The What's New in FW4 whitepaper summarizes all the major changes from Oracle Utilities Application Framework V2.2 to Oracle Utilities Application Framework V4.3.0.0.1.

It has been updated for new functionality and changes implemented in Oracle Utilities Application Framework V4.3.0.0.1.

It is available from My Oracle Support at What's New In Oracle Utilities Application Framework V4 (Doc Id: 1177265.1).

Note: In earlier versions of Oracle Utilities Application Framework V4 some features have been introduced that have been replaced with newer features in Oracle Utilities Application Framework V4.3.0.0.1. In this case, the entries in the What's New have been altered to remove these replaced features. Refer to the release notes for the version of Oracle Utilities Application Framework for details of this replaced functionality.

SQLcl - Cloud connections via Secure Shell tunnels

Barry McGillin - Sun, 2015-05-03 17:39
We're always trying to make SQLcl easier to connect to your database, whether it's at your place or in the cloud. So, one other thing we have added to enable you to drill into your cloud databases is an SSHTUNNEL command. Let's take a look at the help for it, which you can get as follows.

SQL> help sshtunnel
SSHTUNNEL
---------

Creates a tunnel using standard ssh options
such as port forwarding like option -L of the given port on the local host
will be forwarded to the given remote host and port on the remote side. It also supports
identity files, using the ssh -i option
If passwords are required, they will be prompted for.

SSHTUNNEL <username>@<hostname> -i <identity_file> [-L localPort:Remotehost:RemotePort]

Options

-L localPort:Remotehost:Remoteport

Specifies that the given port (localhost) on the local (client) host is to be forwarded to
the given remote host (Remotehost) and port (Remoteport) on the remote side. This works by
allocating a socket to listen to port on the local side.
Whenever a connection is made to this port, the connection is forwarded over
the secure channel, and a connection is made to remote host & remoteport from
the remote machine.

-i identity_file
Selects a file from which the identity (private key) for public key authentication is read.


SQL>


So, for this to work, we need to decide which local ports we are going to use and which remote machine and port we want to map them to. We also need an RSA key file for the target host. In this example, we have created one with the default name of id_rsa.

The format of the flags follows the standard ssh rules and options, so -i for identity files and -L for port forwarding. Here's an example connecting to a remote host via a tunnel.

(bamcgill@daedalus.local)–(0|ttys000|-bash)–(Mon May 04|12:16:46)
(~/.ssh) $sql /nolog

SQLcl: Release 4.1.0 Release Candidate on Mon May 04 00:16:58 2015

Copyright (c) 1982, 2015, Oracle. All rights reserved.


SQL> sshtunnel bamcgill@gbr30060.uk.oracle.com -i ./id_rsa -L 8888:gbr30060.uk.oracle.com:1521

Password for bamcgill@gbr30060.uk.oracle.com ********
ssh tunnel connected

SQL> connect barry/oracle@localhost:8888/DB11GR24
Connected

SQL> select 'test me' as BLRK from dual;

BLRK
-------
test me


SQL>


You can download SQLcl from OTN here and give this a try when the next EA is released.

Added a page about my LVC schedule

The Oracle Instructor - Sun, 2015-05-03 06:49

I often get asked by customers about my schedule, so they can book a class with me. This page now shows my scheduled Live Virtual Classes. I deliver most of my public classes in that format and you can attend from all over the world :-)


Categories: DBA Blogs

Oracle XE 11g – Getting APEX to start when your database does

The Anti-Kyte - Sun, 2015-05-03 03:53

They say patience is a virtue. It’s one that I often get to exercise, through no fault of my own.
Usually trains are involved. Well, I say involved, what I mean is…er…late.
I know, I do go on about trains. It’s a peculiarly British trait.
This may be because the highest train fares in Europe somehow don’t quite add up to the finest train service.
We can debate the benefits of British Trains later – let’s face it we’ll have plenty of time whilst we’re waiting for one to turn up. For now, I want to concentrate on avoiding any further drain on my badly tried patience by persuading APEX that it should be available as soon as my Oracle XE database is…

Oracle Express Edition – how it starts

There are three main components to Oracle XE :

  1. The Database
  2. The TNS Listener
  3. APEX

When you fire up Express Edition, it will start these components in this order :

  1. The Database
  2. The TNS Listener

APEX doesn’t get a look in at this point. Instead, when you first invoke it, it has to wait for the XDB database component to be initialized.

As I’ve observed previously, starting up the database before the listener can cause a lag if you’re trying to connect via TNS – i.e. from any machine other than the one the database is running on, or by specifying the database in the connect string.

The other problem is, of course, APEX will often refuse to play when you first call it after startup.

Often, your first attempt to get to the Database Home Page will be met with the rather unhelpful :


leaves on the line, or the wrong kind of snow ? Either way, APEX isn’t talking to you

It’s not until the TNS Listener is up and running that you’ll actually be able to connect to APEX.

In fact, it won’t be until you see the XEXDB service has been started by the Listener that you’ll be able to use APEX.
To check this :

lsnrctl status

The output should look something like this :

LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 02-MAY-2015 19:10:19

Copyright (c) 1991, 2011, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC_FOR_XE)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date                02-MAY-2015 18:25:19
Uptime                    0 days 0 hr. 44 min. 59 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Default Service           XE
Listener Parameter File   /u01/app/oracle/product/11.2.0/xe/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/11.2.0/xe/log/diag/tnslsnr/mike-Monza-N2/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC_FOR_XE)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=mike-Monza-N2)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=mike-Monza-N2)(PORT=8080))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "XE" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
Service "XEXDB" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
The command completed successfully

You can see what happens when you first call APEX by looking in the database alert log. If you want to see it in real-time, open a terminal and type :

tail -f /u01/app/oracle/diag/rdbms/xe/XE/trace/alert_XE.log

With the terminal window open and visible, click on the Getting Started Desktop icon (or simply invoke APEX directly from your browser). You should see this in the alert.log…

XDB installed.
XDB initialized.

So, the solution is :

  1. Start the Listener before starting the Database
  2. Get “APEX” to start directly after starting the Database

I’ve put APEX in quotes here because what we actually want to do is initialize the XDB component within the database.

Step 1 – changing the starting order

To do this, we’ll need to edit the standard startdb.sh script, after first making a backup copy, just in case …

sudo su oracle
cd /u01/app/oracle/product/11.2.0/xe/config/scripts
cp startdb.sh startdb.sh.bak
gedit startdb.sh

… The edited script should look something like this :

#!/bin/bash
#
#       svaggu 09/28/05 -  Creation
#	svaggu 11/09/05 -  dba groupd check is added
#

xsetroot -cursor_name watch
case $PATH in
    "") PATH=/bin:/usr/bin:/sbin:/etc
        export PATH ;;
esac

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
export ORACLE_SID=XE
LSNR=$ORACLE_HOME/bin/lsnrctl
SQLPLUS=$ORACLE_HOME/bin/sqlplus
LOG="$ORACLE_HOME_LISTNER/listener.log"
user=`/usr/bin/whoami`
group=`/usr/bin/groups $user | grep -i dba`

if test -z "$group"
then
	if [ -f /usr/bin/zenity ]
	then
		/usr/bin/zenity --error --text="$user must be in the DBA OS group to start the database." 
		exit 1
	elif [ -f /usr/bin/kdialog ]
	then
		/usr/bin/kdialog --error "$user must be in the DBA OS group to start the database."
		exit 1
	elif [ -f /usr/bin/xterm ]
	then
		/usr/bin/xterm -T "Error" -n "Error" -hold -e "echo $user must be in the DBA OS group to start the database."
		exit 1
	fi
else
    # Listener start moved to before database start to avoid lag in db
    # registering with listener after db startup
    # Mike
	if [ ! `ps -ef | grep tns | cut -f1 -d" " | grep -q oracle` ]
	then
		$LSNR start > /dev/null 2>&1
	else
		echo ""
	fi
# now start the database
	$SQLPLUS -s /nolog @$ORACLE_HOME/config/scripts/startdb.sql > /dev/null 2>&1
fi

xsetroot -cursor_name left_ptr

Now, when the database first starts and looks around for the Listener to register with, it’ll find it up and ready to go.

Step 2 – initialise XDB

Exactly how you do this properly has been a bit of a puzzle to me. I’m sure there is a proper way to do this, other than pointing your browser at APEX only for it to tell you to go away.
In lieu of this elusive “proper” XDB startup command, I’m going to use one that tells you what port the PL/SQL Gateway (the default listener for APEX) is listening on…

select dbms_xdb.gethttpport
from dual;

Something interesting happens when you run this command. The first time you execute it after database startup and when you haven’t invoked APEX, it takes quite a long time to return. If you look in the alert log you’ll see the reason for this…

XDB installed.
XDB initialized.

Yes, the same entries you see when you first try to invoke APEX.

So, we’re going to get this query to run as soon as the database is started. The easiest way to do this is to edit the startdb.sql script that’s called by the shell script we’ve just edited…

sudo su oracle
cd /u01/app/oracle/product/11.2.0/xe/config/scripts
cp startdb.sql startdb.sql.bak
gedit startdb.sql

Here, we’re simply adding this query directly the database is open…

connect / as sysdba
startup
-- added to start the PL/SQL Gateway so that APEX should be reachable
-- right after startup
select dbms_xdb.gethttpport from dual;
exit

Now, if we check the alert.log on startup of the database we’ll see something like…

QMNC started with pid=28, OS id=2469
Wed Apr 29 12:16:27 2015
Completed: ALTER DATABASE OPEN
Wed Apr 29 12:16:32 2015
db_recovery_file_dest_size of 10240 MB is 57.54% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Wed Apr 29 12:16:32 2015
Starting background process CJQ0
Wed Apr 29 12:16:32 2015
CJQ0 started with pid=29, OS id=2483
Wed Apr 29 12:16:43 2015
XDB installed.
XDB initialized.

As soon as that last message is there, APEX is up and ready to receive requests.
On the one hand, it’s nice to know for definite when APEX will finally deign to answer your call, as opposed to hiding behind the PAGE NOT FOUND error and pretending to be out.
On the other hand, having to tail the alert.log to figure out when this is seems a bit like hard work.

Of course, in Linux land, you can always just prompt the shell script to announce when it’s finished…

Desktop Notification

I’m running this on a Gnome based desktop ( Cinnamon, if you’re interested, but it should work on anything derived from Gnome). KDE adherents will have their own, equally useful methods.
As in my previous attempt at this sort of thing, I’m going to use notify-send.

If you want to test if it’s installed, you can simply invoke it from the command line :

notify-send "Where's that train ?"

If all is OK, you should get this message displayed in a notification on screen…


Is it a bird ? Is it a train…

Now we simply use this utility to add a message at the end of the database startup script.
We can even add an icon if we’re feeling flash….

#!/bin/bash
#
#       svaggu 09/28/05 -  Creation
#	svaggu 11/09/05 -  dba groupd check is added
#

xsetroot -cursor_name watch
case $PATH in
    "") PATH=/bin:/usr/bin:/sbin:/etc
        export PATH ;;
esac

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
export ORACLE_SID=XE
LSNR=$ORACLE_HOME/bin/lsnrctl
SQLPLUS=$ORACLE_HOME/bin/sqlplus
LOG="$ORACLE_HOME_LISTNER/listener.log"
user=`/usr/bin/whoami`
group=`/usr/bin/groups $user | grep -i dba`

if test -z "$group"
then
	if [ -f /usr/bin/zenity ]
	then
		/usr/bin/zenity --error --text="$user must be in the DBA OS group to start the database." 
		exit 1
	elif [ -f /usr/bin/kdialog ]
	then
		/usr/bin/kdialog --error "$user must be in the DBA OS group to start the database."
		exit 1
	elif [ -f /usr/bin/xterm ]
	then
		/usr/bin/xterm -T "Error" -n "Error" -hold -e "echo $user must be in the DBA OS group to start the database."
		exit 1
	fi
else
    # Listener start moved to before database start to avoid lag in db
    # registering with listener after db startup
    # Mike
	if [ ! `ps -ef | grep tns | cut -f1 -d" " | grep -q oracle` ]
	then
		$LSNR start > /dev/null 2>&1
	else
		echo ""
	fi
    # now start the database
	$SQLPLUS -s /nolog @$ORACLE_HOME/config/scripts/startdb.sql > /dev/null 2>&1
fi
#
# Publish desktop notification that we're ready to go...
#
notify-send -i /usr/share/pixmaps/oraclexe-startdatabase.png "Database and APEX ready to play" 

xsetroot -cursor_name left_ptr

When the script hits the notify line, we’re rewarded with…

[Screenshot: desktop notification showing the database is ready]

So, even if your train has stopped randomly in-between stations or is simply proving once again, that the timetable is a work of fiction, at least you won’t have to wonder if your database is ready for action.


Filed under: APEX, Oracle, Shell Scripting, SQL Tagged: alert.log, dbms_xdb.gethttpport, lsnrctl, notify-send, XDB

APEX You Tube Channel

Denes Kubicek - Sun, 2015-05-03 02:56
Great idea to organize the Oracle APEX YouTube Channel. David Peake has started a Video Series about the new Page Designer in APEX 5.0. You should have a look.

Categories: Development

Notes, links and comments, May 2, 2015

DBMS2 - Sat, 2015-05-02 08:36

I’m going to be out-of-sorts this week, due to a colonoscopy. (Between the prep, the procedure, and the recovery, that’s a multi-day disablement.) In the interim, here’s a collection of links, quick comments and the like.

1. Are you an engineer considering a start-up? This post is for you. It’s based on my long experience in and around such scenarios, and includes a section on “Deadly yet common mistakes”.

2. There seems to be a lot of confusion regarding the business model at my clients Databricks. Indeed, my own understanding of Databricks’ on-premises business has changed recently. There are no changes in my beliefs that:

  • Databricks does not directly license or support on-premises Spark users. Rather …
  • … it helps partner companies to do so, where:
    • Examples of partner companies include usual-suspect Hadoop distribution vendors, and DataStax.
    • “Help” commonly includes higher-level support.

However, I now get the impression that revenue from such relationships is a bigger deal to Databricks than I previously thought.

Databricks, by the way, has grown to >50 people.

3. DJ Patil and Ruslan Belkin apparently had a great session on lessons learned, covering a lot of ground. Many of the points are worth reading, but one in particular echoed something I’m hearing lots of places — “Data is super messy, and data cleanup will always be literally 80% of the work.” Actually, I’d replace the “always” by something like “very often”, and even that mainly for newish warehouses, data marts or datasets. But directionally the comment makes a whole lot of sense.

4. Of course, dirty data is a particular problem when the data is free-text.

5. In 2010 I wrote that the use of textual news information in investment algorithms had become “more common”. It’s become a bigger deal since. For example:

6. Sometimes a post here gets a comment thread so rich it’s worth doubling back to see what other folks added. I think the recent geek-out on indexes is one such case. Good stuff was added by multiple smart people.

7. Finally, I’ve been banging the drum for electronic health records for a long time, arguing that the great difficulties should be solved due to the great benefits of doing so. The Hacker News/New York Times combo offers a good recent discussion of the subject.

Categories: Other

Our new Oracle APEX YouTube Channel is up and running!

Patrick Wolf - Sat, 2015-05-02 07:12
Check out our new Oracle APEX YouTube Channel! Our Product Manager David Peake has started a Video Series about the new Page Designer in Oracle Application Express 5.0. It’s a great start to get familiarized with the new IDE to edit … Continue reading →
Categories: Development

getting started with postgres plus advanced server (3) - setting up a hot standby server

Yann Neuhaus - Sat, 2015-05-02 02:42

So, we have a ppas 94 database up and running and we have a backup server for backing up and restoring the database. Now it is time to additionally protect the database by setting up a hot standby database. This database could even be used to offload reporting functionality from the primary database as the standby database will be open in read only mode. Again, I'll use another system for that so that the system overview looks like this:

server        ip address       purpose
ppas          192.168.56.243   ppas database cluster
ppasbart      192.168.56.245   backup and recovery server
ppasstandby   192.168.56.244   ppas hot standby database


As the standby database will need the ppas binaries, just follow the first post for setting this up again. Once the binaries are installed and the database is up and running, I'll completely destroy it but keep the data directory:

[root@oel7 tmp]# service ppas-9.4 stop
Stopping Postgres Plus Advanced Server 9.4: 
waiting for server to shut down.... done
server stopped
[root@oel7 tmp]# rm -rf /opt/PostgresPlus/9.4AS/data/*
[root@oel7 tmp]# 

Ready to go. It is amazingly easy to set up a hot standby server with postgres. In a nutshell, everything that needs to be done is: create a replication user in the database, do a base backup of the primary database, copy that to the standby server, create a recovery.conf file and start up the standby database. Let's start by creating the user which will be used for the recovery in the primary database:

[root@ppas ~]# su - enterprisedb
-bash-4.2$ . ./pgplus_env.sh 
-bash-4.2$ psql
psql.bin (9.4.1.3)
Type "help" for help.

edb=# create role standby LOGIN REPLICATION UNENCRYPTED PASSWORD 'standby';
CREATE ROLE
edb=# commit;
COMMIT
edb=# 

... and adjust the pg_hba.conf file (the second entry is for the base backup later):

-bash-4.2$ tail -2 data/pg_hba.conf
host    replication     standby         192.168.56.244/24          md5
local   replication     standby                                              md5

... and adjust the wal_level in postgresql.conf

-bash-4.2$ grep wal_level data/postgresql.conf 
wal_level = hot_standby			# minimal, archive, hot_standby, or logical

For the settings in pg_hba.conf and postgresql.conf to take effect either a reload of the main server process or a complete restart is required:

-bash-4.2$ pg_ctl -D data/ restart
waiting for server to shut down..... done
server stopped
server starting

Now it is a good time to test if we can connect to the primary database from the standby node:

[root@oel7 tmp]# /opt/PostgresPlus/9.4AS/bin/psql -h 192.168.56.243 -U standby edb
Password for user standby: 
psql.bin (9.4.1.3)
Type "help" for help.

edb=> 

Ready for the basebackup of the primary database?

mkdir /var/tmp/primary_base_backup/
-bash-4.2$ pg_basebackup -D /var/tmp/primary_base_backup/ -U standby -F t -R -x -z -l for_standby -P
Password: 
56517/56517 kB (100%), 1/1 tablespace
-bash-4.2$ 

Especially notice the "-R" switch of pg_basebackup as this creates a minimal recovery.conf for us which we can use as a template for our standby database. Transfer and extract the file written to the standby server (I again prepared passwordless ssh authentication between the primary and the standby server. check the second post on how to do that).

bash-4.2$ pwd
/opt/PostgresPlus/9.4AS/data
bash-4.2$ scp 192.168.56.243:/var/tmp/primary_base_backup/* .
base.tar.gz                                                                                                  100% 5864KB   5.7MB/s   00:00    
-bash-4.2$ 
-bash-4.2$ tar -axf base.tar.gz 
-bash-4.2$ ls
backup_label  dbms_pipe  pg_dynshmem    pg_log        pg_notify    pg_snapshots  pg_subtrans  PG_VERSION            postgresql.conf
base          global     pg_hba.conf    pg_logical    pg_replslot  pg_stat       pg_tblspc    pg_xlog               recovery.conf
base.tar.gz   pg_clog    pg_ident.conf  pg_multixact  pg_serial    pg_stat_tmp   pg_twophase  postgresql.auto.conf
-bash-4.2$ 

Almost ready. Now we need to adjust the recovery.conf file:

standby_mode = 'on'
primary_conninfo = 'host=192.168.56.243 port=5444 user=standby password=standby'
restore_command = 'scp bart@192.168.56.245:/opt/backup/ppas94/archived_wals/%f %p'

... and enable hot standby mode in the postgresql.conf file on the standby server and adjust the listen address:

-bash-4.2$ grep hot postgresql.conf 
wal_level = hot_standby			# minimal, archive, hot_standby, or logical
hot_standby = on			# "on" allows queries during recovery
#hot_standby_feedback = off		# send info from standby to prevent
-bash-4.2$ grep listen data/postgresql.conf
listen_addresses = '192.168.56.244'		# what IP address(es) to listen on;

Start up the standby database, and if everything is fine, messages similar to these should be reported in the postgresql log file (/opt/PostgresPlus/9.4AS/data/pg_log/):

2015-04-29 14:03:36 CEST LOG:  entering standby mode
scp: /opt/backup/ppas94/archived_wals/000000010000000000000017: No such file or directory
2015-04-29 14:03:36 CEST LOG:  consistent recovery state reached at 0/17000090
2015-04-29 14:03:36 CEST LOG:  redo starts at 0/17000090
2015-04-29 14:03:36 CEST LOG:  record with zero length at 0/170000C8
2015-04-29 14:03:36 CEST LOG:  database system is ready to accept read only connections
2015-04-29 14:03:36 CEST LOG:  started streaming WAL from primary at 0/17000000 on timeline 1

To further prove the setup, let's create a simple table in the primary database and add some rows to it:

edb=# create table standby_test ( a int ); 
CREATE TABLE
edb=# insert into standby_test values (1);
INSERT 0 1
edb=# insert into standby_test values (2);
INSERT 0 1
edb=# commit;
COMMIT
edb=# \! hostname
ppas.local
edb=# 

Let's see if we can query the table on the standby:

-bash-4.2$ psql
psql.bin (9.4.1.3)
Type "help" for help.

edb=# select * from standby_test;
 a 
---
 1
 2
(2 rows)

edb=# \! hostname
ppasstandby.local
edb=# 

Cool. Minimal effort for getting a hot standby database up and running. Make yourself familiar with the various settings that influence the behavior of the standby database. I'll write another post on how to do failovers in the near future.