Feed aggregator

Migrate a SQL Server environment with complex replication without reinitializing or rebuilding replication

Pythian Group - Mon, 2016-04-04 12:14

When you have a SQL Server environment with a very complex replication setup in place, and you need to migrate or move (without upgrading) some or all of the servers involved in the replication topology to new servers/virtual machines or to a new data center/cloud, this blog post is for you!

Let’s assume you also have Transactional and/or Merge publications and subscriptions in place, and you need to move the publisher(s) and/or distributor(s) to a new environment. You also have one or more of the following restrictions:

  • You are not sure whether the schema at the subscribers is identical to the publisher (e.g., different indexes, different columns, etc.).
  • You cannot afford the downtime required to reinitialize the subscriber(s).
  • There are too many subscribers to reinitialize, and you cannot afford the downtime if anything goes wrong.

Here are the general steps for this migration:
Prior to the migration date:

  • The new instance must have the same SQL Server version, edition, and patch level as the old instance. The Windows version and edition can differ, but you must ensure that the Windows version supports the SQL Server version.
  • The directory structure for the SQL Server files, along with its permissions, should be identical on the new server:
    • Same path for the SQL Server binaries
    • Same paths and database file names on both servers for the system databases
    • Same directories where the user database files and T-logs reside
    • Same path for the replication directories (where applicable)
  • Copy over any instance-level objects (logins, linked servers, and jobs) to the new instance; leave the jobs disabled if applicable, or stop SQL Server Agent on the new server
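As a sketch of the version/edition check in the first bullet, the snippet below compares the strings reported by the two instances. The server values here are hardcoded samples for illustration; on a real system you would capture each one with `sqlcmd -S <server> -h -1 -W -Q "SELECT SERVERPROPERTY('ProductVersion'), SERVERPROPERTY('Edition')"`.

```shell
# Compare version/edition strings gathered from the old and new instances.
# Hardcoded sample values; in practice capture them via sqlcmd as noted above.
old_ver="12.0.5000.0 Enterprise Edition (64-bit)"
new_ver="12.0.5000.0 Enterprise Edition (64-bit)"

if [ "$old_ver" = "$new_ver" ]; then
  echo "version/edition match"
else
  echo "MISMATCH: old=[$old_ver] new=[$new_ver]" >&2
  exit 1
fi
```

A mismatch here should stop the migration until the new instance is patched to the same level.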

On migration date:

  • Disable any jobs, backups, and maintenance tasks scheduled to run during the migration window on the old server
  • Stop all database activity on the old instance, or disable logins
  • Restart the old instance and verify that there is no activity
  • Synchronize all replication agents related to the server being migrated
  • Stop and disable the replication agents related to the server being migrated
  • Stop both instances
  • Copy all system database files from the old server to the new one
  • Copy all user database files from the old server to the new one
    • Alternatively, back up all user databases on the old server before stopping the service, and copy the backup files to the new server
  • Shut down the old server
  • Rename the new server to the old server's name and change the new server's IP to the old server's IP
  • Start the new server
  • Verify that the instance name on the new server matches the old server's name and that it is registered as local
  • If you backed up the user databases instead, restore them to the same locations and file names as on the old server, WITH RECOVERY and KEEP_REPLICATION
  • Verify that all user databases are online and that the publications and subscribers are there
  • Start all replication agents related to the migrated server and verify that replication is working properly
  • Verify that applications can connect to the new instance (no need to modify the instance name, as it is the same as before, along with the same IP)
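The restore step above can be sketched as a small generator script. It only prints the RESTORE statement (a dry run) so you can review it before piping it to sqlcmd; the database name and file paths below are hypothetical placeholders, not from the original post.

```shell
# Emit a RESTORE statement that preserves replication settings (dry run).
# DB, the backup path, and the MOVE targets are placeholders for illustration.
DB=SalesDB

SQL_TEXT=$(cat <<SQL
RESTORE DATABASE [$DB]
  FROM DISK = N'D:\Backup\\$DB.bak'
  WITH RECOVERY, KEEP_REPLICATION,
       MOVE N'${DB}_Data' TO N'D:\Data\\$DB.mdf',
       MOVE N'${DB}_Log'  TO N'L:\Log\\${DB}_log.ldf';
SQL
)
echo "$SQL_TEXT"
```

KEEP_REPLICATION is the key option: it prevents the restore from removing the replication settings stored in the database.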

In any case, it is strongly recommended that you test the migration prior to the real cutover, even if the test environment is not identical to production, just to get a feel for the process. Ensure that your test phase covers most of the replication scenarios you have in production.

The more scripts you have prepared for the cutover date, the less downtime you are likely to incur.
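For example, one script worth having ready generates the T-SQL that disables every enabled SQL Agent job on the old server. This is a sketch: it only prints the generator query (msdb.dbo.sysjobs and sp_update_job are the documented objects), and you would run it via sqlcmd, then review and execute its output.

```shell
# Emit T-SQL that, when run, produces one sp_update_job call per enabled
# SQL Agent job (dry run; run the query via sqlcmd and execute its output).
GEN_SQL=$(cat <<'SQL'
SELECT 'EXEC msdb.dbo.sp_update_job @job_name = N''' + name + ''', @enabled = 0;'
FROM msdb.dbo.sysjobs
WHERE enabled = 1;
SQL
)
echo "$GEN_SQL"
```

Keeping the generated output also gives you the list of jobs to re-enable on the new server after the cutover.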

It is extremely important to also have a good and tested rollback plan.

In future blog posts I will discuss migrating more complex replication scenarios, along with rollback plans.

If you would like to make suggestions for future blogs, please feel free to add a comment and I will try to include your request in future posts.

Categories: DBA Blogs

WebCenter Sites and Sites Cloud Service Accessibility

WebCenter Team - Mon, 2016-04-04 11:53
Author: Mitchell Palski – Oracle Fusion Middleware Specialist

One of the major requirements for public-facing websites is usually to conform to standard Web Accessibility Guidelines. Oracle demonstrates the adherence of our products by issuing Voluntary Product Accessibility Templates (VPATs). Oracle uses the VPAT to represent the degree of conformance to various accessibility standards and guidelines, including Section 508 (as released in 2001), Web Content Accessibility Guidelines (WCAG) 1.0, and WCAG 2.0.

Oracle’s Web Content Management (WCM) offering – WebCenter Sites – is a leader in the category (according to Gartner’s Magic Quadrant for WCM). WebCenter Sites provides the tools for business users to build intuitive digital experiences for end-users that adapt to their behavior and dynamically deliver content based on that behavior. Each website is enabled with social tools, intelligent user-segmentation, native integration with document management and customer relationship management tools, and an out-of-the-box mobile-friendly interface. WebCenter Sites can be deployed on-premise or in the Cloud as Sites Cloud Service (SCS) – an option of Document Cloud Service.

When it comes to accessibility, what developers and integrators really want to know is – “Can I check off the box that says WebCenter Sites conforms to Section 508 and WCAG?”

Make the judgment for yourself.

What it boils down to is this:

  • WebCenter Sites and Sites Cloud Service conform to the Section 508 standards and the WCAG standards at the “AA” level by providing the toolsets to:
    • Facilitate the development of accessible web content and web sites
    • Enable the use of supporting technologies that certain people with disabilities use to access the web
    • Prevent instances in which users with disabilities are unable to access content (or equivalent content)
  • “Admin UI” and “Builder UI” (for Sites on-premise and Sites Cloud Service, respectively) do not adhere to the same level of accessibility standards
    • This affects business users who are responsible for developing new assets (building pages, templates, or workflows; defining profiles, user segments, etc.)
    • Certain workarounds can be provided through product customization, preferably by an Oracle-certified partner

Were you worried about whether WebCenter Sites or Sites Cloud Service can serve your end-users with disabilities? Well, you don’t have to worry anymore!

Four Smart Reasons To Use My Oracle Support Communities

Joshua Solomin - Mon, 2016-04-04 11:11

Increase productivity with access to online support communities, specific to your Oracle environment, backed by Oracle support experts. Collaborate with an extensive global peer group who share real-world best practices, tips, and tricks.

At the heart of My Oracle Support Community (MOSC) are hundreds of engagement spaces that cover a wide range of technologies, applications, and business processes. But MOSC is much more than just a discussion forum.

The Reasons
Targeted Product Areas: Community spaces are specific to Oracle products, increasing ease of access to relevant information.
  • Follow only the spaces that you work with to get the latest information. Get early access to product news and support features.
  • Post discussions with your product questions for Oracle experts to engage.
  • Submit and vote on Ideas to help steer product direction.
  • Use communities for quick answers in lieu of a Service Request.
Network and Collaborate: Enjoy direct collaboration with Oracle experts and customer peers.
  • My Oracle Support Community members are experts and peers from the Oracle customer base, with real-world experience; and Oracle’s own Support engineers moderate and contribute to the communities.
  • The result: unparalleled access to a combination of deep and practical expertise.
Extensive, Targeted Content: Filter out the noise and access content for your areas of interest.
  • You can "Follow" specific content, topics, or people and get notified in the community or by email of new interactions in the areas you choose to follow.
  • Knowledge is also shared using current features like Spotlights, Ideas, Instant Polls, Twitter Feeds and Support Blogs.
Always Evolving: Available 24 x 7, MOS Communities grows in value the more it gets used.
  • Community areas grow in usefulness and relevance as content is added and shared.
  • High-value content becomes more accessible and easier to find.
  • Share your own expertise and enhance your own reputation as an expert, as well as network with your peers.

Why You Should Consider Moving Your Enterprise Application to the Oracle Cloud

Pythian Group - Mon, 2016-04-04 10:32


If you’ve decided to migrate your Oracle enterprise applications to the public cloud, it’s a good idea to consider Oracle Cloud alongside alternatives such as Amazon Web Services (AWS) and Microsoft Azure.

Oracle has made big strides in the cloud lately with platform-as-a-service (PaaS) offerings for its middleware and database software, culminating in the release of its first infrastructure-as-a-service (IaaS) offering in late 2015.

Oracle has a clear advantage over the competition when it comes to running its own applications in the cloud: it has full control over product licensing and can optimize its cloud platform for lift-and-shift migrations. This gives you a low-risk strategy for modernizing your IT portfolio.


What to expect from Oracle Cloud IaaS

Because Oracle’s IaaS offering is quite new, it has yet to match the flexibility and feature set of Azure and AWS. For example, enterprise VPN connectivity between cloud and on-premises infrastructure is still very much a work in progress. Unlike AWS, however, Oracle provides a free software appliance for accessing cloud storage on-premises. In addition to offering an hourly metered service, Oracle also provides unmetered compute capacity with a monthly subscription. Some customers prefer this option because it allows them to more easily control their spending through a predictable monthly fee rather than a pure pay-as-you-go model.
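As a rough illustration of that metered-versus-unmetered trade-off, the breakeven point is simply the monthly rate divided by the hourly rate. Both rates below are invented for the example, not Oracle's actual pricing.

```shell
# Breakeven between metered ($/OCPU-hour) and unmetered ($/OCPU-month) compute.
# Both rates are hypothetical placeholders; substitute current list prices.
HOURLY=0.30    # assumed metered rate, $ per OCPU-hour
MONTHLY=180    # assumed unmetered rate, $ per OCPU-month

awk -v h="$HOURLY" -v m="$MONTHLY" \
  'BEGIN { printf "unmetered is cheaper beyond %.0f runtime hours per month\n", m/h }'
```

In other words, an always-on workload favors the unmetered subscription, while intermittent dev/test instances favor metered billing.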

At the same time, Oracle Cloud IaaS has a limited selection of instance shapes, there is no SSD storage yet or guaranteed input/output performance levels, and transferring data is more challenging for large-volume migrations.


What to expect from Oracle Cloud PaaS

Oracle’s PaaS offerings are quickly becoming among the most comprehensive cloud-based services for Oracle Database. They include:


Oracle Database Schema Service

This is the entry-level unmetered offering, available starting at $175 a month for a 5GB database schema limit. Tenants share databases but are isolated in their own schemas. This means you have no control over database parameters, only over the schema objects created. This service is currently available only with Oracle Database 11g Release 2 (i.e., it is not yet offered on the latest release, Oracle Database 12c).


Oracle Exadata Cloud Service

This is a hosted service with monthly subscriptions starting at $70,000 for a quarter rack with 28 OCPUs enabled and 42TB of usable storage provisioned. You have full root OS access and SYSDBA database access, so you have total flexibility in managing your environment. However, this means Oracle manages only the bare minimum—the external networking and physical hardware—so you may end up expending the same effort as you would managing Exadata on-premises.


Oracle Database Virtual Image Service

This is a Linux VM with pre-installed Oracle Database software. The license is included in the rate. It’s available metered (priced per OCPU per hour of runtime) and unmetered (priced per OCPU allocated per month). As you’ll need to manage everything up from the VM level, including OS management and full DBA responsibilities, the metered service is a particularly good option for running production environments that require full control over the database deployment.


Oracle Database-as-a-Service (DBaaS)

This is an extension of Virtual Image Service and includes additional automation for database provisioning during service creation, backup, recovery, and patching. While you are still responsible for the complete management of the environment, the embedded automation and tooling can simplify some DBA tasks.

I should point out that, with the exception of Oracle Database Schema Service, these are not “true” PaaS offerings; they function more like IaaS-style services, but with database software licenses included. A true PaaS is on the way, though, as Oracle recently announced plans for a fully managed DBaaS offering similar to the one available through AWS.


While Oracle’s cloud options are still quite new and require additional features for broad enterprise adoption, if this option sparks your interest, now is the time to take the first steps. If you want to learn more about the migration path to Oracle Cloud, check out our white paper, Migrating Oracle Databases to Cloud.


Categories: DBA Blogs

FBDA -- 3 : Support for TRUNCATEs

Hemant K Chitale - Mon, 2016-04-04 10:13
One of the points in the previous post was how ADD and DROP Column commands are supported in FBDA.

Let's look at support for TRUNCATEs.

(Just to prove that I am not using the Undo tablespace for the AS OF query, I drop the Undo tablespace):

SQL> startup
ORACLE instance started.

Total System Global Area 402653184 bytes
Fixed Size 2924928 bytes
Variable Size 260050560 bytes
Database Buffers 134217728 bytes
Redo Buffers 5459968 bytes
Database mounted.
Database opened.
SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
SQL> show parameter spfile

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string +DATA/NONCDB/PARAMETERFILE/spf
ile.267.896483727
SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
SQL> create undo tablespace undotbs2;

Tablespace created.

SQL> alter system set undo_tablespace='UNDOTBS2';

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 402653184 bytes
Fixed Size 2924928 bytes
Variable Size 260050560 bytes
Database Buffers 134217728 bytes
Redo Buffers 5459968 bytes
Database mounted.
Database opened.
SQL> drop tablespace undotbs1 including contents and datafiles;

Tablespace dropped.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 402653184 bytes
Fixed Size 2924928 bytes
Variable Size 260050560 bytes
Database Buffers 134217728 bytes
Redo Buffers 5459968 bytes
Database mounted.
Database opened.
SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS2
SQL>


I now TRUNCATE my test table and then run an AS OF query against it.

SQL> connect hemant/hemant
Connected.
SQL> select count(*) from test_fbda;

COUNT(*)
----------
1000

SQL> truncate table test_fbda;

Table truncated.

SQL> select count(*) from test_fbda;

COUNT(*)
----------
0

SQL> select count(*) from test_fbda as of timestamp (sysdate-1);

COUNT(*)
----------
1000

SQL>


The AS OF query was satisfied by the FBDA History table (and not by Undo).

However, because of yesterday's test of ADD and DROP columns (see yesterday's post), the table structure doesn't match AS OF yesterday.  So, I work around the difference by naming the columns.

SQL> insert into test_fbda select * from test_fbda as of timestamp (sysdate-1);
insert into test_fbda select * from test_fbda as of timestamp (sysdate-1)
*
ERROR at line 1:
ORA-00913: too many values


SQL>
SQL> desc test_fbda
Name Null? Type
----------------------------------------- -------- ----------------------------
ID_COLUMN NUMBER
DATA_COLUMN VARCHAR2(15)
DATE_INSERTED DATE

SQL> insert into test_fbda
2 select id_column, data_column, date_inserted
3 from test_fbda as of timestamp (sysdate-1);

1000 rows created.

SQL> commit;

Commit complete.

SQL>


Thus, when my column list matches the current table structure, I can reinsert the data from the previous version of the table.

Note how I did NOT use the SYS_FBA_HIST_93250 table as I'd shown in my previous posts.  The History table is not meant to be accessed directly; I demonstrated queries against it only to show the underlying architecture of FBDA.
.
.
.



Categories: DBA Blogs

Oracle Apps DBA :Installing EBS 12.2 ? Check this post

Online Apps DBA - Mon, 2016-04-04 08:33

This post is related to the installation of Oracle E-Business Suite R12 (12.2 specifically), where you use the StartCD to start the installation of Oracle E-Business Suite from the root user. If you are looking for steps to install Oracle E-Business Suite R12, then look at Atul’s post on the R12.2 installation here. Every now and then Oracle […]

The post Oracle Apps DBA :Installing EBS 12.2 ? Check this post appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Rittman Mead at Collaborate 16: Data Integration Focus

Rittman Mead Consulting - Mon, 2016-04-04 04:59

It’s that time of year again when Oracle technologists from around the world gather in Las Vegas, Nevada, to teach, learn, and, of course, network with their peers. The Collaborate 16 conference, running for 10 years now, has been a collaboration, if you will, between the Independent Oracle Users Group (IOUG), Oracle Applications Users Group (OAUG), and Quest International Users Group (Quest), making it one of the largest user group conferences in the world. Rittman Mead will once again be in attendance, with two data integration focused presentations by me over the course of the week.

My first session at Collaborate 16, “A Walk Through the Kimball ETL Subsystems with Oracle Data Integration,” scheduled for Monday, April 11, at 10:30 a.m., will focus on how we can implement the ETL Subsystems using Oracle Data Integration solutions. As you know, Big Data integration has been the hot topic over the past few years, and it’s an excellent feature in the Oracle Data Integration product suite (Oracle Data Integrator, GoldenGate, & Enterprise Data Quality). But not all analytics require big data technologies, such as labor cost, revenue, or expense reporting. Ralph Kimball, dimensional modeling and data warehousing expert and founder of The Kimball Group, spent much of his career working to build an enterprise data warehouse methodology that can meet these reporting needs. His book, “The Data Warehouse ETL Toolkit,” is a guide for many ETL developers. This session will walk you through his ETL Subsystem categories: Extracting, Cleaning & Conforming, Delivering, and Managing, describing how the Oracle Data Integration products are perfectly suited for the Kimball approach.

I go into further detail on one of the ETL Subsystems in an upcoming IOUG Select Journal article, titled “Implement an Error Event Schema with Oracle Data Integrator.” The Select Journal is a technical magazine published quarterly and available exclusively to IOUG members. My recent post Data Integration Tips: ODI 12c Repository Query – Find the Mapping Target Table shows a bit of the detail behind the research performed for the article.


If you’re not familiar with the Kimball approach to data warehousing, I definitely would recommend reading one (or more) of their published books on the subject. I would also recommend attending one of their training courses, but unfortunately for the data warehousing community, the Kimball Group has closed shop as of December 2015. But hey, the good news is that two of the former Kimball team members have joined forces at Decision Works, and they offer the exact same training they used to deliver under The Kimball Group name.


On Thursday, April 14, at 11 a.m., I will dive into the recently released Oracle GoldenGate for Big Data 12.2 in a session titled “Oracle GoldenGate and Apache Kafka: A Deep Dive into Real-Time Data Streaming.” The challenge for us as data integration professionals is to combine relational data with other non-structured, high volume and rapidly changing datasets, known in the industry as Big Data, and transform it into something useful. Not just that, but we must also do it in near real-time and using a big data target system such as Hadoop. The topic of this session, real-time data streaming, provides us a great solution for that challenging task. By combining GoldenGate, Oracle’s premier data replication technology, and Apache Kafka, the latest open-source streaming and messaging system for big data, we can implement a fast, durable, and scalable solution.

If you plan to be at Collaborate 16 next week, feel free to drop me a line in the comments, via email at michael.rainey@rittmanmead.com, or on Twitter @mRainey. I’d love to meet up and have a discussion around my presentation topics, data integration, or really anything we’re doing at Rittman Mead. Hope to see you all there!

The post Rittman Mead at Collaborate 16: Data Integration Focus appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Clone GoldenGate 12.2

Michael Dinh - Mon, 2016-04-04 04:16

This is what insomnia does to you.

Since GoldenGate installation now uses runInstaller, I would expect the same functionality as for the database: namely, the ability to clone the GoldenGate home.

oracle@arrow:pooh:/home/oracle
$ cd /u01/app/12.2.0.1/ggs01/

oracle@arrow:pooh:/u01/app/12.2.0.1/ggs01
$ ls
bcpfmt.tpl            ddl_ora10.sql             defgen                           extract        libggnnzitp.so      marker_status.sql       role_setup.sql
bcrypt.txt            ddl_ora10upCommon.sql     deinstall                        freeBSD.txt    libggparam.so       mgr                     sequence.sql
cachefiledump         ddl_ora11.sql             demo_more_ora_create.sql         ggcmd          libggperf.so        notices.txt             server
cfgtoollogs           ddl_ora9.sql              demo_more_ora_insert.sql         ggMessage.dat  libggrepo.so        oggerr                  sqlldr.tpl
checkprm              ddl_pin.sql               demo_ora_create.sql              ggparam.dat    libicudata.so.48    OPatch                  srvm
chkpt_ora_create.sql  ddl_remove.sql            demo_ora_insert.sql              ggsci          libicudata.so.48.1  oraInst.loc             tcperrs
convchk               ddl_session1.sql          demo_ora_lob_create.sql          help.txt       libicui18n.so.48    oui                     ucharset.h
convprm               ddl_session.sql           demo_ora_misc.sql                install        libicui18n.so.48.1  params.sql              ulg.sql
db2cntl.tpl           ddl_setup.sql             demo_ora_pk_befores_create.sql   inventory      libicuuc.so.48      prvtclkm.plb            UserExitExamples
ddl_cleartrace.sql    ddl_status.sql            demo_ora_pk_befores_insert.sql   jdk            libicuuc.so.48.1    prvtlmpg.plb            usrdecs.h
ddl_create.sql        ddl_staymetadata_off.sql  demo_ora_pk_befores_updates.sql  keygen         libxerces-c.so.28   prvtlmpg_uninstall.sql  zlib.txt
ddl_ddl2file.sql      ddl_staymetadata_on.sql   diagnostics                      label.sql      libxml2.txt         remove_seq.sql
ddl_disable.sql       ddl_tracelevel.sql        dirout                           libantlr3c.so  logdump             replicat
ddl_enable.sql        ddl_trace_off.sql         dirwww                           libdb-6.1.so   marker_remove.sql   retrace
ddl_filter.sql        ddl_trace_on.sql          emsclnt                          libgglog.so    marker_setup.sql    reverse

oracle@arrow:pooh:/u01/app/12.2.0.1/ggs01
$ nohup tar -cvpf /tmp/ggs12c_${USER}_`hostname -s`_clone.tar . > /tmp/clone_${USER}_`hostname -s`_ggs12c.log 2>&1 &
[1] 10859
oracle@arrow:pooh:/u01/app/12.2.0.1/ggs01
$
[1]+  Done                    nohup tar -cvpf /tmp/ggs12c_${USER}_`hostname -s`_clone.tar . > /tmp/clone_${USER}_`hostname -s`_ggs12c.log 2>&1

oracle@arrow:pooh:/u01/app/12.2.0.1/ggs01
$ cd ..
oracle@arrow:pooh:/u01/app/12.2.0.1
$ mkdir ggs02
oracle@arrow:pooh:/u01/app/12.2.0.1
$ cd ggs02/
oracle@arrow:pooh:/u01/app/12.2.0.1/ggs02
$ ls

oracle@arrow:pooh:/u01/app/12.2.0.1/ggs02
$ nohup tar -xvf /tmp/ggs12c_oracle_arrow_clone.tar > /tmp/untar_${USER}_`hostname -s`_ggs12c.log 2>&1 &
[1] 10897
oracle@arrow:pooh:/u01/app/12.2.0.1/ggs02
$
[1]+  Done                    nohup tar -xvf /tmp/ggs12c_oracle_arrow_clone.tar > /tmp/untar_${USER}_`hostname -s`_ggs12c.log 2>&1

oracle@arrow:pooh:/u01/app/12.2.0.1/ggs02
$ ll /tmp/*ggs*
-rw-r--r--. 1 oracle oinstall     84131 Apr  4 01:36 /tmp/clone_oracle_arrow_ggs12c.log
-rw-r--r--. 1 oracle oinstall 835788800 Apr  4 01:36 /tmp/ggs12c_oracle_arrow_clone.tar
-rw-r--r--. 1 oracle oinstall     84131 Apr  4 01:45 /tmp/untar_oracle_arrow_ggs12c.log

oracle@arrow:pooh:/u01/app/12.2.0.1/ggs02
$ tail /tmp/clone_oracle_arrow_ggs12c.log
./jdk/bin/javadoc
./jdk/bin/jarsigner
./jdk/bin/pack200
./jdk/bin/rmid
./jdk/bin/jrunscript
./jdk/bin/extcheck
./jdk/bin/keytool
./jdk/LICENSE
./jdk/.manifest
./ddl_setup.sql

oracle@arrow:pooh:/u01/app/12.2.0.1/ggs02
$ tail /tmp/untar_oracle_arrow_ggs12c.log
./jdk/bin/javadoc
./jdk/bin/jarsigner
./jdk/bin/pack200
./jdk/bin/rmid
./jdk/bin/jrunscript
./jdk/bin/extcheck
./jdk/bin/keytool
./jdk/LICENSE
./jdk/.manifest
./ddl_setup.sql

oracle@arrow:pooh:/u01/app/12.2.0.1/ggs02
$ ls
bcpfmt.tpl            ddl_ora10.sql             defgen                           extract        libggnnzitp.so      marker_status.sql       role_setup.sql
bcrypt.txt            ddl_ora10upCommon.sql     deinstall                        freeBSD.txt    libggparam.so       mgr                     sequence.sql
cachefiledump         ddl_ora11.sql             demo_more_ora_create.sql         ggcmd          libggperf.so        notices.txt             server
cfgtoollogs           ddl_ora9.sql              demo_more_ora_insert.sql         ggMessage.dat  libggrepo.so        oggerr                  sqlldr.tpl
checkprm              ddl_pin.sql               demo_ora_create.sql              ggparam.dat    libicudata.so.48    OPatch                  srvm
chkpt_ora_create.sql  ddl_remove.sql            demo_ora_insert.sql              ggsci          libicudata.so.48.1  oraInst.loc             tcperrs
convchk               ddl_session1.sql          demo_ora_lob_create.sql          help.txt       libicui18n.so.48    oui                     ucharset.h
convprm               ddl_session.sql           demo_ora_misc.sql                install        libicui18n.so.48.1  params.sql              ulg.sql
db2cntl.tpl           ddl_setup.sql             demo_ora_pk_befores_create.sql   inventory      libicuuc.so.48      prvtclkm.plb            UserExitExamples
ddl_cleartrace.sql    ddl_status.sql            demo_ora_pk_befores_insert.sql   jdk            libicuuc.so.48.1    prvtlmpg.plb            usrdecs.h
ddl_create.sql        ddl_staymetadata_off.sql  demo_ora_pk_befores_updates.sql  keygen         libxerces-c.so.28   prvtlmpg_uninstall.sql  zlib.txt
ddl_ddl2file.sql      ddl_staymetadata_on.sql   diagnostics                      label.sql      libxml2.txt         remove_seq.sql
ddl_disable.sql       ddl_tracelevel.sql        dirout                           libantlr3c.so  logdump             replicat
ddl_enable.sql        ddl_trace_off.sql         dirwww                           libdb-6.1.so   marker_remove.sql   retrace
ddl_filter.sql        ddl_trace_on.sql          emsclnt                          libgglog.so    marker_setup.sql    reverse

oracle@arrow:pooh:/u01/app/12.2.0.1/ggs02
$ cd oui/bin/
oracle@arrow:pooh:/u01/app/12.2.0.1/ggs02/oui/bin
$ ls run*
runConfig.sh  runInstaller  runInstaller.sh  runSSHSetup.sh

oracle@arrow:pooh:/u01/app/12.2.0.1/ggs02/oui/bin
$ ./runInstaller -clone -silent -noconfig -defaultHomeName ORACLE_HOME="/u01/app/12.2.0.1/ggs02"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4072 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-04-04_01-50-47AM. Please wait ...oracle@arrow:pooh:/u01/app/12.2.0.1/ggs02/oui/bin
$ Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.

You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2016-04-04_01-50-47AM.log
.................................................................................................... 100% Done.



Installation in progress (Monday, April 4, 2016 1:50:56 AM PDT)
..........................................................................                                                      72% Done.
Install successful

Linking in progress (Monday, April 4, 2016 1:50:59 AM PDT)
Link successful

Setup in progress (Monday, April 4, 2016 1:50:59 AM PDT)
...........                                                     100% Done.
Setup successful

End of install phases.(Monday, April 4, 2016 1:51:20 AM PDT)
The cloning of OraHome3 was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2016-04-04_01-50-47AM.log' for more details.

oracle@arrow:pooh:/u01/app/12.2.0.1/ggs02/oui/bin
$ grep -e '[[:upper:]]: ' /u01/app/oraInventory/logs/cloneActions2016-04-04_01-50-47AM.log|cut -d ":" -f1|sort -u
INFO

oracle@arrow:pooh:/u01/app/12.2.0.1/ggs02/oui/bin
$ cd ../../

oracle@arrow:pooh:/u01/app/12.2.0.1/ggs02
$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Dec 12 2015 00:54:38
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.


GGSCI (arrow.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED

GGSCI (arrow.localdomain) 2> show

Parameter settings:

SET SUBDIRS    ON
SET DEBUG      OFF

Current directory: /u01/app/12.2.0.1/ggs02

Using subdirectories for all process files

Editor:  vi

Reports (.rpt)                 /u01/app/12.2.0.1/ggs02/dirrpt -- does not yet exist
Parameters (.prm)              /u01/app/12.2.0.1/ggs02/dirprm -- does not yet exist
Replicat Checkpoints (.cpr)    /u01/app/12.2.0.1/ggs02/dirchk -- does not yet exist
Extract Checkpoints (.cpe)     /u01/app/12.2.0.1/ggs02/dirchk -- does not yet exist
Process Status (.pcs)          /u01/app/12.2.0.1/ggs02/dirpcs -- does not yet exist
SQL Scripts (.sql)             /u01/app/12.2.0.1/ggs02/dirsql -- does not yet exist
Database Definitions (.def)    /u01/app/12.2.0.1/ggs02/dirdef -- does not yet exist
Dump files (.dmp)              /u01/app/12.2.0.1/ggs02/dirdmp -- does not yet exist
Masterkey wallet files (.wlt)  /u01/app/12.2.0.1/ggs02/dirwlt -- does not yet exist
Credential store files (.crd)  /u01/app/12.2.0.1/ggs02/dircrd -- does not yet exist


GGSCI (arrow.localdomain) 3> create subdirs

Creating subdirectories under current directory /u01/app/12.2.0.1/ggs02

Parameter files                /u01/app/12.2.0.1/ggs02/dirprm: created
Report files                   /u01/app/12.2.0.1/ggs02/dirrpt: created
Checkpoint files               /u01/app/12.2.0.1/ggs02/dirchk: created
Process status files           /u01/app/12.2.0.1/ggs02/dirpcs: created
SQL script files               /u01/app/12.2.0.1/ggs02/dirsql: created
Database definitions files     /u01/app/12.2.0.1/ggs02/dirdef: created
Extract data files             /u01/app/12.2.0.1/ggs02/dirdat: created
Temporary files                /u01/app/12.2.0.1/ggs02/dirtmp: created
Credential store files         /u01/app/12.2.0.1/ggs02/dircrd: created
Masterkey wallet files         /u01/app/12.2.0.1/ggs02/dirwlt: created
Dump files                     /u01/app/12.2.0.1/ggs02/dirdmp: created


GGSCI (arrow.localdomain) 4> exit
oracle@arrow:pooh:/u01/app/12.2.0.1/ggs02
$

There is no clone.pl for GoldenGate.
oracle@arrow:pooh:/u01/app/12.2.0.1/ggs02
$ locate clone.pl
/u01/app/oracle/product/11.2.0/dbhome_1/clone/bin/clone.pl
/u01/app/oracle/product/11.2.0/dbhome_1/clone/bin/prepare_clone.pl
/u01/app/oracle/product/11.2.0/dbhome_1/sysman/admin/scripts/db/dbclone/db_clone.pl
/u01/app/oracle/product/12.2.0/dbhome_2/clone/bin/clone.pl
/u01/app/oracle/product/12.2.0/dbhome_2/clone/bin/clone.pl.sbs
/u01/app/oracle/product/12.2.0/dbhome_2/clone/bin/clone.pl.sbs.ouibak
/u01/app/oracle/product/12.2.0/dbhome_2/clone/bin/prepare_clone.pl
/u01/app/oracle/product/12.2.0/dbhome_2/inventory/Templates/clone/bin/clone.pl.sbs
oracle@arrow:pooh:/u01/app/12.2.0.1/ggs02
$

oracle@arrow:pooh:/u01/app/12.2.0.1/ggs02
$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2011, Oracle. All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
<SAVED_WITH>11.2.0.3.0</SAVED_WITH>
<MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="1"/>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/product/12.2.0/dbhome_1" TYPE="O" IDX="2"/>
<HOME NAME="OraHome1" LOC="/u01/app/oracle/product/12.2.0/dbhome_2" TYPE="O" IDX="3"/>
<HOME NAME="OraHome2" LOC="/u01/app/12.2.0.1/ggs01" TYPE="O" IDX="4"/>
<HOME NAME="OraHome3" LOC="/u01/app/12.2.0.1/ggs02" TYPE="O" IDX="5"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
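Since each GoldenGate home is registered in the central inventory by OUI, a quick way to list every registered home is to pull the LOC attributes out of inventory.xml. A minimal sketch against a fabricated sample file (the sample content below just mirrors the format shown above):

```shell
# Build a small sample inventory (illustrative content mirroring the format above)
cat > /tmp/inventory.xml <<'EOF'
<HOME_LIST>
<HOME NAME="OraHome2" LOC="/u01/app/12.2.0.1/ggs01" TYPE="O" IDX="4"/>
<HOME NAME="OraHome3" LOC="/u01/app/12.2.0.1/ggs02" TYPE="O" IDX="5"/>
</HOME_LIST>
EOF

# Extract just the home paths from the LOC="..." attributes
grep -o 'LOC="[^"]*"' /tmp/inventory.xml | cut -d'"' -f2
```

Against the real file, replace /tmp/inventory.xml with /u01/app/oraInventory/ContentsXML/inventory.xml.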


Machine Learning for Oracle Developers

Gerger Consulting - Mon, 2016-04-04 03:51
Oracle doesn't offer any solutions for machine learning. It is time for Oracle developers to look elsewhere.

If you don't know where to start, attend our webinar and find out how you can apply Numenta's open source machine learning technology to real world enterprise IT problems.



Categories: Development

Oracle Cloud Machine - Move the Cloud to your Data Center

While public cloud computing would make a significant difference in your business, handing governance and control to someone else isn't always an option. The cloud is generally perceived...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Silent Install GoldenGate 12.2

Michael Dinh - Mon, 2016-04-04 03:31

There are probably many blogs about installation; however, I tend to do things a little differently.

oracle@arrow:pooh:/media/sf_working/fbo_ggs_Linux_x64_shiphome/Disk1
$ cat /media/sf_working/oggcore_11gdb.rsp
oracle.install.responseFileVersion=/oracle/install/rspfmt_ogginstall_response_schema_v12_1_2
INSTALL_OPTION=ORA11g
SOFTWARE_LOCATION=/u01/app/12.2.0.1/ggs01
START_MANAGER=
MANAGER_PORT=
DATABASE_LOCATION=/u01/app/oracle/product/11.2.0/dbhome_1
INVENTORY_LOCATION=/u01/app/oraInventory
UNIX_GROUP_NAME=oinstall
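For the second home in this post, presumably only the install option, software location and database home would change; a hypothetical 12c variant of the same response file (the ORA12c value is assumed from the installer's ORA11g/ORA12c install options):

```
oracle.install.responseFileVersion=/oracle/install/rspfmt_ogginstall_response_schema_v12_1_2
INSTALL_OPTION=ORA12c
SOFTWARE_LOCATION=/u01/app/12.2.0.1/ggs02
START_MANAGER=
MANAGER_PORT=
DATABASE_LOCATION=/u01/app/oracle/product/12.2.0/dbhome_2
INVENTORY_LOCATION=/u01/app/oraInventory
UNIX_GROUP_NAME=oinstall
```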

I like to see installation progress.

oracle@arrow:pooh:/media/sf_working/fbo_ggs_Linux_x64_shiphome/Disk1
$ ./runInstaller -silent -showProgress -waitforcompletion -responseFile /media/sf_working/oggcore_11gdb.rsp
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 28152 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4073 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-04-04_01-04-39AM. Please wait ...You can find the log of this install session at:
 /u01/app/oraInventory/logs/installActions2016-04-04_01-04-39AM.log

Prepare in progress.
..................................................   10% Done.

Prepare successful.

Copy files in progress.
..................................................   61% Done.
..................................................   69% Done.
..................................................   77% Done.
..................................................   82% Done.
..................................................   87% Done.
..................................................   94% Done.
..................................................   95% Done.

Copy files successful.

Link binaries in progress.
..................................................   95% Done.

Link binaries successful.

Setup files in progress.
..................................................   95% Done.
..................................................   96% Done.
..................................................   97% Done.
..................................................   98% Done.
..................................................   99% Done.
..................................................   100% Done.

Setup files successful.
The installation of Oracle GoldenGate Core was successful.
Please check '/u01/app/oraInventory/logs/silentInstall2016-04-04_01-04-39AM.log' for more details.
Successfully Setup Software.


oracle@arrow:pooh:/media/sf_working/fbo_ggs_Linux_x64_shiphome/Disk1
$ cat /u01/app/oraInventory/logs/silentInstall2016-04-04_01-04-39AM.log
silentInstall2016-04-04_01-04-39AM.log
The installation of Oracle GoldenGate Core was successful.

I like to see the distinct log types: INFO|WARNING|ERROR.

oracle@arrow:pooh:/media/sf_working/fbo_ggs_Linux_x64_shiphome/Disk1
$ grep -e '[[:upper:]]: ' /u01/app/oraInventory/logs/installActions2016-04-04_01-04-39AM.log|cut -d ":" -f1|sort -u
INFO
WARNING
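The same filter can be sanity-checked against a fabricated log fragment (the sample lines below are made up to mirror the installActions format):

```shell
# Fabricated sample mirroring the installActions log format
cat > /tmp/sample_install.log <<'EOF'
INFO: Setting value for the property:INSTALL_OPTION in the bean:OGGInstallSettings
WARNING: CVU is not enabled. No CVU based operation will be performed.
WARNING: Validation disabled for the state init
INFO: adding the variable INSTALL_OPTION to command line args table
EOF

# Same pipeline as above: keep the log level prefix and deduplicate
grep -e '[[:upper:]]: ' /tmp/sample_install.log | cut -d ":" -f1 | sort -u
```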

I like to see what the WARNINGs are all about and to compare them with subsequent installs.

oracle@arrow:pooh:/media/sf_working/fbo_ggs_Linux_x64_shiphome/Disk1
$ grep -e '[[:upper:]]: ' /u01/app/oraInventory/logs/installActions2016-04-04_01-04-39AM.log|grep "^WARNING: "
WARNING: CVU is not enabled. No CVU based operation will be performed.
WARNING: Unable to find the namespace URI. Reason: Start of root element expected.
WARNING: Unable to find the namespace URI. Reason: Start of root element expected.
WARNING: Unable to get the value for INSTALL_OPTION. Error: null
WARNING: Unable to get the value for INSTALL_OPTION. Error: null
WARNING: Unable to get the value for INSTALL_OPTION. Error: null
WARNING: Validation disabled for the state init
WARNING: Validation disabled for the state summary
WARNING: Validation disabled for the state finish

Are the WARNINGs bogus?

oracle@arrow:pooh:/media/sf_working/fbo_ggs_Linux_x64_shiphome/Disk1
$ grep INSTALL_OPTION /u01/app/oraInventory/logs/installActions2016-04-04_01-04-39AM.log
WARNING: Unable to get the value for INSTALL_OPTION. Error: null
WARNING: Unable to get the value for INSTALL_OPTION. Error: null
WARNING: Unable to get the value for INSTALL_OPTION. Error: null
INFO: Setting value for the property:INSTALL_OPTION in the bean:OGGInstallSettings
 INSTALL_OPTION                                         ORA11g
 INSTALL_OPTION            ORA11g
INFO: adding the variable INSTALL_OPTION to command line args table
oracle@arrow:pooh:/media/sf_working/fbo_ggs_Linux_x64_shiphome/Disk1
$

Deinstall GoldenGate 12.2

Michael Dinh - Mon, 2016-04-04 02:59

I always like to know how to remove software installation in the event I am not happy with how it was installed.

GoldenGate Home is at /u01/app/oracle/12.2/ggs01

I am thinking of maybe using the same directory structure as Grid Infrastructure, i.e. /u01/app/12.2.0.1/ggs01, with the full release version.

oracle@arrow:pooh:/u01/app/oracle/12.2/ggs01
$ ls -ld deinstall/
drwxr-xr-x. 2 oracle oinstall 4096 Apr  4 00:24 deinstall/

oracle@arrow:pooh:/u01/app/oracle/12.2/ggs01
$ cd deinstall/

oracle@arrow:pooh:/u01/app/oracle/12.2/ggs01/deinstall
$ ls
deinstall.sh

oracle@arrow:pooh:/u01/app/oracle/12.2/ggs01/deinstall
$ ./deinstall.sh

ALERT: Ensure all the processes running from the current Oracle Home are shutdown prior to running this software uninstallation script.

Proceed with removing Oracle GoldenGate home: /u01/app/oracle/12.2/ggs01 (yes/no)? [no]
yes
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4073 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-04-04_12-48-07AM. Please wait ...Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.

Starting deinstall


Deinstall in progress (Monday, April 4, 2016 12:48:12 AM PDT)
............................................................... 100% Done.

Deinstall successful

End of install phases.(Monday, April 4, 2016 12:48:33 AM PDT)
End of deinstallations
Please check '/u01/app/oraInventory/logs/silentInstall2016-04-04_12-48-07AM.log' for more details.

oracle@arrow:pooh:/u01/app/oracle/12.2/ggs01/deinstall
$ cat /u01/app/oraInventory/logs/silentInstall2016-04-04_12-48-07AM.log
silentInstall2016-04-04_12-48-07AM.log
Starting deinstall
This deinstallation was successful
End of deinstallations
oracle@arrow:pooh:/u01/app/oracle/12.2/ggs01/deinstall
$

Oracle Linux on Oracle Compute Cloud

Pat Shuff - Mon, 2016-04-04 02:07
In this blog we are going to look at creation of an Infrastructure as a Service foundation using Compute as a Service and Storage as a Service to create an Oracle Linux instance. To start with we begin by logging into http://cloud.oracle.com and entering our Identity Domain, user name, and password. In this example we are connecting to metcsgse00026 as cloud.admin for the username.

If we look to the right of the Compute Cloud Service header we see a "Service Console" tab. Clicking on this allows us to create a new virtual machine by clicking on the "Create Instance" button. Not all accounts will have the create instance button. Your account needs to have the funding behind generic compute and the ability to consume either metered or un-metered IaaS options.

Note that we have two virtual machines that have previously been created. The first listed is a database service that was created. The compute infrastructure is automatically created when we create a database as a service instance. The second listed is a Java service that was created through the Java Service console. The compute infrastructure was also created for the JaaS option. We can drill into these compute services to look at security, networking, and ip addresses assigned.

To create a virtual machine we click on the "Create Instance" button which takes us to the next screen. On this screen we enter the name of the virtual machine that we are creating, a description label, the operating system image and type, and the shape of the instance. By shape we mean the number of processors and memory, since this is how compute shapes are priced.

To select the different types of operating systems, we can enter a "*" into the Image type and it lists a pull down of operating system types. You can scroll down to select different configurations and instances. In the screen shot below we see that we are selecting OEL 6.4 with a 5 GB root directory. The majority of the images are generic Linux instances with different disk configurations, different software packages installed, and different OS configurations.

The next step is to select the processor and memory size. Again, this is a pull down menu with a pre-configured set of options. We can select from 1, 2, 4, 8, and 16 virtual processors and either 15 GB of RAM or 30 GB of RAM per processor. These options might be a bit limiting for some applications or operations but are optimized and configured for database and java servers.

In this example we selected a 1 virtual processor, 15 GB of RAM, 5 GB of disk for the operating system, and Oracle Linux 6.4 as the operating system. We can enter tags so that we can associate this virtual machine with a target, production environment, system, or geographic location consuming the resources.

At this time we are not selecting any custom attributes and not using Orchestration to provision services, user accounts, passwords, or other services into our virtual machine. We click the "Next" button at the top of the screen to go to network configurations.

In the network configuration we can accept the defaults and have an ip address assigned to us. If we have an ip address on reserve we can consume that reserved address and even assign a name to it to resolve to linux6.mydomain.net if we wanted to map this to an internet name. In this example we just accept the defaults and try not to get too fancy on our first try. This will create an ip address for our server, open port 22 for ssh access, and allow us to network it to other cloud services inside our instance domain with local network routing.

The next step is to configure a disk to boot from. We are presented with the option of using a pre-configured existing disk or creating a new one. The list that we see here is a list of disks for the database and java servers that we previously created. We are going to click on the create new check box and have the system create the disk for us.

The storage property pull down allows us to select the type of disk that we are going to use. If we are trying to boot from this disk, we should select the default option. If we were installing a database we would select something like the iSCSI option to attach as the data or fast recovery disk options.

The final step is to upload the public key of our ssh key pair. This is probably the biggest differential between the three services. Amazon requires that you use their shared and secret key that they generate for you. Microsoft allows you to create a service without an ssh key and use a username and password to connect to the virtual machine. Oracle requires that you use the ssh public-private key that you generate with puttygen or ssh-keygen. The public key is uploaded during this creation time (or selected if you have previously uploaded the key). The private key is presented via putty or ssh connection to the server once it is created. The default users that are created in the Oracle instances are the oracle account that is part of the orainst group and the opc account that has sudo privileges.

Once we have everything entered, we click on next and review the screen. Clicking on the "Create" button will create a compute instance with the configuration and specifications that we requested. In this example we are creating a Linux 6.4 instance on a 1 OCPU machine with 15 GB of memory and attaching a 5 GB disk drive for the operating system.

As the system is provisioning, it updates us on progress.

When everything is done we can view the details of the virtual machine and see the ip address, which key was used, and how the service is configured.

Before we can attach to the server, we need to open up the ssh port (port 22). This is done by going into the Network tab and adding a "Security Rule". This rule needs to be defined as public internet access to our default security rule since we accepted the default network rules as our network protocol in the creation of the virtual machine.

Note in this example we selected ssh as the protocol, public internet as the source, and default as the destination. With this we can now connect using putty from our Windows desktop. We need to configure putty with the ip address of our virtual machine and our private key as the ssh protocol for connecting. We could create a tunnel for other services but in this example we are just trying to get a command line login to the operating system.

Note that we can confirm that we are running Linux 6.4 and have a 5 GB hard drive for the operating system. The whole process takes a few minutes. This is relatively fast and can be scripted with curl commands. More on that later.
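As a taste of that scripting, here is a rough dry-run sketch of what a provisioning call might look like. The endpoint path, JSON fields and shape name below are illustrative assumptions, not the documented REST API:

```shell
# All names here are hypothetical placeholders, not a verified API surface
API="https://api.compute.example.com"   # assumed REST endpoint
SHAPE="oc3"                             # assumed name for a 1 OCPU / 15 GB shape
PAYLOAD='{"shape":"'"$SHAPE"'","imagelist":"OEL_6.4_5GB","label":"linux64"}'

# Dry run: print the curl invocation instead of executing it
CMD="curl -s -X POST $API/instance/ -H 'Content-Type: application/json' -d '$PAYLOAD'"
echo "$CMD"
```

Dropping the echo would actually issue the request, assuming prior authentication against the service.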

Links for 2016-04-03 [del.icio.us]

Categories: DBA Blogs

OEM and Passwords

Anthony Shorten - Sun, 2016-04-03 18:33

I wanted to outline an interesting experience I had recently around security. Oracle, like a lot of companies requires their employees to regularly change their passwords as it is considered good security practice. There are strict rules around the password formats and their history. Luckily Oracle uses its own Identity Management solutions so the experience is simple and quick.

Recently my passwords were set to expire. I have a process I use to ensure the passwords are changed across all the technologies I use. I usually do that one morning a couple of days before they are due to expire. This time I did it at the end of the day, as it was a particularly busy one. It was a Friday and all was well.

Except I forgot one important change: my credentials in my demonstration instance of Oracle Enterprise Manager. I have a demonstration environment where I do research and development, record training and run demonstrations. After that weekend I logged in to my demonstration environment to find alerts that it could not connect using some credentials.

I have three credentials to worry about in Oracle Enterprise Manager:

  • There is a credential for Oracle Enterprise Manager to connect to My Oracle Support. This is used for checking patches, looking for critical advice, as well as registering Service Requests directly from Oracle Enterprise Manager in online mode. Typically, you would nominate an account to link to My Oracle Support (along with a Service Identifier for your site).
  • I have two named credentials I use regularly for host interaction, such as installations and running regular jobs on the machines. These are administration accounts used for the product at the operating system level. The way the machine is set up, I use two: one is the Administration account and the other is a privileged account used for low-level administration. In some cases sites will only need one per user.

I was able to correct the passwords and all my environments reported back correctly. Credential management is one of the strengths of Oracle Enterprise Manager. Next time I will add the OEM credentials to my checklist.

Partner Webcast – Oracle Mobile Strategy and Mobility Offerings Overview

Mobile is on the mind of every business, as Mobile Is the New First Screen. Enabling existing business applications on handhelds can be very challenging, but even building new business applications...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Three more principles

Andrew Clarke - Sun, 2016-04-03 13:00
Here are some more principles which can help us design better programs. These principles aren't part of an organized theory, and they aren't particularly related to any programming paradigm. But each is part of the canon, and each is about the relationship between a program's interface and its implementation.
The Principle Of Least Astonishment
Also known as the Principle of Least Surprise, the rule is simple: programs should do what we expect them to do. This is more than simply honouring the contract of the interface. It means complying with accepted conventions of our programming. In PL/SQL programming there is a convention that functions are read-only, or at least do not change database state. Another such convention is that low-level routines do not execute COMMIT statements; transaction management is the prerogative of the program at the top of the call stack, which may be interacting directly with a user or may be an autonomous batch process.

Perhaps the most common flouting of the Principle Of Least Astonishment is this:

    exception
       when others then
          null;

It is reasonable to expect that a program will hurl an exception if something has gone awry. Unfortunately, we are not as astonished as we should be when we find a procedure with an exception handler which swallows any and every exception.
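A less astonishing handler, sketched here, records what it caught and then re-raises, so the caller still sees the failure (log_error() is a hypothetical logging routine, not from the original article):

```sql
exception
   when others then
      log_error(sqlcode, sqlerrm);  -- hypothetical autonomous logging procedure
      raise;                        -- preserve the caller's expectation of failure
```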
Information Hiding Principle
Another venerable principle, this one was expounded by David Parnas in 1972. It requires that a calling program should not need to know anything about the implementation of a called program. The definition of the interface should be sufficient. It is the cornerstone of black-box programming. The virtue of Information Hiding is that knowledge of internal details inevitably leads to coupling between the called and calling routines: when we change the called program we need to change the caller too. We honour this principle any time we call a procedure in a package owned by another schema, because the EXECUTE privilege grants visibility of the package specification (the interface) but not the body (the implementation).
The Law Of Leaky Abstractions
Joel Spolsky coined this one: "All non-trivial abstractions, to some degree, are leaky." No matter how hard we try, some details of the implementation of a called program will be exposed to the calling program, and will need to be acknowledged. Let's consider this interface again:

    function get_employee_recs
( p_deptno in number )
return emp_refcursor;

We know it returns a result set of employee records. But in what order? Sorting by EMPNO would be pretty useless, given that it is a surrogate key (and hence without meaning). Other candidates - HIREDATE, SAL - will be helpful for some cases and irrelevant for others. One approach is to always return an unsorted set and leave it to the caller to sort the results; but it is usually more efficient to sort records in a query rather than a collection. Another approach would be to write several functions - get_employee_recs_sorted_hiredate(), get_employee_recs_sorted_sal() - but that leads to a bloated interface which is hard to understand. Tricky.
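One compromise, sketched here, is to acknowledge the leak in the interface itself: accept a sort mode as a parameter, constrained by declared constants, so the ordering is done in the query but the choice stays with the caller (the names below are illustrative, not from the original article):

```sql
-- Illustrative sketch: make the ordering leak part of the published contract
c_sort_hiredate constant varchar2(10) := 'HIREDATE';
c_sort_sal      constant varchar2(10) := 'SAL';

function get_employee_recs
   ( p_deptno  in number
   , p_sort_by in varchar2 default c_sort_hiredate )
   return emp_refcursor;
```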
Conclusion
Principles are guidelines. There are tensions between them. Good design is a matter of trade-offs. We cannot blindly follow Information Hiding and ignore the Leaky Abstractions. We need to exercise our professional judgement (which is a good thing).

Part of the Designing PL/SQL Programs series

It's all about the interface

Andrew Clarke - Sun, 2016-04-03 12:59
When we talk about program design we're mainly talking about interface design. The interface is the part of our program that the users interact with. Normally discussion of UI focuses on GUI or UX, that is, the interface with the end user of our application.

But developers are users too.

Another developer writing a program which calls a routine in my program is a user of my code (and, I must remember, six months after I last touched the program, I am that other developer). A well-designed interface is frictionless: it can be slotted into a calling program without too much effort. A poor interface breaks the flow: it takes time and thought to figure it out. In the worst case we have to scramble around in the documentation or the source code.

Formally, an interface is the mechanism which allows the environment (the user or agent) to interact with the system (the program). What the system actually does is the implementation: the interface provides access to the implementation without the environment needing to understand the details. In PL/SQL programs the implementation will usually contain a hefty chunk of SQL. The interface mediates access to data.

An interface is a contract. It specifies what the caller must do and what the called program will do in return. Take this example:

function get_employee_recs
     ( p_deptno in number )
     return emp_refcursor;

The contract says, if the calling program passes a valid DEPTNO the function will return records for all the employees in that department, as a strongly-typed ref cursor. Unfortunately the contract doesn't say what will happen if the calling program passes an invalid DEPTNO. Does the function return an empty set or throw an exception? The short answer is we can't tell. We must rely on convention or the documentation, which is an unfortunate gap in the PL/SQL language; the Java keyword throws is quite neat in this respect.
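Since PL/SQL has no throws clause, the nearest workaround, sketched here, is to declare the exception in the package specification and document it alongside the function (e_unknown_dept is a made-up name for illustration):

```sql
-- Sketch: make the failure mode part of the published contract
e_unknown_dept exception;  -- raised when p_deptno matches no department

function get_employee_recs
   ( p_deptno in number )
   return emp_refcursor;
-- Contract: returns employees of p_deptno;
-- raises e_unknown_dept if p_deptno is invalid
```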
The interface is here to help
The interface presents an implementation of business logic. The interface is a curated interpretation, and doesn't enable unfettered access. Rather, a well-designed interface helps a developer use the business logic in a sensible fashion. Dan Lockton calls this Design With Intent: good design expresses how a product should be used. It doesn't have to be complicated. We can use simple control mechanisms which help other developers use our code properly.
Restriction of access
Simply, the interface restricts access to certain functions or denies it altogether. Only certain users are allowed to view salaries, and even fewer to modify them. The interface to Employee records should separate salary functions from more widely-available functions. Access restriction can be implemented in a hard fashion, using architectural constructs (views, packages, schemas) or in a soft fashion (using VPD or Data Vault). The hard approach benefits from clarity, the soft approach offers flexibility.
Forcing functions
If certain things must be done in a specific order then the interface should only offer a method which enforces the correct order. For instance, if we need to insert records into a parent table and a child table in the same transaction (perhaps a super-type/sub-type implementation of a foreign key arc) a helpful interface will only expose a procedure which inserts both records in the correct order.
Mistake-proofing
A well-designed interface prevents its users from making obvious mistakes. The signature of a procedure should be clear and unambiguous. Naming is important. If a parameter represents a table attribute the parameter name should echo the column name: p_empno is better than p_id. Default values for parameters should lead developers to sensible and safe choices. If several parameters have default values they must play nicely together: accepting all the defaults should not generate an error condition.
Abstraction
Abstraction is just another word for interface. It allows us to focus on the details of our own code without needing to understand the concrete details of the other code we depend upon. That's why good interfaces are the key to managing large codebases.

Part of the Designing PL/SQL Programs series

Working with the Interface Segregation Principle

Andrew Clarke - Sun, 2016-04-03 12:55
Obviously Interface Segregation is crucial for implementing restricted access. For any given set of data there are three broad categories of access:

  • reporting 
  • manipulation 
  • administration and governance 

So we need to define at least one interface - packages - for each category in order that we can grant the appropriate access to different groups of users: read-only users, regular users, power users.

But there's more to Interface Segregation. This example is based on a procedure posted on a programming forum. Its purpose is to maintain medical records relating to a patient's drug treatments. The procedure has some business logic (which I've redacted) but its overall structure is defined by the split between the Verification task and the De-verification task, and flow is controlled by the value of the p_verify_mode parameter.
 
procedure rx_verification
  (p_drh_id in number,
   p_patient_name in varchar2,
   p_verify_mode in varchar2)
as
  new_rxh_id number;
  rxh_count number;
  rxl_count number;
  drh_rec drug_admin_history%rowtype;
begin
  select * into drh_rec ....;
  select count(*) into rxh_count ....;

  if p_verify_mode = 'VERIFY' then

    update drug_admin_history ....;
    if drh_rec.pp_id <> 0 then
      update patient_prescription ....;
    end if;
    if rxh_count = 0 then
      insert into prescription_header ....;
    else
      select rxh_id into new_rxh_id ....;
    end if;
    insert into prescription_line ....;
    if drh_rec.threshhold > 0 then
      insert into prescription_line ....;
    end if;

  elsif p_verify_mode = 'DEVERIFY' then

    update drug_admin_history ....;
    if drh_rec.pp_id <> 0 then
      update patient_prescription ....;
    end if;
    select rxl_rxh_id into new_rxh_id ....;
    delete prescription_line ....;
    delete prescription_header ....;

  end if;
end;
Does this procedure have a Single Responsibility?  Hmmm. It conforms to Common Reuse - users who can verify can also de-verify. It doesn't break Common Closure, because both tasks work with the same tables. But there is a nagging doubt. It appears to be doing two things: Verification and De-verification.

So, how does this procedure work as an interface? There is a definite problem when it comes to calling the procedure: how do I as a developer know what value to pass to p_verify_mode?

  rx_management.rx_verification
(p_drh_id => 1234,
p_patient_name => 'John Yaya',
p_verify_mode => ???);
The only way to know is to inspect the source code of the procedure. That breaks the Information Hiding principle, and it might not be viable (if the procedure is owned by a different schema). Clearly the interface could benefit from a redesign. One approach would be to declare constants for the acceptable values; while we're at it, why not define a PL/SQL subtype for verification mode and tweak the procedure's signature to make it clear that's what's expected:         

create or replace package rx_management is

subtype verification_mode_subt is varchar2(10);
c_verify constant verification_mode_subt := 'VERIFY';
c_deverify constant verification_mode_subt := 'DEVERIFY';

procedure rx_verification
(p_drh_id in number,
p_patient_name in varchar2,
p_verify_mode in verification_mode_subt);

end rx_management;
Nevertheless it is still possible for a caller program to pass a wrong value: 

  rx_management.rx_verification
(p_drh_id => 1234,
p_patient_name => 'John Yaya',
p_verify_mode => 'Verify');
What happens then? Literally nothing. The value drops through the control structure without satisfying any condition. It's an unsatisfactory outcome. We could change the implementation of rx_verification() to validate the parameter value and raise an exception. Or we could add an ELSE branch and raise an exception. But those are runtime exceptions. It would be better to mistake-proof the interface so that it is not possible to pass an invalid value in the first place.

Which leads us to a Segregated Interface:
create or replace package rx_management is

procedure rx_verification
(p_drh_id in number,
p_patient_name in varchar2);

procedure rx_deverification
(p_drh_id in number);

end rx_management;
Suddenly it becomes clear that the original procedure was poorly named (I call rx_verification() to issue an RX de-verification?!)  We have two procedures but their usage is now straightforward and the signatures are cleaner (the p_patient_name is only used in the Verification branch so there's no need to pass it when issuing a De-verification).
Summary
Interface Segregation creates simpler and safer controls but more of them. This is a general effect of the Information Hiding principle. It is a trade-off. We need to be sensible. Also, this is not a proscription against flags. There will always be times when we need to pass instructions to called procedures to modify their behaviour. In those cases it is important that the interface includes a definition of acceptable values.

Part of the Designing PL/SQL Programs series

Introducing the SOLID design principles

Andrew Clarke - Sun, 2016-04-03 12:55
PL/SQL programming standards tend to focus on layout (case of keywords, indentation, etc), naming conventions, and implementation details (such as use of cursors).  These are all important things, but they don't address questions of design. How easy is it to use the written code?  How easy is it to test? How easy will it be to maintain? Is it robust? Is it secure?

Simply put, there are no agreed design principles for PL/SQL. So it's hard to define what makes a well-designed PL/SQL program.
The SOLID principles
It's different for object-oriented programming. OOP has more design principles and paradigms and patterns than you can shake a stick at. Perhaps the most well-known are the SOLID principles, which were first mooted by Robert C. Martin, AKA Uncle Bob, back in 1995 (although it was Michael Feathers who coined the acronym).

Although Martin put these principles together for Object-Oriented code, they draw on a broader spectrum of programming practice. So they are transferable, or at least translatable, to the other forms of modular programming. For instance, PL/SQL.
Single Responsibility Principle
This is the foundation stone of modular programming: a program unit should do only one thing. Modules which do only one thing are easier to understand, easier to test and generally more versatile. Higher level procedures can be composed of lower level ones. Sometimes it can be hard to define what "one thing" means in a given context, but some of the other principles provide clarity. Martin's formulation is that there should be just one axis of change: there's just one set of requirements which, if modified or added to, would lead to a change in the package.
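In PL/SQL terms, single-responsibility units compose naturally into higher-level ones. A hypothetical sketch (the payroll package and its program units are invented for illustration):

```sql
create or replace package payroll is

    -- Each function does one thing...
    function calc_gross (p_emp_id in number) return number;
    function calc_tax (p_gross in number) return number;

    -- ...and the higher-level procedure is composed from them.
    procedure pay_employee (p_emp_id in number);

end payroll;
```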
Open/closed Principle
The slightly obscure name conceals a straightforward proposal. It means program units are closed to modification but open to extension. If we need to add new functionality to a package, we create a new procedure rather than modifying an existing one. (Bertrand Meyer, the father of Design By Contract programming, originally proposed it; in OO programming this principle is implemented through inheritance or polymorphism.) Clearly we must fix bugs in existing code. Nor does it rule out refactoring: we can tune the implementation provided we don't change the behaviour. This principle mainly applies to published program units, ones referenced by other programs in Production. The principle can also be looser when the code is being used within the same project, because we can negotiate changes with our colleagues.
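For instance, a published package might be extended with a new procedure rather than modified (order_mgmt and its procedures are hypothetical names for illustration):

```sql
create or replace package order_mgmt is

    -- Published and referenced in Production: closed to modification.
    procedure place_order (p_customer_id in number);

    -- New requirement met by extension: a new procedure,
    -- not a mode flag bolted onto place_order.
    procedure place_priority_order (p_customer_id in number);

end order_mgmt;
```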
Liskov Substitution Principle
This is a real Computer Science-y one, good for dropping into code reviews. Named for Barbara Liskov, it defines rules for behavioural sub-typing. If a procedure has a parameter defined as a base type, it must be able to take an instance of any sub-type without changing the behaviour of the program. So a procedure which uses
IS OF
to test the type of a passed parameter and do something different is violating Liskov Substitution. Obviously we don't make much use of Inheritance in PL/SQL programming, so this Principle is less relevant than in other programming paradigms.
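To illustrate, here is a hypothetical type hierarchy and a procedure that branches on the concrete sub-type of its parameter, which is exactly the violation described (the types and names are invented):

```sql
create type vehicle_t as object (id number) not final;
/
create type truck_t under vehicle_t (payload number);
/
create or replace procedure process_vehicle (p_vehicle in vehicle_t) is
begin
    if p_vehicle is of (truck_t) then
        -- Behaviour differs for the sub-type:
        -- Liskov Substitution is violated.
        null;
    end if;
end process_vehicle;
/
```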
Interface Segregation Principle
This principle is about designing fine-grained interfaces. It is an extension of the Single Responsibility Principle. Instead of building one huge package which contains all the functions relating to a domain, build several smaller, more cohesive packages. For example, Oracle's Advanced Queuing subsystem comprises five packages, which manage different aspects of AQ. Users who write to or read from queues have
DBMS_AQ
; users who manage queues and subscribers have
DBMS_AQADM
.
Dependency Inversion Principle
Interactions between programs should be through abstract interfaces rather than concrete ones. Abstraction means the implementation of one side of the interface can change without changing the other side. PL/SQL doesn't support abstract objects in the way that, say, Java does. To a certain extent Package Specifications provide a layer of abstraction, but there can only be one concrete implementation. Using Types to pass data between procedures is an interesting idea, which we can use to decouple data providers and data consumers in a useful fashion.
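A sketch of that idea: provider and consumer packages depend only on a shared Type, not on each other (all the names here are invented for illustration):

```sql
create type customer_t as object
    (id number, name varchar2(100));
/
create or replace package cust_provider is
    -- Returns the shared type; the consumer never
    -- sees how the data is fetched.
    function get_customer (p_id in number) return customer_t;
end cust_provider;
/
create or replace package cust_consumer is
    -- Accepts the shared type; the provider never
    -- sees how the data is used.
    procedure bill_customer (p_customer in customer_t);
end cust_consumer;
/
```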
Applicability of SOLID principles in PL/SQL
So it seems like we can apply SOLID practices to PL/SQL. True, some Principles fit better than others. But we have something which we might use to distinguish good design from bad when it comes to PL/SQL interfaces.

The SOLID principles apply mainly to individual modules. Is there something similar we can use for designing module groups? Why, yes there is. I'm glad you asked.

Part of the Designing PL/SQL Programs series
