I have been thinking about writing a Pythian blog post for a long time, and today I finally took the opportunity.
In a DBA's life, it's common to get a request to move a database across servers because of an RDBMS upgrade plan or new hardware. It's less common, however, to be asked to move the Oracle RDBMS home within the same server. Such a request may arise from improper planning, such as creating the Oracle home on the root mount point on Unix platforms, or on the C:\ drive on Windows platforms (the system mount point/drive).
The cloning feature Oracle introduced in 10gR2 comes in handy for this kind of request. Using the clone.pl script on Unix platforms is quite straightforward, as we have full control over Unix processes. The thread architecture on the Windows platform makes this a bit different, but not complex.
Let's assume the current Oracle home is located in the "C:\oracle\product\11.1.0" directory, and the new target directory is "D:\oracle\product\11.1.0". Assume the database name is TEST. The steps below describe the required activities.
Step 1. Log into the Windows server as a local server user that is a member of the local Administrators and ora_dba groups. Leave the existing Oracle database running and copy the entire contents of the "C:\oracle\product\11.1.0" directory to the "D:\oracle\product\11.1.0" directory. Ensure the copy completes without any issues.
Step 2. Capture the existing Oracle home inventory details for reference. Open a command prompt window (Window I) and execute the commands below.
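A sketch of what to capture here, assuming OPatch output is what Step 5 compares against (paths as defined above):

C:\>set ORACLE_HOME=C:\oracle\product\11.1.0
C:\>cd /d C:\oracle\product\11.1.0\OPatch
C:\oracle\product\11.1.0\OPatch>opatch version
C:\oracle\product\11.1.0\OPatch>opatch lsinventory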
Step 3. Open a new command prompt (Window II) and set the environment variables appropriately.
C:\>set PERL5LIB=D:\oracle\product\11.1.0\perl\5.8.3\lib ==> The Perl version under the Oracle home may differ from release to release; adjust the path accordingly.
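Window II should also point at the new home before cloning; a minimal sketch (the exact PATH entries are an assumption):

C:\>set ORACLE_HOME=D:\oracle\product\11.1.0
C:\>set PATH=D:\oracle\product\11.1.0\bin;%PATH%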
Step 4. Run the clone.pl script from Window II.
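Note that clone.pl ships under the home's clone\bin directory, so switch there first (the path below is assumed from the directories above):

C:\>cd /d D:\oracle\product\11.1.0\clone\bin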
C:\>perl clone.pl ORACLE_HOME="D:\oracle\product\11.1.0" ORACLE_HOME_NAME="OraDB11gR1_home" ORACLE_BASE="D:\oracle"
This command should complete without any issues. Review the log file C:\Program Files\Oracle\Inventory\logs\cloneActions<DATE>.log for verification.
Excerpts from the log file:
INFO: ca page to be shown: false
INFO: exitonly tools to be excuted passed: 0
*** End of Installation Page***
The cloning of OraDB11gR1_home was successful. ==> You should see this message.
Step 5. Execute the following commands from Window II to verify the newly cloned home configuration.
C:\>opatch version ==> Output should match the output obtained in Step 2.
C:\>opatch lsinventory ==> Output should match the output obtained in Step 2.
Step 6. Arrange up to 15 minutes of downtime for the database and bring the TEST database down. Open the Services utility and stop the "OracleOraDB11g_homeTNSListener" and "OracleServiceTEST" services.
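Equivalently, the two services can be stopped from the command line; a sketch using the service names above:

C:\>net stop OracleOraDB11g_homeTNSListener
C:\>net stop OracleServiceTEST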
Step 7. In Window I, execute the command below to delete the existing listener service from the server.
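The exact command isn't reproduced here; one hedged option is to remove the (already stopped) Windows service by name with the built-in sc utility, using the listener service name from Step 6:

C:\>sc delete OracleOraDB11g_homeTNSListener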
Step 8. Execute this command from Window I to delete the existing database service from the server.
C:\>ORADIM -DELETE -SID TEST
Step 9. Open the Services utility and confirm that all services belonging to the old Oracle home, including "Oracle TEST VSS Writer Service" and "OracleJobSchedulerTEST", have been deleted.
Step 10. Invoke Oracle Net Configuration Assistant from Window II to configure the new listener service.
Step 11. Create a new database service from the new Oracle home in Window II.
C:\>ORADIM -NEW -SID TEST -SYSPWD *** -STARTMODE auto -SPFILE
Note: This command will also start the database instance.
Step 12. Open the Services utility and confirm the following services were created from the new Oracle home. Modify the "Startup Type" for these services as needed.
Oracle TEST VSS Writer Service
Now work with your application administrator, and confirm that everything works fine :)
If you use tnsping.exe and sqlplus.exe, the way sqlnet.ora and tnsnames.ora are located differs between the two tools.
Let's take the following setup:
C:\tmp>type dir1\sqlnet.ora
NAMES.DEFAULT_DOMAIN = (EXAMPLE.COM)
NAMES.DIRECTORY_PATH = (TNSNAMES)

C:\tmp>type dir1\tnsnames.ora
db.example.com=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=srv1.example.com)(PORT=1521))(CONNECT_DATA=(SID=db01)))
db.example.org=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=srv1.example.org)(PORT=1521))(CONNECT_DATA=(SID=db01)))

C:\tmp>type dir2\sqlnet.ora
NAMES.DEFAULT_DOMAIN = (EXAMPLE.ORG)
NAMES.DIRECTORY_PATH = (TNSNAMES)

C:\tmp>type dir2\tnsnames.ora
db.example.com=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=srv2.example.com)(PORT=1521))(CONNECT_DATA=(SID=db02)))
db.example.org=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=srv2.example.org)(PORT=1521))(CONNECT_DATA=(SID=db02)))
You set TNS_ADMIN to dir1 and your current directory is dir2.
Let’s try TNSPING.EXE first
C:\tmp>cd dir2

C:\tmp\dir2>set TNS_ADMIN=C:\tmp\dir1

C:\tmp\dir2>tnsping db

TNS Ping Utility for 64-bit Windows: Version 11.2.0.4.0 - Production on 25-NOV-2013 15:47:31

Copyright (c) 1997, 2013, Oracle. All rights reserved.

Used parameter files:
C:\tmp\dir1\sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=srv2.example.com)(PORT=1521))(CONNECT_DATA=(SID=db02)))
OK (0 msec)
TNSPING.EXE used the sqlnet.ora in the %TNS_ADMIN% directory (hence the EXAMPLE.COM default domain) but the tnsnames.ora in the current directory (hence db02 on srv2).
Let’s try with sqlplus
C:\tmp>cd dir2

C:\tmp\dir2>set TNS_ADMIN=C:\tmp\dir1

C:\tmp\dir2>sqlplus -L system@db

SQL*Plus: Release 11.2.0.4.0 Production on Mon Nov 25 16:01:15 2013

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Enter password:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select * from global_name;

GLOBAL_NAME
-------------------------------------------------
DB02.EXAMPLE.ORG
SQLPLUS.EXE used the sqlnet.ora in the current directory (hence the EXAMPLE.ORG default domain) and the tnsnames.ora in the current directory (again db02, this time via db.example.org).
This does not reproduce on Linux
If you redirect output that is not completely plain text (like the contents of an spfile) to a file in PowerShell, what lands in the file may not match what you saw in the console.
- Without redirection
PS C:\> Select-String "compatible" .\spfileDB01.ora

spfileDB01.ora:13:*.compatible='11.2.0.4.0'
- With redirection
PS> Select-String "compatible" .\spfileDB01.ora > compatible.txt
PS> vim -b .\compatible.txt

ÿþ^M^@
^@s^@p^@f^@i^@l^@e^@D^@B^@0^@0^@1^@.^@o^@r^@a^@:^@1^@3^@:^@*^@.^@c^@o^@m^@p^@a^@t^@i^@b^@l^@e^@=^@'^@1^@1^@.^@2^@.^@0^@.^@4^@.^@0^@'^@^M^@ ^@
- With redirection and conversion to ASCII
PS> Select-String "compatible" .\spfileDB01.ora | Out-File -Encoding ASCII .\compatible.txt
PS> vim .\compatible.txt

spfileDB01.ora:13:*.compatible='11.2.0.4.0'
With Out-File (instead of >), you can specify the encoding, which does the trick: in Windows PowerShell, plain > redirection is equivalent to Out-File with its default Unicode (UTF-16LE) encoding, which is what produced the ÿþ byte-order mark above.
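A minimal sketch of the equivalence (Windows PowerShell; PowerShell 7 later changed the default to UTF-8):

PS> # Plain redirection is shorthand for Out-File with its default Unicode (UTF-16LE) encoding:
PS> Select-String "compatible" .\spfileDB01.ora > .\compatible.txt
PS> Select-String "compatible" .\spfileDB01.ora | Out-File -Encoding Unicode .\compatible.txt
PS> # Forcing ASCII keeps the file readable by byte-oriented tools:
PS> Select-String "compatible" .\spfileDB01.ora | Out-File -Encoding ASCII .\compatible.txt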
As cloud technologies continue to advance, entrepreneurs are becoming more capable of utilizing these services to launch new enterprises. Because the cloud provides business owners with increased flexibility, managing information according to the specific needs of the company has become more feasible. Additionally, the scalable storage features of cloud-based infrastructures make saving capital expenses easier for burgeoning enterprises.
Simplifying the tech process for new businesses
According to Enterprise Irregulars, the cloud's unique memory capabilities are particularly important for small companies. While new businesses are in the startup phase, it can be difficult to determine how much storage is necessary until after an adjustment period has occurred. Additionally, because decision-makers are typically responsible for handling IT, it is also crucial that simple digital architectures are deployed that can be easily manipulated.
As cloud services provide all of these functions, the source noted that small business owners should consider leveraging one of these solutions first. A cloud-enhanced infrastructure can also reach a wider audience than legacy strategies. Now that the cloud has become so widely used across a range of industries, such as retail, academia and law, individuals are often already familiar with how the technology works when they reach their new positions. For this reason, the source reported that companies utilizing the cloud have a distinct advantage over organizations that do not, since the latter will require additional employee training.
According to Cloud Tweaks, unique applications deployed across a digital architecture can also provide young enterprises with better cost savings. Database administration services, for instance, can be attached to an organization's information to provide decision-makers with more security and tech support. For small companies that are just starting out, this can be a way to relieve owners from having to focus on IT.
As decision-makers begin the process of building their new enterprises, it's important that cost-effectiveness remains a vital component. By considering the above-mentioned strategies, companies can begin acting within their industries efficiently and with a plan to save on capital.
RDX offers a full suite of cloud migration and administrative services that can be tailored to meet any customer's needs. To learn more about our full suite of cloud migration and support services, please visit our Cloud DBA Service page or contact us.
Just a quick note to say the top-level approval for OTNYathra 2014 – India OTN Tour has been granted. Next, we’ve got to submit our individual travel approvals and see how that goes. If everything goes to plan, I will be representing the Oracle ACE Program on this tour in February, which visits the following locations.
- Jallandar – 18th February
- Delhi - 20th February
- Mumbai - 22nd February
- Pune - 23rd February
- Hyderabad - 25th February
- Bangalore - 27th February
- Chennai – 1st March
Fingers crossed everything goes to plan.
Tim…
Generalizing about SaaS (Software as a Service) is hard. To prune some of the confusion, let’s start by noting:
- SaaS has been around for over half a century, and at times has been the dominant mode of application delivery.
- The term multi-tenancy is being used in several different ways.
- Multi-tenancy, in the purest sense, is inessential to SaaS. It’s simply an implementation choice that has certain benefits for the SaaS provider. And by the way, …
- … salesforce.com, the chief proponent of the theory that true multi-tenancy is the hallmark of true SaaS, abandoned that position this week.
- Internet-based services are commonly, if you squint a little, SaaS. Examples include but are hardly limited to Google, Twitter, Dropbox, Intuit, Amazon Web Services, and the company that hosts this blog (KnownHost).
- Some of the core arguments for SaaS’ rise, namely the various efficiencies of data center outsourcing and scale, apply equally to the public cloud, to SaaS, and to AEaaS (Anything Else as a Service).
- These benefits are particularly strong for inherently networked use cases. For example, you really don’t want to be hosting your website yourself. And salesforce.com got its start supporting salespeople who worked out of remote offices.
- In theory and occasionally in practice, certain SaaS benefits, namely the outsourcing of software maintenance and updates, could be enjoyed on-premises as well. Whether that could be a bigger deal going forward is something I'll explore in future posts.
For smaller enterprises, the core outsourcing argument is compelling. How small? Well:
- What’s the minimum level of IT operations headcount needed for mission-critical systems? Let’s just say “several”.
- What does that cost? Fully burdened, somewhere in the six figures.
- What fraction of the IT budget should such headcount be? As low a double digit percentage as possible.
- What fraction of revenues should be spent on IT? Some single-digit percentage.
So except for special cases, an enterprise with less than $100 million or so in revenue may have trouble affording on-site data processing, at least at a mission-critical level of robustness. It may well be better to use NetSuite or something like that, assuming needed features are available in SaaS form.*
*Truth be told, I’m not up to speed on mid-range SaaS application suite alternatives.
Continuing that thought — if you’re a mid-range application software provider, you have to develop a SaaS version of your product line. That’s a very different business model than the apps + OEMed platform you’re probably providing now, but it’s the best way to serve your customers going forward. And by the way — while mid-range application software is commonly sold on a regional basis, SaaS can be sold more globally; after all, the need for onsite service is eliminated, and price points should in most cases fit with telephone sales. Yes, national language and regional data privacy rules are both concerns, but they still leave the available markets looking much bigger than regional resellers have traditionally enjoyed. So expect shake-outs in a whole lot of vertical markets, as vendors horn in on each other’s territories, and a few elephantine winners perhaps emerge.
The argument above assumes that extreme reliability is needed. So there’s nothing necessarily wrong with a small team of business analysts sticking an RDBMS appliance* in a corner and managing it themselves. If it sputters from time to time, who cares; using it still may be easier than getting that data in and out of the cloud. But eventually, if all the data is remote anyway — SaaS, website, etc. — then it may make sense to do analytics remotely as well.
*Previously, that appliance might have been from Netezza; now, my first thought is the cheaper — albeit more limited — Infobright.
The arguments that direct smaller companies toward SaaS apply to large enterprises too, but they aren’t as dispositive. Larger enterprises can actually afford to do their own IT operations if they want to. What’s more, moving away from in-house operations is harder for big firms, due to the larger and more customized portfolio of legacy systems they’re likely to have. That said:
- Almost all enterprises should have their internet-facing systems offsite, even if just via co-location. The core reasons are that ingesting high-volume inbound network traffic is inherently difficult, and security issues make it much tougher yet. In addressing these challenges, specialists enjoy significant economies of scale.
- Most enterprises will have plenty of SaaS silos. If nothing else:
- Complex machinery will increasingly “phone home” for help staying in good working order. That’s a form of SaaS.
- Information providers and aggregators tend to deliver via SaaS.
- Various kinds of collaboration and communication apps, from Google Mail to Dropbox, live in the cloud. Personal productivity applications, from word processing to Photoshop, may be following.
- “Rodney Dangerfield” departments — i.e., ones unhappy with the respect and attention they get from central IT — often turn to SaaS or similar outsourcing. Human resources is an obvious example, from Automatic Data Processing to Employease to, these days, Workday.
That leaves us with the questions as to when and how large enterprises should or will move their core applications to SaaS and/or the cloud. Given the length of this post, I won’t try to answer them now. But for starters:
- Enterprises don’t like to rip and replace their apps, except in consolidation projects, as long as they can avoid doing so.
- Cloud/remote computing economies are less convincing if you already have your computer rooms staffed and set up.
- A key benefit of SaaS is that vendors control and drive the upgrade cycles. One cost of that is restrictions on customization, although you can also build apps and app extensions on PaaS/DBaaS/WaaS (Platform/DataBase/Whatever as a Service) offerings such as force.com.
- Lock-in is a serious concern, for application and platform offerings alike. Not only are you betting on one vendor’s software black box, you’re also betting on its remote computing operation. If you grow dissatisfied with either, or with their pricing, you may not have much opportunity to escape.
If you’re a user of Oracle’s data integration products, you’re probably aware that ODI12c came out last month, with some of the new features covered in posts on the blog here, here and here. Rittman Mead were actually on the beta program for 12c, with several of our team attending preview events in the UK and USA at the start of the year, and then running preview releases back in our development labs as the product neared GA.
As CTO I sponsored our involvement in the ODI12c beta program, and was invited down to Oracle’s offices in Reading to take part in an interview with Kulvinder Hari, Director Product Management – Fusion Middleware, to talk about our involvement in the program, and what we saw as the most interesting new features in this latest release. You can access the full video here, and I also talk about the wider data integration market, products such as GoldenGate and EDQ, and the benefits OWB customers will get when migrating to, and interoperating with, Oracle Data Integrator.
The interview was actually part of a wider set of activities around the ODI12c launch, and you can read a recap of the Oracle GoldenGate 12c and Oracle Data Integration 12c webcast on the Oracle website, as well as download resources on ODI12c from an oracle.com microsite. Keep an eye on the blog as well over the next few months, as we post more content on ODI12c along with the other new Oracle Data Integration 12c releases.
“Everything should be made as simple as possible, but not simpler.” – Albert Einstein
I am curious as to why anyone would use ASM for a standalone database, as it introduces the extra complexity of having to install, maintain, and upgrade Grid Infrastructure.
I asked the DBAs and here are some of the responses:
ASM always bypasses the OS caching layer (direct I/O), which can be difficult and inconsistent to achieve with a traditional filesystem.
ASM allows moving to another type of storage in the future.
ASM provides transparent balance and migration between any type of block storage.
ASM provides online operations like adding/removing/resizing disks/LUNs from one disk array to another.
I asked a former manager who knows EMC storage and Solaris and here are his responses:
1. OS Caching: this was solved a long time ago. As you know, even plain old UFS on Solaris has been able to do this for years with Oracle.
2. Moving to another type of storage. That’s a function of either the volume manager on the OS or the array itself (both are possible and depend on your setup).
Oracle isn’t bringing anything new to the table with ASM.
3. Balance and migration. Again, a function of the volume manager and/or the array. Nothing new being done by ASM.
4. Online operations and migrations. Again, a function of the volume manager and/or the array. ASM brings nothing new.
It seems to me that ASM does, really, one thing: it eliminates the OS volume manager but at the same time expands the scope of the DBA to now include storage problems. That sounds like a distraction to me.
I would love to see a live debate on this.
Still not 100% satisfied, I started to research and explore ASM online operations. ASM disks VENDOR101, VENDOR102 and VENDOR103 are mapped to /dev/sdb1, /dev/sdc1 and /dev/sdd1:
[root@looney ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8,  0 Nov 23 09:23 /dev/sda
brw-rw---- 1 root disk 8,  1 Nov 23 09:23 /dev/sda1
brw-rw---- 1 root disk 8,  2 Nov 23 09:23 /dev/sda2
brw-rw---- 1 root disk 8, 16 Nov 23 09:23 /dev/sdb
brw-rw---- 1 root disk 8, 17 Nov 23 09:25 /dev/sdb1
brw-rw---- 1 root disk 8, 32 Nov 23 09:23 /dev/sdc
brw-rw---- 1 root disk 8, 33 Nov 23 09:27 /dev/sdc1
brw-rw---- 1 root disk 8, 48 Nov 23 09:23 /dev/sdd
brw-rw---- 1 root disk 8, 49 Nov 23 09:27 /dev/sdd1
brw-rw---- 1 root disk 8, 64 Nov 23 09:23 /dev/sde
brw-rw---- 1 root disk 8, 65 Nov 23 09:23 /dev/sde1
brw-rw---- 1 root disk 8, 80 Nov 23 09:23 /dev/sdf
brw-rw---- 1 root disk 8, 81 Nov 23 09:23 /dev/sdf1
brw-rw---- 1 root disk 8, 96 Nov 23 09:23 /dev/sdg
brw-rw---- 1 root disk 8, 97 Nov 23 09:23 /dev/sdg1

[root@looney ~]# ls -l /dev/oracleasm/disks/
total 0
brw-rw---- 1 oracle dba 8, 17 Nov 23 09:25 VENDOR101
brw-rw---- 1 oracle dba 8, 33 Nov 23 09:27 VENDOR102
brw-rw---- 1 oracle dba 8, 49 Nov 23 09:27 VENDOR103

[root@looney ~]# /etc/init.d/oracleasm listdisks
VENDOR101
VENDOR102
VENDOR103

[root@looney ~]# /etc/init.d/oracleasm querydisk -p VENDOR101
Disk "VENDOR101" is a valid ASM disk
/dev/sdb1: LABEL="VENDOR101" TYPE="oracleasm"
[root@looney ~]# /etc/init.d/oracleasm querydisk -p VENDOR102
Disk "VENDOR102" is a valid ASM disk
/dev/sdc1: LABEL="VENDOR102" TYPE="oracleasm"
[root@looney ~]# /etc/init.d/oracleasm querydisk -p VENDOR103
Disk "VENDOR103" is a valid ASM disk
/dev/sdd1: LABEL="VENDOR103" TYPE="oracleasm"

Create ASM disks VENDOR201, VENDOR202 and VENDOR203 using /dev/sde1, /dev/sdf1 and /dev/sdg1:
[root@looney ~]# /etc/init.d/oracleasm createdisk 'vendor201' /dev/sde1
Marking disk "vendor201" as an ASM disk: [ OK ]
[root@looney ~]# /etc/init.d/oracleasm createdisk 'vendor202' /dev/sdf1
Marking disk "vendor202" as an ASM disk: [ OK ]
[root@looney ~]# /etc/init.d/oracleasm createdisk 'vendor203' /dev/sdg1
Marking disk "vendor203" as an ASM disk: [ OK ]

Perform the ASM online migration.
Query ASM disks:
SYS@+ASM> @asmdisk

   INST_ID PATH                 DISK_NAME       GROUP_NUMBER GROUP_NAME      STATE      TYPE         TOTAL_GB    FREE_GB
---------- -------------------- --------------- ------------ --------------- ---------- ---------- ---------- ----------
         1 ORCL:VENDOR101       VENDOR101                  1 DATA1           MOUNTED    EXTERN          14.98      13.27
         1 ORCL:VENDOR102       VENDOR102                  1 DATA1           MOUNTED    EXTERN          14.98      13.27
         1 ORCL:VENDOR103       VENDOR103                  1 DATA1           MOUNTED    EXTERN          14.98      13.27
         1 ORCL:VENDOR201
         1 ORCL:VENDOR202
         1 ORCL:VENDOR203

6 rows selected.
Add new disks / Drop old disks from ASM diskgroup:
The ASM power limit ranges from 1 to 11.
In this demonstration, power limit 1 was used so that progress could be queried from v$asm_operation.
SYS@+ASM> alter diskgroup DATA1
  2  add disk 'ORCL:VENDOR201','ORCL:VENDOR202','ORCL:VENDOR203'
  3  drop disk 'VENDOR101','VENDOR102','VENDOR103'
  4  rebalance power 1;

Diskgroup altered.

SYS@+ASM> @asmop

GROUP_NUMBER OPERA STATE           POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES ERROR_CODE
------------ ----- ---------- ---------- ---------- ---------- ---------- ---------- ----------- --------------------------------------------
           1 REBAL RUN                 1          1         58       1756         60          28

SYS@+ASM> r
  1* select * from v$asm_operation

GROUP_NUMBER OPERA STATE           POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES ERROR_CODE
------------ ----- ---------- ---------- ---------- ---------- ---------- ---------- ----------- --------------------------------------------
           1 REBAL RUN                 1          1        178       1758        677           2

SYS@+ASM> r
  1* select * from v$asm_operation

GROUP_NUMBER OPERA STATE           POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES ERROR_CODE
------------ ----- ---------- ---------- ---------- ---------- ---------- ---------- ----------- --------------------------------------------
           1 REBAL RUN                 1          1        298       1759        697           2

SYS@+ASM> r
  1* select * from v$asm_operation

no rows selected
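If the rebalance needs to finish sooner, the power can be raised while the operation is still running; a quick sketch:

SYS@+ASM> alter diskgroup DATA1 rebalance power 11;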
SYS@+ASM> @asmdisk

   INST_ID PATH                 DISK_NAME       GROUP_NUMBER GROUP_NAME      STATE      TYPE         TOTAL_GB    FREE_GB
---------- -------------------- --------------- ------------ --------------- ---------- ---------- ---------- ----------
         1 ORCL:VENDOR101
         1 ORCL:VENDOR102
         1 ORCL:VENDOR103
         1 ORCL:VENDOR201       VENDOR201                  1 DATA1           MOUNTED    EXTERN          14.98      13.27
         1 ORCL:VENDOR202       VENDOR202                  1 DATA1           MOUNTED    EXTERN          14.98      13.27
         1 ORCL:VENDOR203       VENDOR203                  1 DATA1           MOUNTED    EXTERN          14.98      13.27

6 rows selected.

Delete the ASM disks for /dev/sdb1, /dev/sdc1 and /dev/sdd1:
[root@looney ~]# /etc/init.d/oracleasm deletedisk /dev/sdb1
Removing ASM disk "/dev/sdb1": [ OK ]
[root@looney ~]# /etc/init.d/oracleasm deletedisk /dev/sdc1
Removing ASM disk "/dev/sdc1": [ OK ]
[root@looney ~]# /etc/init.d/oracleasm deletedisk /dev/sdd1
Removing ASM disk "/dev/sdd1": [ OK ]
[root@looney ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@looney ~]# /etc/init.d/oracleasm listdisks
VENDOR201
VENDOR202
VENDOR203

SYS@+ASM> @asmdisk

   INST_ID PATH                 DISK_NAME       GROUP_NUMBER GROUP_NAME      STATE      TYPE         TOTAL_GB    FREE_GB
---------- -------------------- --------------- ------------ --------------- ---------- ---------- ---------- ----------
         1 ORCL:VENDOR201       VENDOR201                  1 DATA1           MOUNTED    EXTERN          14.98      13.27
         1 ORCL:VENDOR202       VENDOR202                  1 DATA1           MOUNTED    EXTERN          14.98      13.27
         1 ORCL:VENDOR203       VENDOR203                  1 DATA1           MOUNTED    EXTERN          14.98      13.27
Would you use ASM for a single-instance database? Why or why not?
Now, 12c has a report for this: DBMS_STATS.REPORT_COL_USAGE.
UPDATE: It seems that the DBMS_STATS.REPORT_COL_USAGE function has been available in some 11.2.0.x patchset release. Although it doesn't appear in the 11.2 documentation set that I had downloaded, it is now visible in the online documentation set updated to 11.2.0.4.
First, I run some candidate queries:
SQL> show user
USER is "HEMANT"
SQL> select owner, count(*) from obj_list_2
2 where owner like 'SYS%'
3 group by owner;
SQL> select owner, count(*) from obj_list_2
2 where owner like 'HEM%'
3 group by owner;
Then, I check for COL_USAGE:
SQL> variable mycolusagerept clob;
SQL> set long 10000000
SQL> begin
  2    :mycolusagerept := dbms_stats.report_col_usage('HEMANT','OBJ_LIST_2');
  3  end;
  4  /
PL/SQL procedure successfully completed.
SQL> print mycolusagerept;
EQ : Used in single table EQuality predicate
RANGE : Used in single table RANGE predicate
LIKE : Used in single table LIKE predicate
NULL : Used in single table is (not) NULL predicate
EQ_JOIN : Used in EQuality JOIN predicate
NONEQ_JOIN : Used in NON EQuality JOIN predicate
FILTER : Used in single table FILTER predicate
JOIN : Used in JOIN predicate
GROUP_BY : Used in GROUP BY expression
COLUMN USAGE REPORT FOR HEMANT.OBJ_LIST_2
1. OWNER : LIKE
The report indicates that the OWNER column has been used as a LIKE predicate.
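To clear the recorded usage and start monitoring afresh, DBMS_STATS also provides RESET_COL_USAGE; a quick sketch against the same table:

SQL> exec dbms_stats.reset_col_usage('HEMANT','OBJ_LIST_2')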
Managing & Troubleshooting Cluster - 360 degrees. Syed Jaffer Hussain
Indexes Usage in database : Tips and Tricks. Phani Arega
Oracle 12c Clusterware upgrade - Best Practices. Syed Jaffer Hussain
DBA to Data Scientist with Oracle Big Data. Satendra Kumar
Deploy Oracle RAC 12c in minutes. P S Janakiram
Big Data for Oracle Professionals. Arup Nanda
It’s coming up to the time when I have to think about which presentations to go to at UKOUG Tech13 – always difficult to decide whether to see topics I’m familiar with to find out how much I didn’t know, or whether to see topics which I don’t know to get some sort of intelligent briefing. Here’s my starting thought:
12:30 Me, on compression (index, basic and OLTP – not HCC)
13:40 Tony Hasler: “Why does the optimizer sometimes get the plan wrong”
15:00 Kyle Hailey: “Oracle transaction locks and analysis”
16:00 Neil Chandler: “10046 trace – powerful, or pointless in the real world”
9:00 Me (at Oak Table World), “Calculating Selectivity”
10:15 Jason Arneil: “Exadata Consolidations and Migrations”
11:30 Tom Kyte: “Optimizer boot camp – 10 optimizer tips you can’t miss”.
12:40 Kyle Hailey: “Virtualizing data for databases”
14:10 Chris Dunscombe: “Keeping the auditors happy – is it really possible”
15:25 David Kurtz: “Partition, archive, compress, purge – Keep your ERP on the road”
16:35 Christo Kutrovsky: “Maximize data warehouse performance with parallel queries”
9:00 Joze Senegacnik: “Query Transformation Revisited”
10:15 Larry Carpentar: “Active Data Guard – next generation data protection”
11:20 Dev Nayak et al. OEM Round table
12:55 Me: “Beginners’ Guide to CBO” — NOTE this is for Beginners!
13:55 Tony Hasler et al. Optimizer Round table (starts 13:40)
15:40 John King: “12 New features for Developers”
17:45 Neil Chandler: “Why did my plan change” (Engineered systems stream)
8:30 – there are two masterclasses worth attending, but they overlap my presentation.
8:30 Lisa Dobson “Rman Duplicate – when clones attack”
9:25 Bertrand Drouvot “Examples of R usage with Oracle”
10:05 Me “The Evolution of Histograms” (AKA the Histogram Revolution in 12c)
12:25 Me, with Tony Hasler: “Does Oracle ignore hints”
13:55 Uwe Hesse: “Materialized Views and Partition Change Tracking”
Make sure to download the sample application to test this use case - DynamicMandatoryAttrApp.zip. The test is fairly simple: there is a special column displaying true/false values, indicating whether the CommissionPct attribute is required in the current row. The CommissionPct attribute is set to be mandatory if the Salary attribute value is greater than 10000:
You should see that for Employee ID = 108, the CommissionPct attribute is mandatory. Try setting the Salary attribute value for this employee to less than 10000; the CommissionPct attribute becomes non-mandatory instantly:
Let's now check how this functionality is implemented in ADF BC. There is a custom Entity Attribute Hints class registered in the EO implementation class:
Through this custom class we can intercept all ADF BC calls that retrieve hints for a given attribute. The mandatory property is one of those hints, and this is how we can return true or false conditionally.
If you are running an ADF version from 11g R2 or 12c, there is an internal issue: you must set the attribute index explicitly through the getAttributeHints(name) method. Otherwise, the custom class for Entity Attribute Hints will be ignored:
In the custom class for Entity Attribute Hints, we can override the standard method getHint(locale, name). In this method we can evaluate the mandatory check conditionally; in this example the CommissionPct attribute is marked mandatory only if the Salary value is greater than 10000:
Initially, the CommissionPct attribute is set to be non-mandatory:
There are several changes required at the ADF UI level to make this work correctly. I would recommend setting contentDelivery=immediate and immediate=true for the ADF table component; this reduces the number of validation error popups:
When the Salary attribute changes, we need to refresh the dependent CommissionPct attribute; however, a regular PartialTriggers dependency would not work here, as it would not bypass the validation error. We need to create a binding reference and call partial target refresh programmatically. Read more here - Conditionally Required Fields in Oracle ADF Faces Rich Client. A similar binding is created for the Panel Collection:
The Salary attribute is set with AutoSubmit=true and a Value Change Listener (where we refresh the table component):
The refresh is done from the table component:
Make sure to change the ADF UI mandatory expression to use row.bindings.CommissionPct.hints.mandatory. Otherwise, with the default expression JDeveloper generates for an ADF table column attribute, the ADF UI fails to trigger the getHint method in ADF BC for the mandatory check:
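For illustration, the table column would reference that expression roughly like this (a hedged sketch; the component IDs and other attributes are assumptions, only the required expression comes from this post):

<af:column headerText="CommissionPct" id="c1">
  <af:inputText value="#{row.bindings.CommissionPct.inputValue}"
                required="#{row.bindings.CommissionPct.hints.mandatory}"
                id="it1"/>
</af:column>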
Allow me to offer you the description in full size, easy to read text (bolding by Yours Truly):
My Cloud by Western Digital
3TB Personal Cloud Storage
* Keep Your Content Safe at Home
* Get Abundant Storage and Blazing-Fast Performance Without Paying Monthly Fees
* Store, Organize and Back Up Your Photos, Videos, Music and Important Documents All in One Place
Makes sense to me. Western Digital is watching "the Cloud" (Internet-based storage and computing resources) take away its business ("I hate you Dropbox").
Rather than fight the buzz about Cloud Computing, WD "joins" Cloud Computing with a "Personal Cloud."
Also known as: your own hard drive.
In yesterday’s post on running OBIEE in the cloud, I looked at a number of options for hosting the actual OBIEE element; hosting it in a public cloud service such as Amazon EC2, using Oracle’s upcoming BI-as-a-Service offering, or partner offerings such as our upcoming ExtremeBI in the Cloud service. But the more you think about this sort of thing, the more you realise that the OBIEE element is actually the easy part – it’s what you do about data storage, security, LDAP directories and ETL that makes things more complicated.
Take the example I gave yesterday where OBIEE was run in the cloud, with the multi-tenancy option enabled, the main data warehouse in the cloud, and data sourced from cloud and on-premise sources.
In this type of setup, there’s a number of things you need to consider beyond how OBIEE is hosted. For example:
- If your corporate LDAP directory is on-premise, how do we link OBIEE to it? Or does the LDAP server also need to be in the cloud?
- What sort of database do we use if we’re hosting it in the cloud? Oracle? If so, self-hosted in a public cloud, or through one of the Oracle DB-in-the-cloud offerings?
- If not Oracle database, what other options are available?
- And how do we ETL data into the cloud-based data warehouse? Do we continue to use a tool like ODI, or use a cloud-based option – or even a service such as Amazon’s AWS Data Pipeline?
What complicates things at this stage in the development of “cloud”, is that most companies won’t move 100% to cloud in one go; more likely, individual application and systems might migrate to the cloud, but for a long time we’ll be left with a “hybrid” architecture where some infrastructure stays on premise, some might sit in a public cloud, others might be hosted on third-party private clouds. So again, what are the options?
Well Oracle’s upcoming BI-as-a-service offering works at one extreme end of the spectrum; the only data source it’ll initially work with is Oracle’s own database-as-a-service, which in its initial incarnation provides a single schema, with no SQL*Net access and with data instead uploaded via a web interface (this may well change when Oracle launch their database instance-as-a-service later in 2014). No SQL*Net access means no ETL tool access though, in practice, as they all use SQL*Net or ODBC to connect to the database, so this offer to my mind is aimed at either (a) small BI applications where it’s practical to upload the data via Excel files etc, or (b) wider Oracle Cloud-based systems that might use database-as-a-service to hold their data, Java-as-a-service for the application and so forth. What this service does promise though is new capabilities within OBIEE where users can upload their own data, again via spreadsheets, to the cloud OBIEE system, and have that “mashed-up” with the existing corporate data – the aim being to avoid data being downloaded into Excel to do this type of work, and with user metrics clearly marked in the catalog so they’re distinct from the corporate ones.
But assuming you’re not going for the Oracle cloud offer, what are the other options over data? Well hosting OBIEE in the cloud is conceptually no different from hosting anywhere else, in that it can connect to various data sources via the various connection methods, so in principle you’ve got just the same options open to you if running on premise. But the driver for moving OBIEE into the cloud might be that your applications, data etc are already in the cloud, and you might also be looking to take advantage of cloud features in your database such as dynamic provisioning and scaling, or indeed use one of the new cloud-native databases such as Amazon Redshift.
I covered alternative databases for use with OBIEE a few months ago in a blog post, and Amazon Redshift at the time looked like an interesting option; based on ParAccel, a mature column-store analytic database, and tightly integrated with Amazon’s other offerings, it’s something a few customers have asked us about. And these databases are certainly interesting – in practice, not all that different in pricing to Oracle database as a source, but with some interesting analytic features – yet they all suffer from the common issue that they’re not officially supported as data sources. Amazon Redshift, for example, uses Postgres-derived ODBC drivers to connect to it, but Postgres itself isn’t officially supported as a source, which means you could well get sub-optimal queries and you certainly won’t get specific support from Oracle for that source. But if it works for you, then this could be an option, along with more left-field data sources such as Hadoop.
But to my mind, it’s the ETL element that’s the most interesting, and most challenging, part of the equation. Going back to Openworld, Oracle made a few mentions of ETL in their general Cloud Analytics talks, including talk about an upcoming data source adapter for the BI Apps that’ll enable loading from Fusion Apps in the cloud, like this:
There were also a number of other deployment options discussed, including hybrid architectures where some sources were in the cloud, some on-premise, but all involved running the ETL elements of the BI Apps – ODI or Informatica – on-premise, the same way as installs today. And to my mind, this is where the Oracle cloud offering is the weakest, around cloud-based and cloud-native ETL and data integration – the only real option at the moment is to run ODI agents in the cloud and connect them back to an on-premise ODI install, or move the whole thing into the cloud in what could be quite a heavyweight data integration architecture.
Other vendors are, in my opinion, quite a way further forward with their cloud data integration tools strategy than Oracle, who instead seem to be still focused on on-premise (mainly), database-to-database (mostly) ETL. To take two examples; Informatica have an Informatica Cloud service which appears to be a platform-as-a-service play, with customers presumably signing-up for the service, designing their ETL flows and then paying for what they use, with a focus on cloud APIs as well as database connectivity, and full services around data quality, MDM and so forth.
Another vendor in this space is SnapLogic, a pure-play cloud ETL vendor selling a component-based product with a big focus on cloud, application and big data sources. What’s interesting about this and other similar vendors’ approaches, though, is that they appear to be “cloud-first” – written for the cloud, sold as a service, as much focused on APIs as database connectivity – a contrast to Oracle’s current data integration tools strategy which to my mind still assumes an on-premise architecture. What’s more concerning is the lack of any announcement around ETL-in-the-cloud at the last Openworld – if you look at the platform-as-a-service products announced at the event, whilst database, BI, documents, BPM and so forth-as-a-service were announced, there was no mention of data integration:
What I’d like to see added to this platform in terms of data integration would be something like:
- On-demand data integration, sold as a service, available as a package along with database, Java and BI
- Support for Oracle and non-Oracle application APIs – for example Salesforce.com, Workday and SAP – see for example what SnapLogic support in this area.
- No need for an install – it’s already installed and it’s a shared platform, as they’re doing with OBIEE
- Good support for big data, unstructured and social data sources
I think it’s pretty likely this will happen – whilst products such as the BI Apps can have their ETL in the cloud, via ODI in the BI Apps 11g version for example, these are inherently single-tenant, and I’d fully expect Oracle to offer BI Apps-as-a-service at some point, with a corresponding data integration element designed from the ground up to work cloud-native and integrate with the rest of Oracle’s platform-as-a-service offering.
So there we have it – some thoughts on the database and ETL elements in an OBIEE-in-the-cloud offering. Keep an eye on the blog over the next few months as I build out a few examples, and I’ll be presenting on the topic at the upcoming BIWA 2014 event in San Francisco in January – watch this space as they say.
Douglas Belkin wrote an article yesterday in the Wall Street Journal based on a study from Moody’s Investors Service. The lede of the article is that “nearly half of the nation’s colleges and universities are no longer generating enough tuition revenue to keep pace with inflation”, which comes from Moody’s interest in institutional financial stability, but I think there are other lessons available. While the revolution in college pricing effects is quiet, it is profound.
It is worth noting that the big changes are based on FY2013 and 2014, which includes survey-based estimates and projections rather than hard data. The article is behind a paywall, but here are some relevant excerpts (read the whole article if you can).
Nearly half of the nation’s colleges and universities are no longer generating enough tuition revenue to keep pace with inflation, highlighting the acceleration of a downward spiral that began as the recession ended, according to a new survey by Moody’s Investors Service.
Technically speaking, the US recession ended in June 2009, so the argument that the downward spiral began at that point does not match the data. The downward spiral started in the 2012-13 school year, fully three years later. There are other correlations that might be more relevant, such as student loans and defaults, acceptance that job prospects are not rebounding, etc.
The survey of nearly 300 schools reflects a cycle of disinvestment and falling enrollment that places a growing number at risk. While schools for two decades were seeing rising enrollments and routine increases of 5% to 8% in net tuition, many now are facing grimmer prospects: a shrinking pool of high-school graduates, depressed family incomes and precarious job prospects after college.
These are all good points, but I would extend this argument to say that families are now becoming value shoppers for college certificates and degrees. While there is still strong sentiment that the investment in college will pay off over time, families and students want to minimize the investment amount (and risk).
The softening demand for four-year degrees is prompting schools to rein in tuition increases while increasing scholarships. Those moves are cutting into net tuition revenue—the amount of money a university collects from tuition, minus school aid.
For 44% of public and 42% of private universities included in the survey, net tuition revenue is projected to grow less than the nation’s roughly 2% inflation rate this fiscal year, which for most schools ends in June. Net tuition revenue will fall for 28% of public and 19% of private schools.
What you can see from the data is not stasis followed by an accelerating drop-off; what can be seen is a reversal from tuition revenue increases far above to far below inflation rates. 5% to 8% in net tuition revenue was not sustainable, and it is likely one of the major causes of the dramatic changes over the past few years.
As Herbert Stein taught us, Trends that can’t continue, won’t.
Keep in mind that we’re talking net tuition revenue, which subtracts out school aid but includes federal financial aid. The change in school revenue cannot be explained by a drop in federal financial aid, however.
This drop in net tuition revenue cannot be explained by a drop in state spending on public institutions – that is another category.
“We don’t know where the bottom is; if we knew, we could structure appropriately,” said [U Louisiana President] Mr. Bruno, with regard to the budget cuts. The result: “We have to look at a different business model; we can’t just depend on our region anymore.”
The context of this comment is higher tuition for out-of-state students, but that is a losing strategy in my opinion. He does have a point with “we have to look at a different business model”.
Schools with the strongest brands are less vulnerable to these trends. For instance, as the international market consolidates, flagship state schools with strong reputations already established in foreign countries stand to benefit from their alumni networks. Midtier schools lacking a presence overseas will find it harder to break into new overseas markets.
This point is key, as it backs up two important observations on higher ed we have seen recently.
- The schools most at risk are those without strong name recognition. While that might be unfair, it seems to be a fact of life.
- While Clayton Christensen has taken some heat for the projection that “in 15 years from now half of US universities may be in bankruptcy”, the data from this survey lends some credibility to the concept.
As stated before, keep in mind that this is survey-based data, but it does provide insight into some important issues faced by US colleges and universities.
Just a quick note to say I’ve had to temporarily lock my forums. As happens every so often, the spammers are hitting it pretty hard at the moment. I’m a bit sick of spending every waking minute deleting hundreds of spam posts, so I’m giving myself a break for a while. During previous episodes, after a couple of days of no joy it’s died down, so hopefully it won’t last too long this time either…
Update: It’s late and I’m tired. I just remembered I can prevent new registrations, so I’ve unlocked the forums, but stopped all new registrations. That way, existing members can carry on as normal…
What do swaying palms, turquoise water, white sandy beaches and absolutely pristine fauna remind you of? Correct! It’s Log Buffer. This Log Buffer brings you beads of blog posts related to data dexterity crafted by leading bloggers across the planet.
When are Exadata’s storage indexes used?
Oracle 12c has increased the maximum length of character-based columns to 32K bytes.
Oracle has extended the maximum length of varchar2, nvarchar2 and raw columns to 32K, but this comes with some challenges when it comes to indexing such columns.
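As a quick illustration of the feature (assuming a 12c instance already converted to extended string sizes with MAX_STRING_SIZE=EXTENDED):

SQL> create table t32k (c varchar2(32767));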
Martin has been applying a recent PSU in the lab environment.
It is easier to create one or two AWR reports quickly using OEM. But what if you have to create AWR reports for several snapshots?
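One scripted alternative is to call DBMS_WORKLOAD_REPOSITORY directly and loop over snapshot pairs; a hedged sketch where the dbid, instance number and snapshot IDs are placeholders:

SQL> select output
  2  from table(dbms_workload_repository.awr_report_html(1234567890, 1, 100, 101));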
A demonstration of Power BI for Office 365 shows you how all the various tools and technologies work together.
Executing powershell script in a SQL Agent job – Host errors
Optimizing SQL Server Performance: Changing Your Settings
What Exactly Is This Sysadmin You Speak Of?
Automated Permissions Auditing With Powershell and T-SQL.
Here is a commentary on MySQL's slow query collection sources.
Integrating pt-online-schema-change with a Scripted Deployment
How to add VIPs to Percona XtraDB Cluster or MHA with Pacemaker.
The binary and source versions of MySQL Cluster 7.3.3 have now been made available.
Since MariaDB aims to be a compatible/drop-in replacement to MySQL, it’s crucial that in 10.0 it supports all the 5.6 options/system variables.