As youngsters joyfully run towards a waiting Santa, database enthusiasts devour the festive blog posts about their favorite database topics. That is what this edition of Log Buffer picks up and presents to you.
12c Adaptive Optimization is a sweet song to the ears of performance fans.
Bobak is referencing LDAP for JDBC thin client connections.
Tim Hall is getting ready for an Oracle license audit.
Be aware of these environment variables in .bashrc et al, Martin warns.
Oracle SQL Developer v4 is Live & Top 10 Reasons to Upgrade Today!
What does a DBA want for Christmas?
SQL Server 101: What Features, Commands and Datatypes Should be Generally Avoided.
SQL Admin Sample BACKUP/RESTORE Script.
Finding impersonation info in SQL Server.
There’s a scene in Pulp Fiction where Vincent opens a briefcase to look inside. What does he see?
PHP Memcache access to MySQL 5.7, faster? Redis?
Let’s say that you want to measure something in your database, and for that you need several operations to happen in parallel.
One more InnoDB gap lock to avoid.
Installing MySQL 5.7 DMR3 with the official yum repos.
FromDual.en: MySQL Environment MyEnv 1.0.2 has been released.
The admin interface has had quite a big redesign. I think it looks neater, but I’m sure it will take a bit of getting used to. The nice thing is it’s mobile aware now. If I run it on my Nexus 7 in landscape I get something similar to the browser view. If I switch to portrait it rearranges the screen to make it fit better. Neat.
The auto-updater (manually initiated) worked fine on 5 blogs, so no worries there.
WordPress 3.8 Released.
So there I was, working on a project to duplicate a database from a volume copy at the storage level, with the database shut down.
Sounds pretty simple, right? Wrong! Put the database on ASM and it becomes complicated and convoluted.
Volume1 has an ASM disk group laid out as +DG1/db01/datafile, +DG1/db01/onlinelog and +DG1/db01/tempfile.
Volume1 will be copied to Volume2 at the storage level. What happens to the ASM disk group?
The ASM disk group will need to be renamed using renamedg. Great! (with a little sarcasm)
So now the ASM paths will be +DG2/db01/datafile, +DG2/db01/onlinelog and +DG2/db01/tempfile.
What’s wrong with that picture?
Given that I am an ASM noob, I did reach out to others, and I am still waiting for a response I can be confident in.
Now, if the database was not on ASM, I would be done with the project already.
One hour to create a test case without using ASM; eight hours to create an action plan using ASM, and it is still not complete.

Create and Edit Control File SQL
SQL> alter database backup controlfile to trace as '/tmp/cf.sql';

Parameter File Copied from Source (DB01)
[oracle@arrow:db01]/u01/app/oracle/product/11.2.0.3/dbhome_1/dbs
$ cat initdb02.ora
*.audit_file_dest='/u01/app/oracle/admin/adump'
*.audit_trail='none'
*.compatible='11.2.0.3.0'
*.db_block_size=8192
*.db_create_file_dest='/oradata'
*.control_files='/oradata/DB02/controlfile/o1_mf_9bb5brjc_.ctl','/oradata/fra/DB02/controlfile/o1_mf_9bb5brxx_.ctl'
*.db_domain=''
*.db_name='db02'
*.db_recovery_file_dest='/oradata/fra'
*.db_recovery_file_dest_size=4g
*.diagnostic_dest='/u01/app/oracle'
*.event='10795 trace name context forever, level 2'
*.fast_start_mttr_target=300
*.java_pool_size=0
*.local_listener='(ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1531))'
*.pga_aggregate_target=268435456
*.processes=100
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=805306368
*.undo_tablespace='UNDOTBS'
[oracle@arrow:db01]/u01/app/oracle/product/11.2.0.3/dbhome_1/dbs

Shutdown Database on Source (DB01) and Perform Copy
$ cd /oradata/
[oracle@arrow:db01]/oradata $ ls *
DB01:
controlfile  datafile  onlinelog

fra:
DB01
[oracle@arrow:db01]/oradata $ cp -rp DB01/ DB02/
[oracle@arrow:db01]/oradata $ cd fra/
[oracle@arrow:db01]/oradata/fra $ cp -rp DB01/ DB02/

Clone Database from Source (DB01)
[oracle@arrow:db01]/oradata/fra $ db02
The Oracle base remains unchanged with value /u01/app/oracle
IPC Resources for ORACLE_SID "db02" :
Shared Memory
ID          KEY
No shared memory segments used
Semaphores:
ID          KEY
No semaphore resources used
Oracle Instance not alive for sid "db02"
[oracle@arrow:db02]/oradata/fra $ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Thu Dec 12 17:04:23 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> set echo on
SQL> @/tmp/cf.sql
SQL> STARTUP NOMOUNT
ORACLE instance started.

Total System Global Area  801701888 bytes
Fixed Size                  2232640 bytes
Variable Size             222301888 bytes
Database Buffers          570425344 bytes
Redo Buffers                6742016 bytes
SQL> CREATE CONTROLFILE REUSE SET DATABASE "DB02" RESETLOGS ARCHIVELOG
  2      MAXLOGFILES 16
  3      MAXLOGMEMBERS 2
  4      MAXDATAFILES 30
  5      MAXINSTANCES 1
  6      MAXLOGHISTORY 292
  7  LOGFILE
  8    GROUP 1 (
  9      '/oradata/DB02/onlinelog/o1_mf_1_9bb5bt5g_.log',
 10      '/oradata/fra/DB02/onlinelog/o1_mf_1_9bb5btk6_.log'
 11    ) SIZE 100M BLOCKSIZE 512,
 12    GROUP 2 (
 13      '/oradata/DB02/onlinelog/o1_mf_2_9bb5btq3_.log',
 14      '/oradata/fra/DB02/onlinelog/o1_mf_2_9bb5bv4g_.log'
 15    ) SIZE 100M BLOCKSIZE 512,
 16    GROUP 3 (
 17      '/oradata/DB02/onlinelog/o1_mf_3_9bb5bvc1_.log',
 18      '/oradata/fra/DB02/onlinelog/o1_mf_3_9bb5cny5_.log'
 19    ) SIZE 100M BLOCKSIZE 512
 20  -- STANDBY LOGFILE
 21  DATAFILE
 22    '/oradata/DB02/datafile/o1_mf_system_9bb5cx2c_.dbf',
 23    '/oradata/DB02/datafile/o1_mf_sysaux_9bb5ff55_.dbf',
 24    '/oradata/DB02/datafile/o1_mf_undotbs_9bb5gsjs_.dbf',
 25    '/oradata/DB02/datafile/o1_mf_users_9bb5j5wx_.dbf'
 26  CHARACTER SET AL32UTF8
 27  ;

Control file created.

SQL> ALTER DATABASE OPEN RESETLOGS;

Database altered.

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/oradata/DB02/datafile/o1_mf_temp_9bb5j4rf_.tmp'
  2  SIZE 268435456 REUSE AUTOEXTEND ON NEXT 268435456 MAXSIZE 8193M;

Tablespace altered.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option

Check DBID for Cloned DB (Same as Source)
[oracle@arrow:db01]/oradata $ rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Thu Dec 12 17:34:06 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DB01 (DBID=1464916248)

RMAN>

[oracle@arrow:db02]/oradata/fra $ rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Thu Dec 12 17:05:44 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DB02 (DBID=1464916248)

RMAN>

Change DBID for Cloned DB
[oracle@arrow:db02]/oradata/fra $ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Thu Dec 12 17:05:54 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area  801701888 bytes
Fixed Size                  2232640 bytes
Variable Size             222301888 bytes
Database Buffers          570425344 bytes
Redo Buffers                6742016 bytes
Database mounted.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning option
[oracle@arrow:db02]/oradata/fra $ nid target=sys dbname=DB02 logfile=/tmp/nid.log
Password:
[oracle@arrow:db02]/oradata/fra $ rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Thu Dec 12 17:08:14 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DB02 (DBID=1464916248, not open)

RMAN> exit

Recovery Manager complete.
[oracle@arrow:db02]/oradata/fra $ cat /tmp/nid.log

DBNEWID: Release 11.2.0.3.0 - Production on Thu Dec 12 17:08:06 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to database DB02 (DBID=1464916248)

Connected to server version 11.2.0

Control Files in database:
    /oradata/DB02/controlfile/o1_mf_9bb5brjc_.ctl
    /oradata/fra/DB02/controlfile/o1_mf_9bb5brxx_.ctl

NID-00144: New name for database DB02 is the same as current name DB02

Change of database name and ID failed during validation - database is intact.
DBNEWID - Completed with validation errors.

Change DBID Omitting DBNAME (Already Changed at Controlfile Creation)
[oracle@arrow:db02]/oradata/fra $ nid target=sys logfile=/tmp/nid.log
Password:
[oracle@arrow:db02]/oradata/fra $ cat /tmp/nid.log

DBNEWID: Release 11.2.0.3.0 - Production on Thu Dec 12 17:10:58 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to database DB02 (DBID=1464916248)

Connected to server version 11.2.0

Control Files in database:
    /oradata/DB02/controlfile/o1_mf_9bb5brjc_.ctl
    /oradata/fra/DB02/controlfile/o1_mf_9bb5brxx_.ctl

Changing database ID from 1464916248 to 1581437733
    Control File /oradata/DB02/controlfile/o1_mf_9bb5brjc_.ctl - modified
    Control File /oradata/fra/DB02/controlfile/o1_mf_9bb5brxx_.ctl - modified
    Datafile /oradata/DB02/datafile/o1_mf_system_9bb5cx2c_.db - dbid changed
    Datafile /oradata/DB02/datafile/o1_mf_sysaux_9bb5ff55_.db - dbid changed
    Datafile /oradata/DB02/datafile/o1_mf_undotbs_9bb5gsjs_.db - dbid changed
    Datafile /oradata/DB02/datafile/o1_mf_users_9bb5j5wx_.db - dbid changed
    Datafile /oradata/DB02/datafile/o1_mf_temp_9bb5j4rf_.tm - dbid changed
    Control File /oradata/DB02/controlfile/o1_mf_9bb5brjc_.ctl - dbid changed
    Control File /oradata/fra/DB02/controlfile/o1_mf_9bb5brxx_.ctl - dbid changed
    Instance shut down

Database ID for database DB02 changed to 1581437733.
All previous backups and archived redo logs for this database are unusable.
Database is not aware of previous backups and archived logs in Recovery Area.
Database has been shutdown, open database with RESETLOGS option.
Succesfully changed database ID.
DBNEWID - Completed succesfully.

Check DBID for Cloned DB
[oracle@arrow:db02]/oradata/fra $ rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Thu Dec 12 17:11:16 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup mount;

Oracle instance started
database mounted

Total System Global Area  801701888 bytes
Fixed Size                  2232640 bytes
Variable Size             222301888 bytes
Database Buffers          570425344 bytes
Redo Buffers                6742016 bytes

RMAN> list incarnation;

using target database control file instead of recovery catalog

List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- ------- ---------- --------------------
1       1       DB02     1581437733       CURRENT 360639     12-DEC-2013 17:04:50

RMAN> alter database open;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of alter db command at 12/12/2013 17:12:14
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open

Open Database Resetlogs
RMAN> alter database open resetlogs;

database opened

RMAN> exit

Recovery Manager complete.
[oracle@arrow:db02]/oradata/fra $
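As a final check, it is worth confirming from SQL*Plus that the clone now reports its own identity; a minimal query along these lines (not captured in the session log above) should do:

-- Run as SYSDBA against the cloned instance (db02).
-- After the nid run above we expect DBID = 1581437733 and NAME = DB02.
SELECT dbid, name, open_mode, created
FROM   v$database;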
Jonathan Lewis at his Oracle Scratchpad blog discusses Rowids. As always, valuable information and insight.
At Venzi's Tech-Blog a rundown of Oracle 12c background processes.
Forbes has good things to say about DBaaS: Why Database As A Service (DBaaS) Will Be The Breakaway Technology of 2014.
Some good material here on GoldenGate and data integration. A bit markety and salesish at the surface, but you can drill down to some good resources: Oracle Information InDepth Data Integration and Master Data Management Edition.
The Oracle Enterprise Manager blog brings us the Oracle Enterprise Manager Partner Plug-in News.
Do you want to Review Demantra Patches Released in Real Time? The Oracle Demantra blog tells you how.
From the Identity Management blog: The Technology Stack of Mobile Device Enablement - Simieo Solutions.
I don't think I've ever posted any links to The Groundside Blog by Duncan Mills. Looks like a good place. This article connects to a previous one, both on Click History - Access from Java.
SearchSOA sums up their views on Oracle products: How Fusion Middleware measures up for SOA integration.
A link to links from Proactive Support - Java Development using Oracle Tools: Top 10 solution documents for JDeveloper/ADF.
From InfoQ: Oracle Invites Community to Weigh-In on Java EE 8.
At ORCLville: Past Discoverer...And Beyond!
EPS changes in Analytics (and P6 Extended Schema), from the Oracle Primavera Analytics Blog.
From the Harvard Business Review comes this list of 10 Charts from 2013 That Changed the Way We Think. A few that are kind of...meh...a few that are genuinely interesting.
And on the negative side of artificial intelligence is this article from io9: Freakishly realistic telemarketing robots are denying they're robots. The good news: AI phonebots are approaching the point of passing the Turing test. The bad: AI phonebots are a menace, and the government's do-not-call list is apparently totally non-functional, at least judging by my own lines, which continue to be bombarded with bogus advertising calls after several years on the national do-not-call registry.
Free courses – the app. There is an app available that connects you with a wide variety of free online courses. I just discovered it recently and can't vouch for the overall results and quality, but there are certainly some interesting-looking items in here: Coursera.
Another interesting development in the realm of access to tools is Scribd. It's been around for many years, but they have added a subscription service that gives you access to thousands of books, along with their huge collection of articles, for a flat $8.99 a month. They are trying to become the Netflix of books, and I think Amazon is going to sit up and take notice pretty quickly. Scribd's collection of books is orders of magnitude smaller than Amazon's, but you can read as much as you like for one price. Maybe Amazon should think about extending its Prime book-borrowing program to allow you to have X number of books out at a time for Y amount of monthly payment. That way really voracious readers can keep a steady flow of volumes loaded up without having to buy them, and Amazon will have a nice steady monthly subscription fee to use on building out its drone fleet.
We are considering various options for the Tuesday night networking event, including the I.M. Pei-designed Rock and Roll Hall of Fame. Is that a good choice? Let us know!
Presentation stuff ... most of the usual sets of tracks/topics:
- Oracle Applications - R12, E-Business Suite, Hyperion Suite, PeopleSoft, HCM, Financials, including technical and functional/configuration topics
- DBA - Installation, upgrades, backup/recover, tuning, network connectivity, modeling
- Developer - PL/SQL, APEX, Java, .NET, ODI, design and application development topics
- Data Warehousing/BI - Hyperion, OBIEE, design philosophies, tools, case studies, migrations, big data
- Other - SOA, project management, security for database/applications/auditing, hardware, system networking performance
In a little-reported event the week of Thanksgiving, Desire2Learn let go 28 employees. The only public report I’m aware of comes from The Record out of Desire2Learn’s hometown of Kitchener, Ontario in Canada.
E-learning company Desire2Learn has cut about 25 workers from its product development department.
Virginia Jamieson, spokesperson for the Kitchener-based company, said nine per cent of the 280-member product development section was let go. That represents about three per cent of the firm’s total workforce. [snip]
Since it was founded in 1999, the company has grown to more than 900 [sic] employees in several countries.
“So it is a small percentage of that, but it happens to be people in Kitchener, which is where our initial growth was, so it may seem bigger than it really is,” Jamieson said.
While this move might be harsh for those employees affected, it does not sound too significant. But is there more to the story?
According to Sources
Michael and I have talked to 10 off-the-record sources and reviewed Twitter, LinkedIn and Glassdoor as we looked into this issue, and the consistent story we heard was that the layoffs may be much more significant, both in number and motivation. After research, we now believe:
- while 25 people in product development were let go, there were 28 people in total affected that week;
- a total of 56 people were let go in the past six months;
- product development is not the only group affected;
- the company now has ~750 employees, not ‘more than 900’; and
- 8 of the 10 sources indicated that the layoffs were related to the company not meeting sales growth targets.
Besides the cuts in product development, it appears that there have been quite a few people (~18) in marketing and several people in business development and project management who were also let go in the past six months.
What has me interested in this story is that Desire2Learn:
- Raised $80 million in August 2012 in the largest ever VC round for a Canadian company;
- Was profitable as of the funding round – the funds were not needed for operations at that company size;
- Has continued to grow in their core market (North American higher education), according to both Campus Computing and Edutechnica; and
- Seems to be growing in K-12, international and corporate markets.
Given this situation, why would Desire2Learn let go more than 7% of its workforce?
According to Desire2Learn
I asked Desire2Learn to comment on this story, and they provided the following response (at the time I had heard that ~100 people had been let go but now believe the number is 56).
Thanks for touching base and sharing the discussion from the field. To start, the sources stating that close to 100 people have been “laid off” over the past four months are simply not accurate. In fact, they are way off. There have been some incremental changes over the course of the year and the restructuring that occurred last week impacted 28 people.
The assumption that this reflects the company’s performance versus the recent investment is also incorrect. We’ve had a great year and the recent changes have nothing to do with the company’s performance – they have been strategic decisions to put the right structure in place to help Desire2Learn’s continued transformation into a global company.
This past spring, we brought in a new product team leader (Nick Oddson, who you met at FUSION) who has tremendous experience in growing global software companies. He restructured our R&D organization last week to align the department around our new markets and strategic directions. Other departments were reevaluated and reorganized earlier in the Fall.
As a result of these changes, Desire2Learn is in a great place for continued growth.
There is no story here other than the fact that D2L has set up a foundation to position itself for the next wave of growth. 2013 was an amazing year and we are looking to even more exciting things ahead in 2014!
I also talked to one of the lead investors, Jon Sakoda of NEA, for his perspective.
We are hiring very rapidly and have grown from ~500 people to ~750 people in less than 18 months. This is a lot of new people, and I think great companies always need to assess their talent and determine how to transition people who can’t be long term performers. [snip]
We added 140 people and churned 56 people since June 1. Forced churn is good and healthy when you are scaling. All of our companies do it – it’s a best practice.
Jon also pointed me to a blog post he had previously written on the subject of companies needing to ‘churn’ employees as they rapidly grow, which is consistent with his comments on Desire2Learn.
In a high growth company one of the hardest tests of leadership and loyalty is determining who can make the ascent and who will lag behind. Paradoxically, the bonds of friendship, camaraderie, and trust that make start-up teams strong in the early part of a company’s life become the hardest obstacles to overcome in making the tough decisions that set up companies to take on the challenges ahead. How can you lead your company through these transitions? Here are some best practices I’ve seen great leaders follow through the years: [snip]
Don’t Make “Churn” a Bad Word – in a scaling company, there is a relentless focus on hiring great talent to fill important roles. But it is equally important to assess overall quality, not just quantity, along the way and to be honest about hiring mistakes that are inevitable in a hyper growth environment. Make employee “churn” a metric that is measured every quarter, and don’t make “churn” a bad word.
What We Know
- 28 people were let go in November, 25 from product development;
- There have been additional rounds of people being let go since June, totaling 56 people;
- The company’s growth in the past year (in North American higher ed) appears to be roughly 6% in relative terms, with market share going from 11.1% in 2012 to 11.8% in 2013;
- The company is investing and most likely growing in K-12, international and corporate markets;
- The company has grown its workforce by 50% in the past 18 months, going from ~500 to ~750; and
- Since the Aug 2012 VC funding, the company has acquired three companies or platforms (Knowillage, Wiggio and Degree Compass) and opened four new offices (Boston, Melbourne, São Paulo, and Newfoundland) to join London and Singapore as their international offices.
What We Don’t Know
- How much has D2L grown in K-12, international, and non-education markets not measured by Campus Computing or Edutechnica;
- How much of the $80 million investment is still available for operations (SEC rules prevent the company or investors from commenting on financial matters); and
- Whether the end-of-January 2013 massive system outages or the problems with its Analytics engine have affected sales or not.
In the end, I have trouble believing that these recent cuts are solely based on improving Desire2Learn’s growth without needing to correct for slower-than-expected growth. The arguments made by our sources that these are significant cuts driven by not hitting growth targets are compelling. However, there is no smoking gun that I have found to back up these claims definitively.
What I do feel confident in saying regarding employee numbers is that there’s more to the story here than just ‘churn’ alongside aggressive hiring. Since the end of the Blackboard patent lawsuit, Desire2Learn has grown at an average of 13 employees per month (140 in Nov 09, 560 in Sep 12, 750 in Nov 13). Yet the numbers might have actually gone down since July 2013:
- At FUSION in July the company indicated they had more than 800 employees; yet
- Today the company has ~750 employees.
Unless the company employed more than 100 summer interns, it appears that the growth in headcount has stopped, if not reversed. I do not know how these public numbers relate to the comment about hiring 140 and letting go 56 since June.
I suspect that the problems with the Analytics engine (described by Michael in this post) are having more of an impact than the fallout from the January system outages. Desire2Learn invested heavily in its analytics and student success system, yet I have not seen any significant customer wins based on these product lines (although Degree Compass is showing some promise). Conversely, I am not aware of any real problems with the Summer 2013 or Fall 2013 start-of-term system performance, so perhaps Desire2Learn has recovered from the January outages.
Furthermore, the changes being made to refocus product development, and even to pull back on some of the product release plans, make sense to me. I think that Desire2Learn has overextended itself and would benefit from focusing more on its core product and making sure that its existing customers are happy.
The picture I get is that the truth is somewhere in the middle and yes, I believe there is more to the story than reported in the news article. I believe that Desire2Learn most likely had a difficult year in terms of failing to meet growth targets. Based on these results the company probably had to restructure and reduce middle management layers and headcount in several groups (mostly in product development and marketing). But at the same time, this is a company with financial resources to continue investing in capital projects and product improvements, and even in additional hiring.
This is a situation to keep watching over time, and we’ll keep you posted as we learn more.
Big data is on almost every company’s radar these days. It’s long past the domain of mining and finance companies, and now executives across all industries are looking for ways to explore the potential it promises. However, many struggle to justify the expense of a big data project that doesn’t offer a clear ROI.
In this short video, Pythian’s CTO, Alex Gorbachev, and I dispel some myths surrounding big data and discuss different ways to calculate ROI for top- and bottom-line returns. Whether you’re looking to improve operations or mine data for insights and then commercialize your findings, this video will help you determine the best approach.
1) The infrastructure or framework behind Cloud Control
2) Cloud Control Management
3) Capacity Planning
4) Exadata/Exalogic Management
5) Configuration Management
6) Provisioning/Patching, Application Management
7) Database Management
8) Fusion Middleware Management
9) Middleware Management
10) Application Quality Management
More details will be shared soon. Thanks, Ingress IT Solutions
We need to override several methods from the ADF BC API to track each View Object instance's activation time. View Object activation is where the actual SQL execution and data fetch happen, so this is where most of the activation time is consumed. Activation happens in the following order:
1. Application Module instance is activated
2. View Object instance executes SQL statement
3. View Object instance fetches records from DB
4. Next View Object instance executes SQL statement
5. Next View Object instance fetches records from DB
As you can see, the Application Module finishes its activation before the View Object instances start theirs. This is why we need to override several ADF BC methods to properly track the activation time for each View Object.
The sample application - stresstest_v4.zip - uses concepts similar to those implemented in our ADF performance audit tool (Major Release for Red Samurai Performance Audit Tool v 2.0). A generic Application Module implementation class overrides the activateState method; we use it to set an activation event identifier (remember, the Application Module is activated before the View Objects). This identifier, stored temporarily in the ADF BC user-data memory scope, later lets us group all View Objects activated during the same activation event:
The activation start time for each View Object instance is recorded in the prepareForActivation method. The end time is logged in the activateCurrentRow method; since this method is invoked after the SQL query execution and data fetch, it is the perfect place to log the end time per individual View Object instance:
Application Module pooling is disabled for the sample application; this makes it possible to test how activation time is logged:
In the sample application UI, the Jobs and Departments View Objects are exposed; both of them should participate in activation events:
Here we can see the log - activation events for Jobs and Departments start right before Application Module activation ends. Jobs View Object activation ends after the Jobs SQL query is executed and its data fetched; activation for this View Object completes in 16 milliseconds:
The Departments View Object is activated next - its SQL query is executed, along with the data fetch, in 47 milliseconds. Keep in mind that the last View Object's activation time will always be the longest, as all View Object instances are prepared for activation before the first one is activated. The activation process is sequential, each View Object instance one by one. The last View Object instance's activation time therefore shows the total activation time for all View Objects:
In this article I will show how to install a SQL Server 2012 clustered instance on a cluster of two nodes. In general, the installation is done in two parts:
- New instance installation in one of the nodes.
- Add the other node to the existing clustered instance.
For a cluster with more than two nodes, we would perform the first step on one of the nodes and repeat the second step on all the other nodes.
What is a clustered instance?
Basically, a clustered instance is a SQL Server instance installed on top of a Windows Failover Cluster (WFC) service. The main purpose of a WFC solution is to protect our systems from hardware failures.
In a two-node cluster scenario, we are talking about two servers with similar hardware configurations, connected by a Failover Cluster service. With one SQL Server instance installed on top of this solution, we can call that instance a clustered instance. The clustered instance is active on only one of the available nodes at a time, which means the other nodes sit idle, with no active functions.
Another important point is that the WFC uses shared storage, which means we need a SAN to store the database files (data and logs). The SQL Server binaries generated by the installation, however, should live on a local disk.
Besides shared storage, we also have the option to store our database files on an SMB file share, which is cheaper but not as good as a SAN-based solution. Starting with SQL Server 2012 we can also keep TempDB on a local disk, which brings lots of benefits.
In other words, the WFC is a high-availability solution, not a load-balancing or disaster-recovery solution. For those needs we can look to an AlwaysOn configuration, available from SQL Server 2012.
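Once an instance is up on the cluster, you can confirm from T-SQL that it really is clustered and see which node currently owns it. A minimal sketch; SERVERPROPERTY and sys.dm_os_cluster_nodes are standard in SQL Server 2012, and the status columns below were added in that release:

-- Is this instance clustered, and which physical node is hosting it right now?
SELECT SERVERPROPERTY('IsClustered')                 AS is_clustered,  -- 1 = clustered instance
       SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS active_node;

-- List all nodes of the underlying Windows Failover Cluster.
SELECT NodeName, status_description, is_current_owner
FROM   sys.dm_os_cluster_nodes;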
I’m assuming that at this point we already have a built cluster solution with two or more nodes. Normally, the DBA receives the environment ready to install the clustered instance. The WFC build is usually made by the System Administrators. However, I’m planning on doing another article explaining how to configure a WFC solution. Stay tuned!
Before we start the installation, we need to assure that we have the following items ready to be used:
- A virtual hostname. In our example we will use "SQL04".
- A virtual IP, a.k.a. vIP. We will use 192.168.123.124.
- Available shared storage. The best practice is to have at least one disk for data files (mdf and ndf), one for log files (ldf) and one for TempDB files. In this guide I will use one disk for everything, to keep it simple, but this is a bad approach!
- Service accounts: one for the SQL Server engine and another for SQL Server Agent (this is the best practice). We will use the following accounts: SSLAB\SVCSQLSRVENG and SSLAB\SVCSQLAGT.
- Notice that the service accounts are domain accounts. We have no other choice: to build a cluster, the servers need to be part of a domain!
In this step-by-step guide, we will use the following environment, based on virtual machines:
- Windows Server 2012 R2 nodes:
- W2012SRV03 – 192.168.123.205
- W2012SRV04 – 192.168.123.206
- Both nodes are part of the following cluster:
- W2012CLT02 – 192.168.123.111
- As this is a lab, I’m using a Synology Diskstation as my SAN. Just for information, the IP is: 192.168.123.103.
- For SQL Server:
- vHostname – SQL04
- vIP – 192.168.123.124
- Version: Microsoft SQL Server 2012 (SP1) – 11.0.3128.0 (X64) - Enterprise Edition
Installation Permissions for the Login Used
To install SQL Server I am using the domain login "SSLAB\dba", which is part of the local Administrators group on W2012SRV03 and W2012SRV04. Within the domain itself, "SSLAB\dba" is a plain user without special permissions.
Tomorrow I’ll post the continuation of this article, showing how to do the actual installation of the first node. Stay tuned!!
You may have previously seen a short post I did on a SQL statement to identify which statements are using dynamic sampling.
If not, quick recap:
SELECT p.sql_id, t.val
FROM   v$sql_plan p
,      xmltable('for $i in /other_xml/info
                 where $i/@type eq "dynamic_sampling"
                 return $i'
                passing xmltype(p.other_xml)
                columns attr varchar2(50) path '@type',
                        val  varchar2(50) path '/') t
WHERE  p.other_xml IS NOT NULL;
This uses the incredibly powerful XMLTABLE functionality, there’s so much that can be done with it.
Here are a couple of other utilities I used recently which also highlight the powerful convenience of SQL and XML.
First up, I don’t know if this is useful to anyone but I had a crappy refresh script which should have been creating table partitions with SEGMENT CREATION DEFERRED but wasn’t.
So there was a reasonable amount of space wastage caused by empty segments.
How to identify? See below.
Could be combined with DBMS_SPACE_ADMIN.DROP_EMPTY_SEGMENTS to clean up?
WITH subq_pos_empty AS
     (SELECT t.table_owner
      ,      t.table_name
      ,      t.partition_name
      ,      x.cnt
      FROM   dba_segments s
      ,      dba_tab_partitions t
      ,      xmltable('for $i in /ROWSET/ROW/CNT
                       return $i'
                      passing xmltype(
                              dbms_xmlgen.getxml
                              ('select count(*) cnt '
                             ||'from '||t.table_owner||'.'||t.table_name||' PARTITION ('||t.partition_name||') '
                             --||'SAMPLE(.01)' -- If you want to sample to speed up unexpected large seg counts
                              ))
                      columns cnt number path '/') x
      WHERE  s.segment_type       = 'TABLE PARTITION'
      --AND    t.table_owner LIKE 'XYZ%'
      AND    t.table_owner        = s.owner
      AND    t.table_name         = s.segment_name
      AND    t.partition_name     = s.partition_name
      AND    t.num_rows           = 0
      AND    t.partition_position > 1)
SELECT *
FROM   subq_pos_empty
WHERE  cnt = 0
ORDER BY table_owner, table_name, partition_name;
SQL> create table t1
  2  (col1 date
  3  ,col2 number)
  4  partition by range(col1) interval (numtodsinterval(1,'DAY'))
  5  (PARTITION p0 values less than (to_Date(20130101,'YYYYMMDD')) segment creation immediate
  6  ,PARTITION p1 values less than (to_Date(20130102,'YYYYMMDD')) segment creation immediate)
  7  ;

Table created.

SQL> exec dbms_stats.gather_table_stats(USER,'T1');

PL/SQL procedure successfully completed.

SQL> WITH subq_pos_empty AS
  2  (SELECT t.table_owner
  3   ,      t.table_name
  4   ,      t.partition_name
  5   ,      x.cnt
  6   FROM   dba_segments s
  7   ,      dba_tab_partitions t
  8   ,      xmltable('for $i in /ROWSET/ROW/CNT
  9                    return $i'
 10                   passing xmltype(
 11                           dbms_xmlgen.getxml
 12                           ('select count(*) cnt '
 13                          ||'from '||t.table_owner||'.'||t.table_name||' PARTITION ('||t.partition_name||') '
 14                          --||'SAMPLE(.01)' -- If you want to sample to speed up unexpected large seg counts
 15                           ))
 16                   columns cnt number path '/') x
 17   WHERE  s.segment_type   = 'TABLE PARTITION'
 18   --AND  t.table_owner LIKE 'XYZ%'
 19   AND    t.table_name     = 'T1' -- Comment out
 20   AND    t.table_owner    = s.owner
 21   AND    t.table_name     = s.segment_name
 22   AND    t.partition_name = s.partition_name
 23   AND    t.num_rows       = 0
 24   AND    t.partition_position > 1)
 25  SELECT *
 26  FROM   subq_pos_empty
 27  WHERE  cnt = 0
 28  ORDER BY
 29         table_owner
 30  ,      table_name
 31  ,      partition_name;

TABLE_OWNER                    TABLE_NAME                     PARTITION_NAME                        CNT
------------------------------ ------------------------------ ------------------------------ ----------
PGPS_UAT1                      T1                             P1                                      0
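To follow through on the DBMS_SPACE_ADMIN thought above, the cleanup could look something like this. A hedged sketch only, assuming 11.2.0.2 or later (where DROP_EMPTY_SEGMENTS exists); the table name is just the demo table from the test case:

-- Ask Oracle to drop the empty segments the query above identified.
-- All parameters of DROP_EMPTY_SEGMENTS default to NULL, meaning "all".
begin
  dbms_space_admin.drop_empty_segments(
    schema_name => user,  -- current schema only
    table_name  => 'T1'); -- the demo table above; omit to sweep everything
end;
/

If in doubt, rerun the identification query afterwards to confirm the segments are gone.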
Secondly, a helper for partitions and that nasty LONG column which can be used for partition maintenance to roll off oldest partitions:
SELECT table_name
,      partition_name
,      hi
FROM   (SELECT t.table_name
        ,      t.partition_name
        ,      t.partition_position
        ,      x.hi
        FROM   user_tab_partitions t
        ,      xmltable('for $i in /ROWSET/ROW/HI
                         return $i'
                        passing xmltype(
                                dbms_xmlgen.getxml
                                ('select high_value hi from user_tab_partitions x'
                               ||' where x.table_name = '''||t.table_name||''''
                               ||' and x.partition_name = '''||t.partition_name||''''))
                        columns hi number path '/') x
        --WHERE partition_position > 1
        --AND   table_name = i_table_name
       )
--WHERE hi <= i_date_yyyymmdd
;
This works an awful lot more easily if you have range/interval partitioning on a number – which most people probably don’t have.
For the more normal DATE range partitioning, it’s only slightly more fiddly.
I haven’t spent too long thinking about it so there may be a better way, but I tried to avoid the deprecated EXTRACTVALUE approach:
SELECT t.table_name
,      t.partition_name
,      t.partition_position
,      to_date(x2.dt,'YYYYMMDDHH24MISS') hi
FROM   user_tab_partitions t
,      xmltable('for $i in /ROWSET/ROW/HI
                 return $i'
                passing xmltype(
                        dbms_xmlgen.getxml
                        ('select high_value hi from user_tab_partitions x'
                       ||' where x.table_name = '''||t.table_name||''''
                       ||' and x.partition_name = '''||t.partition_name||''''))
                columns dt varchar2(4000) path '/') x
,      xmltable('for $i in /ROWSET/ROW/DT
                 return $i'
                passing xmltype(dbms_xmlgen.getxml(q'[select to_char(]'||x.dt||q'[,'YYYYMMDDHH24MISS') dt from dual]'))
                columns dt varchar2(16) path '/') x2
;
SQL> alter session set nls_date_format = 'DD-MON-YYYY HH24:MI';
SQL> SELECT t.table_name
  2  ,      t.partition_name
  3  ,      t.partition_position
  4  ,      to_date(x2.dt,'YYYYMMDDHH24MISS') hi
  5  FROM   user_tab_partitions t
  6  ,      xmltable('for $i in /ROWSET/ROW/HI
  7                   return $i'
  8                  passing xmltype(
  9                          dbms_xmlgen.getxml
 10                          ('select high_value hi from user_tab_partitions x'
 11                         ||' where x.table_name = '''||t.table_name||''''
 12                         ||' and x.partition_name = '''||t.partition_name||''''))
 13                  columns dt varchar2(4000) path '/') x
 14  ,      xmltable('for $i in /ROWSET/ROW/DT
 15                   return $i'
 16                  passing xmltype(dbms_xmlgen.getxml(q'[select to_char(]'||x.dt||q'[,'YYYYMMDDHH24MISS') dt from dual]'))
 17                  columns dt varchar2(16) path '/') x2
 18  WHERE  t.table_name = 'T1';

TABLE_NAME                     PARTITION_NAME                 PARTITION_POSITION HI
------------------------------ ------------------------------ ------------------ -----------------
T1                             P0                                              1 01-JAN-2013 00:00
T1                             P1                                              2 02-JAN-2013 00:00

SQL>
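For completeness, the same high_value evaluation can be done without XML at all, by letting dynamic SQL evaluate the LONG expression. A sketch under my own naming (partition_hwm is a made-up helper, not a built-in):

-- Hypothetical helper: evaluate a partition's LONG high_value as a DATE.
create or replace function partition_hwm (p_table varchar2, p_partition varchar2)
  return date
as
  l_high_value varchar2(4000); -- a LONG fetches fine into a VARCHAR2 in PL/SQL
  l_date       date;
begin
  select high_value into l_high_value
  from   user_tab_partitions
  where  table_name     = p_table
  and    partition_name = p_partition;
  -- high_value holds an expression such as TO_DATE(' 2013-01-02 ...'), so let SQL evaluate it.
  execute immediate 'select '||l_high_value||' from dual' into l_date;
  return l_date;
end;
/

The usual dynamic SQL caveats apply, and it still costs one round trip per partition.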
Warning about the XMLTABLE approach – if running on versions less than 11.2, you may occasionally run into some ORA-00600 bugs.
I had access to the first Early Adopter a few weeks before it hit the OTN download page, so it feels like I’ve been using some flavour of SQL Developer 4 for ages. I’m kinda old-school, so I still find myself working with a text editor (UltraEdit) and SQL*Plus a lot, but I’m trying to use SQL Developer more these days. The addition of the Performance Reports (AWR, ADDM and ASH) was certainly a nice touch.
Tim…
Using AutoConfig Tools for System Configuration
- adautocfg.sh - This script is used for running AutoConfig.
- adchkcfg.sh - This script may be run before AutoConfig to review the changes that AutoConfig will make.
- admkappsutil.pl - This script is used while applying patches to the database tier. Running this script generates appsutil.zip, which may be copied over to the database tier to migrate the patch to the database tier.
- adRegisterWLSListeners.pl - This script is used to listen to changes to the WebLogic Server configuration parameters and update the context variables accordingly.
- adSyncContext.pl - This script is used to explicitly pull the values of the WebLogic Server configuration parameters and synchronize the context variable values like synchronization of the OHS parameters.
- GenCtxInfRep.pl - This script can be used to find out detailed information about context variables and the templates in which they are used, given all or part of a context variable name as a keyword.
- adtmplreport.sh - This script can be used to gather information regarding the location of the AutoConfig templates, provided the location of the instantiated files and vice versa.
Run adRegisterWLSListeners Tool on the Application Tier
- The adRegisterWLSListeners tool has been introduced to perform synchronization of context variables with the associated Oracle WebLogic Server configuration parameters.
- This tool does not listen for changes to the Oracle HTTP Server configuration parameters.
- Once started, adRegisterWLSListeners keeps running, listening for changes to the WebLogic Server configuration and synchronizing the context files stored in the database.
- This tool starts and stops along with the WebLogic Admin Server.
Run SyncContext on the Application Tier
- The adSyncContext.pl script reads the WLS configuration parameter values and synchronizes them with the context variables.
- This is used to synchronize the OHS configuration parameters with the respective context variables.
- This mechanism is called the 'feedback loop'.
- The SyncContext tool is one of the tools used for explicit synchronization of the context variables with the WLS configuration parameters.
- In Oracle E-Business Suite Release 12.2, some important configuration files like httpd.conf and ssl.conf are no longer maintained by AutoConfig. Oracle Enterprise Manager 11g Fusion Middleware Control should be used to maintain these configuration files, as well as to make additional changes to context file variables.
As commerce becomes more digitalized each day, security threats to online business increase alongside it. Database security is a fast-growing market in the IT industry and for good reason. Cybercriminal activity is becoming a serious problem for many businesses who operate mainly online, and companies are constantly looking to advance security measures to ensure the protection of their customers as well as themselves.
Although the next big leap in digital security has yet to arrive, online businesses can still take measures to maximize the effectiveness of their current protection methods. A guide from The Wall Street Journal provided some guidance on how companies can get the most out of their security systems without having to stretch their budgets any further.
Too many businesses install new software and computer systems without customizing the security options that come standard with the products. By failing to personalize account names and passwords, IT managers and employees alike leave their security loose. The Wall Street Journal noted that many companies still use stock usernames such as 'administrator' and paper-thin passwords like "01234"; they may think they're saving time by keeping things simple, but they are actually risking the safety of their information and failing to use their security investments wisely.
The Wall Street Journal also reminded online businesses that rely on ecommerce to consider outsourcing their payment systems to a service that specializes in transaction security. Although creating an in-house payment method might seem like the most logical choice at first, companies will thank themselves in the long run for recruiting services such as PayPal to handle it for them. The many compliance standards and security risks can make developing a safe transaction channel a nightmare for software teams, and it may not seem worth the hassle.
Don't forget mobile security measures
A common pitfall for many online businesses is their failure to properly optimize mobile security for their employees and customers, leaving a wide open opportunity for hackers to infiltrate their databases. A recent report from Business2Community asked Bistech, a UK IT leader, how companies can reduce their chances of a mobile break-in.
Businesses should create foolproof security policies in the workplace and apply them directly to smartphone and tablet platforms. Mobile devices are inherently less secure due to their ability to be accessed anywhere, and companies with bring your own device policies are especially at risk because of this. By having a universal security rulebook, companies won't have to worry about weak links outside the office.
So the annual UKOUG technology conference has come and gone for yet another year. This time it was in a new location, having moved away from its regular berth in Birmingham. Manchester is not a city I'm that familiar with; the only previous time I had been there was a trip with my wife, and she drove me to a near-death experience with an oncoming tram.
Thankfully, the only near-death experience for me this year was laptop failure during a demo, though I did hear of some folks getting into a scrape or two.
Yes, there were fewer people about than in previous years, but that can be explained by the Apps folks having split off to their own conference. The venue was, I thought, pretty reasonable; while some rooms were pretty small, as lots of people commented, it is better presenting in a full small room than in a fairly empty large room. So for me, the venue worked fine. Manchester is likely to be harder for most people to get to than Birmingham, though, and I will be very interested to see which city the conference will be in next year; I understand it will be moving again.
So on to my experience at the conference. Certainly, my conference experience has changed a lot over the past few years, and the meeting-people aspect has come more to the fore. I had a strange conference in terms of the sessions I attended: I barely attended a database session! It was all storage/IO or operating systems. In some ways you could look at this as a bit alarming, as I'm not sure the value add is necessarily at that level of the stack; however, for me it's really where my career has been for most of the time.
The quality of the presentations I did see was outstanding, and that for me is a critical thing about the UKOUG annual conference: the quality of the presentations. I don't think I saw a bad one. The highlights for me were the following three:
Luca’s work is just awesome, and he has developed some latency visualisation tools, which were very interesting to see.
Round tables can either be really eye-opening or fairly uninteresting. It all depends on who is contributing what. This roundtable was in the eye-opening camp. Joel Goodman chaired this excellently and it was great to see the only 12c Grid Infrastructure implementation in the room was one done by e-dba and my colleague Svetoslav Gyurov.
This one was actually given at Oak Table World, running alongside the UKOUG conference. It was an excellent, eye-opening presentation on Hadoop.
The other thing that is important to me at conferences, compared to say 6-7 years ago, is presenting. I'm not sure I'd like to go to UKOUG and not present. Sadly, this year, due to other commitments from colleagues, I ended up picking up 4 presentations. This is too much for me to focus on at once. Thankfully, 3 of them were on Exadata, and one of those was a panel discussion. But 4 is quite a stressful amount.
So, some thoughts on my presentations.
This is a presentation I have delivered quite a few times and am very familiar with the content. Thankfully this went well and I was reasonably happy with the delivery, and I was able to add some material on the (now) newly available Exadata X4s to keep it fresh! You can grab this presentation if you are interested. Be aware there is lots of text to read in the notes field, even if the slides are fairly minimal.
Next was 2 on Tuesday:
I stepped in at the last minute for this one, and did 20 minutes on Exadata. I did not like this presentation – it was too much of a cut ‘n’ shut job. One attendee complained afterwards that there was not any content on actually using Exalytics with Exadata.
I couldn’t fail to agree.
So, I had not heard of Linux Containers until about 2 weeks prior to this presentation, but it turned out this was the one I had most been looking forward to. I’d done the most work leading up to UKOUG on this presentation as I had to learn it from scratch in the 2 weeks (as well as the other presentations, and the day-to-day job!). This was meant to show what Containers were, why you might use them and then demo them in action. I really thought the slides looked cracking (someone please take me aside and have a word if I’m out to lunch on this), and though I was hesitant about the content at times, I thought I’d almost pulled it off.
Then the demos, which at first were going fine, started to kill my laptop. And I mean kill. Not just the demos not working, but the laptop being totally, utterly unresponsive, so I could not even get back to PowerPoint from my VM. It was horrible. The presentation, though about 80% done, just came to an abrupt car crash of an ending. It took about 15 minutes after this to even power off my laptop!
As Tom Kyte put it the next day:
Image courtesy of Marc Fielding.
Both Andy and Frits are awesome and it was a privilege and a pleasure to share the stage with them. We had an excellent discussion and lots of interaction with the audience which I think made it a really worthwhile hour.
I’d love to do something like that again!
Again, image courtesy of Marc Fielding.
It was a great conference, I really had a great time, and I feel lucky and indeed proud to be part of a great Oracle community and to know so many outstanding individuals.
I just created a simple instance_of method that does nothing but return the name of the object. It was my first step in making things spiffier later on, keeping in mind that it should probably become an overloaded function with a parameter. Then I created a Person type:
CREATE OR REPLACE TYPE "DWN_PERSON" AS OBJECT
, member function instance_of return varchar2
) not final;
CREATE OR REPLACE TYPE BODY "DWN_PERSON" AS
member function instance_of return varchar2 AS
And because it was going so well, I created a Natural Person:
CREATE OR REPLACE TYPE "DWN_NATURAL_PERSON" under DWN_PERSON
( surname varchar2(100)
, overriding member function instance_Of return varchar2
CREATE OR REPLACE TYPE BODY "DWN_NATURAL_PERSON" AS
overriding member function instance_Of return varchar2 AS
And a Not Natural Person:
CREATE OR REPLACE TYPE "ZMS_REF"."DWN_NOT_NATURAL_PERSON" under DWN_PERSON
( companyname varchar2(100)
, overriding member function instance_Of return varchar2
CREATE OR REPLACE TYPE BODY "ZMS_REF"."DWN_NOT_NATURAL_PERSON" AS
overriding member function instance_Of return varchar2 AS
Then I created the following little test script. All three persons are declared as DWN_PERSON, so when I call the instance_of method on a particular object, I expected the method of the declared type to be called. But, a little to my surprise, it turns out that Oracle uses the method of the instantiated type:
declare
  l_person1 dwn_person;
  l_person2 dwn_person;
  l_person3 dwn_person;
  function create_person(p_name varchar2, p_surname varchar2, p_comany_name varchar2) return dwn_person is
    l_person dwn_person;
  begin
    if p_surname is not null then
      l_person := dwn_natural_person(p_name, p_surname);
    elsif p_comany_name is not null then
      l_person := dwn_not_natural_person(p_name, p_comany_name);
    else
      l_person := dwn_person(p_name);
    end if;
    return l_person;
  end;
begin
  l_person1 := create_person('Flip', 'Fluitketel', null);
  dbms_output.put_line('l_person1 is a '||l_person1.instance_of);
  l_person2 := create_person('Hatseflats', null, null);
  dbms_output.put_line('l_person2 is a '||l_person2.instance_of);
  l_person3 := create_person('Hatseflats', null, 'Hatseflats Inc.');
  dbms_output.put_line('l_person3 is a '||l_person3.instance_of);
end;
Simple, but apparently very effective.
l_person1 is a DWN_NATURAL_PERSON
l_person2 is a DWN_PERSON
l_person3 is a DWN_NOT_NATURAL_PERSON
The concept of big data has been thrown around the tech industry for years now, but what are companies doing to make sure they harness its power properly and effectively? More importantly, how do companies designate responsibility for this job within their organizations? According to Forbes, more businesses are finding it difficult to attract the human capital necessary to make the most out of what big data has to offer. The highly desired position of data scientist is becoming more valuable as data continues to increase exponentially.
Database experts have always been key players in the industry, but their services are becoming more crucial as companies begin to lag behind the constantly growing stream of data brought on by dynamic mobile platforms. With more data available to analyze, more strategies and possibilities for optimization arise with the proper resources. Forbes explained that big data is key for businesses wanting to automate workflow and maximize efficiency, but that hiring the right personnel to do these jobs is becoming expensive and harder to come by. Many companies are seeking software to fill this niche labor market.
"Adding more humans like expensive data scientists is not the solution – software is the answer. More data, more people, more complicated questions. You can't just make up data scientists," Bruno Aziza, CMO of Alpine Data Labs, told the news source. Professionals like Aziza are looking for the next advancement in software to integrate into their workloads and allow employees in all branches of the company to utilize big data with hands-on features such as collaborations and visual representations.
Database management remains a top priority
Companies in need of streamlined database management services are constantly in search of ways to lower costs and gain efficiencies, and this is reflected in a rapidly expanding cloud computing footprint within the IT industry. According to a recent report from Seeking Alpha contributor Trefis, software market leader Oracle is seeing steady growth with the increased demand for optimized database solutions.
With a market capitalization of over $150 billion, Oracle represents the second largest company in the industry and has caught up with cloud adaptation after partnering with Microsoft and Salesforce last quarter. IT developers such as Oracle are striving to be the ones that offer the next groundbreaking solution in big data analytics and usher in the next generation of business optimization.
Find all the details at http://www.oracle.com/us/products/database/exadata/database-machine-x4-2/overview/index.html