Oracle GoldenGate 12c (12.2.0.1.0) has changed the information that is stored in the trail files. All the standard information is still there; what Oracle changed has to do with the metadata that is used to define a table.
Note: If you want to understand how to use logdump and general trail information, look here.
Prior to the 12.2.0.1.0 release of Oracle GoldenGate, if the column order of tables between source and target was different, you needed to generate a “definitions” file using the “defgen” utility located in $OGG_HOME. This file allowed you to specify either a source or target definitions file, which could be used to map the order of columns correctly. This was a nice tool when needed.
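For context, the classic approach looked roughly like the following; the parameter file contents and file names here are illustrative, not taken from the original post:

-- dirprm/defgen.prm (illustrative)
DEFSFILE ./dirdef/soe_source.def
USERID gguser, PASSWORD ********
TABLE SOE.ORDERS;

$ $OGG_HOME/defgen paramfile ./dirprm/defgen.prm

The generated .def file was then copied to the other system and referenced with SOURCEDEFS in the Replicat parameter file.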
In 12.2.0.1.0, Oracle took this concept a little bit further. Instead of using a definitions file to do the mapping between source and target tables, Oracle has started to provide this information in the trail files. Review the image below, and you will see the table definition for SOE.ORDERS, which I use in my test environment.
Notice at the top, the general header information is still available for viewing. Directly under that, you will see a line that has the word “metadata” in it. This is the start of the “metadata” section. Below this is the name of the table and a series of numbered categories (keep this in mind). Then below this is the definition of the table, with the columns and the length of the record.
A moment ago, I mentioned the “numbered categories”. These categories correspond to the information listed to the right of the columns defined for the table. When comparing the table/columns between the database and the trail file, a few things stand out.
In column 2 (Data Types), the following numbers correspond to this information:
134 = NUMBER
192 = TIMESTAMP (6) WITH LOCAL TIME ZONE
64 = VARCHAR2
Column 3 (External Length) is the size of the data type:
13 = NUMBER(12,0) + 1
29 = Length of TIMESTAMP (6) WITH LOCAL TIME ZONE
8 = VARCHAR2 length of 8
15 = VARCHAR2 length of 15
30 = VARCHAR2 length of 30
There is more information that stands out, but I’ll leave a little bit for you to decode. Below is the table structure that is currently mapped to the example given so far.
Now, you may be wondering, how do you get this information to come up in the logdump interface? Oracle has provided a logdump command, SCANFORMETADATA, that is used to display/find metadata information.
There are a few options that can be passed to this command to gather specific information. These options are:
DDR | TDR
NEXT | INDEX
If you issue SCANFORMETADATA with the DDR option, you will get information related to the Data Definition Record (DDR) of the table.
If you issue SCANFORMETADATA with the TDR option, you will get information related to the Table Definition Record (TDR) of the table. The information provided includes the output already discussed earlier.
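Putting it together, a minimal logdump session against a 12.2 trail might look like the following (the trail name and prompt numbers are illustrative):

Logdump 10 >OPEN ./dirdat/lt000000000
Logdump 11 >SCANFORMETADATA DDR
Logdump 12 >SCANFORMETADATA TDR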
As you can tell, Oracle has provided a lot of the information that traditionally lived in the definitions files for mapping tables directly in the trail files. This will make mapping data between systems a bit easier and architectures less complicated.
Filed under: Golden Gate
This article will address the main techniques for the efficient management of multiple instances in a SQL Server 2014 environment using the tools Data Collector and Policy-Based Management. In addition, it will demonstrate in a practical way how to configure each of the tools, as well as how to extract and analyze the collected metrics.
With the exponential growth of the amount of data generated by applications comes increased complexity in managing database environments for the database administrator. With this growth, combined with the low cost of storage media and servers, a scenario emerged in which the database administrator went from administering dozens of databases to administering hundreds of them.
Since the main responsibilities of a database administrator are to ensure the integrity, performance and stability of all SQL Server instances under his or her administration, the greater the number of instances and databases used within an enterprise, the greater the difficulty in monitoring and managing such an environment in a proactive and automated fashion.
For this type of scenario, SQL Server makes it possible to centralize both the execution of an administrator's routine tasks and the collection of performance metrics from all existing instances and databases, through the tools Data Collector (DC) and Policy-Based Management (PBM). For example, suppose all databases that have the recovery model parameter set to Full must perform a backup of the log file every hour. Instead of this policy existing only as a concept, requiring you to check manually on all database servers, you can use the PBM to physically create a policy and ensure that it is applied at once to all SQL Server instances.
In order to facilitate the understanding of the management of multiple instances, the tools will be presented in the following order: first we will look at Policy-Based Management, and then we will get to know the Data Collector.
What is the Policy-Based Management?
Policy-Based Management (PBM) is a feature available starting with SQL Server 2008 that enables the creation and implementation of policies on your SQL Server instances. PBM works similarly to group policies created through Active Directory.
Note: Group policies provide centralized management of applications and users, by means of rules created by system administrators that can be applied at various levels of the directory structure defined in Active Directory.
PBM applies a policy on a particular target, for example a database, a table, a view, or a stored procedure, and then checks whether the target is in accordance with the rules of this policy. If the target does not comply, it is possible either to enforce the rules of the policy or to raise an alert to the database administrator so he/she knows of the violation.
One of the great advantages of the PBM is the implementation of a policy on multiple instances of SQL Server at once, facilitating the administration and management of the corporation's entire database infrastructure.
Many features of SQL Server 2014, such as Resource Governor, Data Compression and In-Memory OLTP, require the Enterprise or Developer edition. This is not the case for the PBM, which is available in all editions of SQL Server, including Express (although with the Express Edition it is not possible to create a Central Management Server).
As soon as the SQL Server 2014 instance is installed, it is possible to create and evaluate policies against any existing SQL Server in your environment, including versions prior to 2014.
Policy-Based Management Components
The PBM is composed of three main components: Policies, Conditions and Facets, as shown in Figure 1. These components are arranged in a sort of hierarchical order for using the PBM. A facet is required for creating a condition, and a condition is necessary for the creation of policies. The policies, in turn, are applied to specific targets.
The targets are the objects managed by a particular policy and can be of various types: servers, databases, instances, stored procedures, etc. An important detail is that you can use more than one target at the same time in a policy. For example, we can have a policy which states that only object names starting with the db_ prefix are correct, and perform a validation of this rule on the tables, functions and stored procedures of one or more instances at the same time.
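As an illustration (hypothetical, not from the article), the condition behind such a naming rule would be built on a name-related facet such as Multipart Name, with an expression along these lines:

@Name LIKE 'db_%'

The policy would then reference this single condition and be evaluated against the table, function and stored procedure targets on the chosen instances.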
A facet is a group of properties that relate to a particular target. SQL Server 2014 has 86 facets, each containing several different properties. This allows the use of hundreds of properties in the creation of a policy.
You can view the properties of a facet by expanding the Facets folder and double-clicking any of the options. For example, the facet Data File has several properties, such as the maximum size of the data file, the number of reads and writes, and whether the data file is online, as shown in Figure 2.
Note: The facets are available as read-only, i.e. it is not possible to create customized facets or to modify existing ones. Currently, new facets can be included only by Microsoft, through service packs or by upgrading the version of SQL Server.
A condition can be described as a rule against which a policy is evaluated. Basically, the rule checks a target and, if the target is not in accordance with the policy, it fails. It is worth mentioning that a policy can evaluate only one condition; however, a condition can have different rules for different properties. A condition can be viewed by expanding the Conditions folder and double-clicking any one of the available options, as shown in Figure 3.
Note: In a SQL Server instance, customized conditions will not exist unless previously imported or created manually; that is, initially there will be only the system conditions.
The policies are complete packages that include conditions, facets, targets, evaluation modes and server restrictions (evaluation modes and server restrictions are discussed in the next topics).
When created, the policies are stored in the msdb system database, but you can export them to an XML format. This portability allows database administrators to share and compare the created policies with greater ease. To view a policy it is necessary to expand the Policies folder and double-click any one of the options, as shown in Figure 4.
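Because everything is stored in msdb, you can also inspect the created policies with a simple query; a minimal sketch, assuming the standard syspolicy views, is shown below:

SELECT p.name AS policy_name,
       c.name AS condition_name,
       c.facet
FROM msdb.dbo.syspolicy_policies AS p
JOIN msdb.dbo.syspolicy_conditions AS c
    ON p.condition_id = c.condition_id;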
Note: In a SQL Server instance, customized policies will not exist unless previously imported or created manually; that is, initially there will be only the system policies.
Policy Evaluation Modes
The PBM has four distinct evaluation modes, which determine how the evaluation of a policy will occur against a predefined target. The following evaluation modes may be available, depending on the facet used in the policy:
- On Demand: This evaluation mode specifies that the evaluation should occur manually. By default, any policy with this evaluation mode is disabled automatically after it is created. However, even though it is disabled, it can still be evaluated at any time.
- On Schedule: By selecting this mode you can schedule the policy to be evaluated at any time. You can select a schedule already created or create a new schedule that meets your needs. Creating a schedule allows you to set options such as the recurrence of execution, the execution frequency per day, the frequency of execution per hour, and how long a policy should be executed. For example, you could run a particular policy for the next two weeks.
- On Change: Log Only: When you select this mode, the policy will be evaluated only if a change is made to the specified target. If the change violates the policy, the event will be executed and the results of the violation will be stored in the event log and in the msdb system database. This evaluation mode helps the database administrator without affecting the performance of the environment.
- On Change: Prevent: This evaluation mode is very similar to On Change: Log Only, namely, the evaluation happens the moment an event performs any change on the target. But unlike the Log Only option, Prevent performs a rollback of any change which violates the policy.
Figure 5 shows an example of a policy and the evaluation modes available for it.
In conjunction with the targets and the facets, server restrictions are another way to control how a policy is evaluated. A server restriction is nothing more than a condition used to exclude a particular server from a policy, through the Server facet.
With the use of the PBM, you can create a server restriction to limit the evaluation of a policy to only the instances of SQL Server using the Standard or Enterprise editions. When this policy is applied, it will not be evaluated by the instances that do not use these specific editions.
Management of Policies
SQL Server 2014 has some features that facilitate the management and evaluation of policies created. One of these features is the ability to create categories to group similar policies and use the Central Management Server (CMS) to execute the policies throughout the database environment.
The categories are logical groupings of one or more policies that assist in their management and execution. For example, you can create a policy group that will be evaluated only in test or development environments. When a policy is created, you can specify a category in the Description options, as shown in Figure 6.
Figure 6. Category definition
Central Management Server (CMS)
CMS functionality is not part of the PBM architecture, but it has become extremely important in the use of policies in a SQL Server environment consisting of multiple servers.
Through the CMS you can designate a database instance (SQL Server 2008 or later) to be the central management server and store a list of registered instances that can be organized into one or more groups, as shown in Figure 7.
Once the policies are configured and implemented, there is no need to constantly check the servers to make sure that they are in accordance with the conditions set out in the policies. Instead, we can use the SQL Server Agent alerts to receive notifications automatically when a policy is violated.
In the next post of this 3-part series we will learn how to create a policy and how to use it.
Discover more about our expertise in SQL Server.
In 2007, we launched our first book and in the last 8 years, we've released over 10 books on Essbase, Planning, Smart View, Essbase Studio, and more. (We even wrote a few books we didn't get to publish on Financial Reporting and the dearly departed Web Analysis.) In 2009, we started doing free day-long, multi-track conferences across North America and participating in OTN tours around the world. We've also been trying to speak at as many user groups and conferences as we can possibly fit in. Side note, if you haven't signed up for Kscope16 yet, it's the greatest conference ever: go to kscope16.com and register (make sure you use code IRC at registration to take $100 off each person's costs).
We've been trying to innovate our education offerings since then to make sure there were as many happy Hyperion, OBIEE, and Essbase customers around the world as possible. Since we started webcasts, books, and free training days, others have started doing them too which is awesome in that it shares the Oracle Business Analytics message with even more people.
The problem is that the time we have for learning and the way we learn has changed. We can no longer take the time to sit and read an entire book. We can't schedule an hour a week at a specific time to watch an hour-long webcast when we might only be interested in a few minutes of the content. We can't always take days out of our lives to attend conferences no matter how good they are. So in June 2015 at Kscope15, we launched the next evolution in training (epm.bi/videos):
#PlayItForward is our attempt to make it easier for people to learn by making it into a series of free videos. Each one focuses on a single topic. Here's one I did that attempts to explain What Is Big Data? in under 12 minutes:
As you can see from the video, the goal is to teach you a specific topic with marketing kept to an absolute minimum (notice that there's not a single slide in there explaining what interRel is). We figure if we remove the marketing, people will not only be more likely to watch the videos but share them as well (competitors: please feel free to watch, learn, and share too). We wanted to get to the point and not teach multiple things in each video.
Various people from interRel have recorded videos in several different categories including What's New (new features in the new versions of various products), What Is? (introductions to various products), Tips & Tricks, deep-dive series (topics that take a few videos to cover completely), random things we think are interesting, and my personal pet project, the Essbase Technical Reference.
Essbase Technical Reference on Video
Yes, I'm trying to convert the Essbase Technical Reference into current, easy-to-use videos. This is a labor of love (there are hundreds of videos to be made on just Essbase calc functions alone) and I needed to start somewhere. For the most part, I'm focusing on Essbase Calc Script functions and commands first, because that's where I get the most questions (and where some of the examples in the TechRef are especially horrendous). I've done a few Essbase.CFG settings that are relevant to calculations and a few others I just find interesting. I'm not the only one at interRel doing them, because if we waited for me to finish, well, we'd never finish. The good news is that there are lots of people at interRel who learned things and want to pass them on.
I started by doing the big ones (like CALC DIM and AGG) but then decided to tackle a specific function category: the @IS... boolean functions. I have one more of those to go and then I'm not sure what I'm tackling next. For the full ever-increasing list, go to http://bit.ly/EssTechRef, but here's the list as of this posting:
- @ISANCEST and @ISIANCEST
- @ISCHILD and @ISICHILD
- @ISDESC and @ISIDESC
- @ISSIBLING and @ISISIBLING
- CALC ALL
- CALC DIM
- FIX and ENDFIX
- SET CalcParallel
- SET CalcTaskDims
- SET UpdateCalc
I think I'm going to go start working on my video on FIXPARALLEL.
Have you ever used Microsoft’s PowerBi service? If you have, have you used it within the last year? The reason I ask is that the current PowerBi service is so radically different from the initial release that pretty much the only thing they have in common is the name. Today I’m going to do a short summary of how we got here, where the service is, where it’s going and the actual topic for my video, the new awesome custom visualization functionality!
A Short History Lesson
A few years ago, Microsoft went in the direction of empowering business analysts in what is one of the most common business applications in the world: Excel. With this push, they started releasing some amazing plugins for Excel: PowerPivot, PowerQuery, PowerView, PowerMap. Suddenly we could import millions of rows from all kinds of sources into Excel! And transform them! And visualize them! Then with the release of Office 365 and SharePoint Online, a service was created to make it easy to share and consume all these Excel-born reports. And thus PowerBi was born, but it required all these other tools and subscriptions. It didn’t catch on.
Fast Forward To Today
This initial offering of PowerBi had too many external dependencies. You needed Excel with all the bells and whistles to do the reports and then all these other satellite cloud services. Thankfully someone saw and executed on a clearer vision: one service, no dependencies, no cost of entry, fully mobile friendly. It’s been a very interesting journey from a market perspective to see how MS released something, course corrected and then went head-on with their improved vision.
The PowerBi desktop designer is free. The mobile apps (all major mobile OSes AND Windows Phone) are free as well. The service itself also has a free, fully functional tier; it’s only limited by the amount of data and the enterprise sharing capabilities. Add the ease of use of the tools and the natural language query capabilities, and this is now a strong tool that can finally become a contender with the Tableaus and QlikViews of the world.
No, it’s not perfect but it is growing and an insane amount of new features are getting added constantly. New vendors are adding content packs and now custom visualizations have made an appearance.
The idea behind the first batch of custom visuals was great. MS opened a contest for people to submit their best custom visuals and the community responded with amazing creativity and participation. Not only do these immediately provide more value to every current user of PowerBi but they also serve as examples for further development of more custom visuals.
The full gallery of custom visuals can be found in the PowerBi Visual Gallery.
And if you don’t have your PowerBi account, try it out, it’s free! Head over to the official PowerBi site.
For the demo of working with a PowerBi project on the desktop, online and importing a custom visual, let’s check out the video! Enjoy!
Discover more about our expertise in SQL Server.
The show goes on. This Log Buffer Edition picks some blogs which are discussing new and old features of Oracle, SQL Server and MySQL.
- Directory Usage Parameters (ldap.ora) list the host names and port number of the primary and alternate LDAP directory servers.
- Data Visualization Cloud Service (DVCS) is a new Oracle Cloud Service. It is a subset offering of the currently supported Business Intelligence Cloud Service (BICS).
- ORA-24247: network access denied by access control list (ACL).
- Latches are low-level serialization mechanisms, which protect memory areas inside the SGA. They are lightweight and less sophisticated than enqueues and can be acquired and released very quickly.
- handling disks for ASM – when DB, Linux and Storage admins work together.
- How to use the Performance Counter to measure performance and activity in Microsoft Data Mining.
- Phil Factor demonstrates a PowerShell-based technique taking the tedium out of testing SQL DML.
- Sandeep Mittal provides an introduction to the COALESCE function and shows us how to use it.
- Hadoop's many flavors of SQL.
- Installing and Getting Started With Semantic Search.
- Support for storing and querying JSON within SQL is progressing for the ANSI/ISO SQL Standard, and for MySQL 5.7.
- Loss-less failover using MySQL semi-synchronous replication and MySQL Fabric!
- Memory consumption: The binary format of the JSON data type should consume more memory.
- This post compares a B-Tree and LSM for read, write and space amplification. The comparison is done in theory and practice so expect some handwaving mixed with data from iostat and vmstat collected while running the Linkbench workload.
- If you do not have reliable network access (i.e. in some remote places) or need something really small to store your data, you can now use Intel Edison.
The goal is to test the delayed durability feature with mirroring in high performance and high safety modes. The idea is to confirm what the performance improvement of the transactions is, and whether it has any benefit on high latency networks.
We have two databases configured with mirroring in high safety mode, which requires that commits happen at the secondary first. One database, called “DelayedTEST”, has delayed durability enabled with FORCED mode. I have then configured performance counters to check the latency and performance of each database every second; I have added mirroring, transaction, lock and physical disk counters so we can compare the performance with and without delayed durability in a mirrored environment. Then we are going to switch mirroring to high performance mode and see its behaviour as well. No witness is configured.
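For reference, the relevant settings can be applied with statements along these lines (the database name is the one used in this test; the rest is standard SQL Server 2014 syntax):

-- Enable forced delayed durability on the test database
ALTER DATABASE [DelayedTEST] SET DELAYED_DURABILITY = FORCED;

-- Switch the mirroring session between high safety and high performance
ALTER DATABASE [DelayedTEST] SET PARTNER SAFETY FULL; -- high safety
ALTER DATABASE [DelayedTEST] SET PARTNER SAFETY OFF;  -- high performance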
Using the SQLQueryStress tool I am loading 5000 transactions into a log table with 50 threads enabled. This is equivalent to 50 people loading 5000 records into the same table at the same time. It should be a good test to analyze the behaviour of the databases.
Normal Database Test (Full Transactions) High Safety Mode
Locks and Transactions
A failover happens 30 seconds after starting the test. The database is able to fail over, with a record count of 98850.
Delayed Database Test (Delayed Transactions) High Safety Mode
Locks and Transactions
A failover happens 30 seconds after starting the test. The database is able to fail over, with a record count of 165757.
Mirroring High Safety Mode
Having delayed transactions enabled in a database with high safety mirroring improves performance under high contention scenarios. In this case having lots of transactions running at the same time and requesting for the same table object proved to be better, as the execution time was faster when using delayed transactions.
When checking the mirroring behaviour one can see that the mirroring log queue is bigger with delayed transactions. The difference is that with full transactions the queue is processed sequentially, which decreases the queue while the transaction completes. As a result, you will see a high spike at the beginning of the stress test, which is the full queue, and then it decreases over time. When using delayed transactions one can see spikes spread evenly every 20 seconds or so; this means that one batch is processed, then another batch is processed, and this process carries on until the queue is empty.
So does sequential mirror queue processing vs. batch processing make a difference in latency? It is clear that transaction delay is 3 times worse with delayed transactions, as it processes batches in a cyclic way, which saturates the endpoint more on every iteration and also keeps the transaction waiting for a commit acknowledgement from the mirroring partner. So, with delayed transactions under high contention, processing is faster locally as it has fewer logical reads/writes, but the mirroring communication slows down as it works in batches.
But was total execution time better with delayed transactions?
Having delayed transactions improves execution time but increases mirroring delay, so it is simply a matter of doing the math. The local processing was so much better that it compensated for the slower mirroring transaction delay, and in general terms it is faster in this scenario (I used a local network). However, if you add other variables to the stress test, such as a high latency network for the mirroring, the end result might not be compensated and the total execution time can end up being around the same or worse.
Now, let's see the numbers with high performance mode:
Normal Database Test (Full Transactions) High Performance Mode
Locks and Transactions
Delayed Database Test (Delayed Transactions) High Performance Mode
Locks and Transactions
Conclusion High Performance Mode
Mirroring High Performance Mode
With high performance mirroring, having delayed transactions does not improve performance, despite the fact that the indicators show less locking and fewer logical reads. It seems the mirroring queue was not being handled properly. The mirroring queue is considerably bigger when using delayed transactions, and despite the fact that we are working in high performance mode, the general execution time is the worst in this test environment. After I realized this I ran other tests to see if the results were the same, and generally speaking the total execution time is almost the same, with a difference of 4-10 seconds tops.
Across all the tests we can conclude that, when using delayed transactions, the mirroring queue and the amount of data transferred are consistently and considerably larger than those produced by full transactions.
When using high safety mode, it seems delayed transactions have better disk performance, which compensates for the higher mirroring/network values and results in a faster execution time. This situation can change if the network has high latency, producing the same or worse performance.
When using high performance mode, it seems delayed transactions have OK to good disk performance, but it is not good enough to compensate for the higher mirroring/network values, and it results in the same or slower execution time.
I would recommend enabling delayed transactions only to fix high contention rates. Also avoid using it over high latency or very busy networks when mirroring in high performance environments. These recommendations are based on a small test in a controlled environment and should not necessarily be extended to all environments out there. The rule of thumb is that using delayed transactions does impact mirroring performance, which by itself is a technology known to introduce certain performance issues depending on the system, so do not take the decision to use delayed transactions lightly, and test first in a staging environment before rolling into production.
Discover more about our expertise in SQL Server.
It’s getting to that time of year, so here are some of the sessions I’ve pencilled in for UKOUG Tech 15:
Sunday 6th:
Probably the whole of the Development stream.
Monday 7th:
9:00 – 9:50: Tom Dale – Fivium : An insight into Oracle Standard Edition, what’s included, what’s not, and how to plug some of those holes!
11:20 – 12:10: Me (et al.) : Cost Based Optimizer – Panel Session
12:20 – 13:10: Franck Pachot – DBi Services : All About Table Locks: DML, DDL, Foreign Key, Online Operations,…
14:10 – 15:00: Maria Colgan – Oracle : Oracle Database In-Memory By Example
15:10 – 16:00: Philip Brown – Red Stack Tech : How to Evaluate the 12c In-Memory Option
16:30 – 17:20: Tony Hasler – Anvil Computer Services : Optimizer Roundtable
17:30 – 18:20: Joel Goodman – Oracle : Oracle Standard Edition Roundtable
Tuesday 8th
14:10 – 15:00: Luke Davies – Pythian : It’s Always the Network. Exonerating the Database
15:30 – 16:20: Me : Just Don’t Do It
16:30 – 17:20: Nelson Calero – Pythian : Evolution of Performance Management: Oracle 12c Adaptive Optimization
Wednesday 9th
10:00 – 10:50: Carl Dudley – University of Wolverhampton : Oracle 12c New Features – The Lesser Spotted Variety
This blog post is a continuation to an earlier post about my musings on Oracle EBS support for virtual host names.
Actually, most parts of Oracle E-Business Suite work with virtual host names without any problem. The only component that doesn’t work when using virtual host names is the Concurrent Managers. Concurrent Managers expect that the node name defined in the Concurrent Manager definition screen matches the host name the FNDLIBR executable reads at the server level. Having the virtual host name as an alias in the hosts file on the server doesn’t cut it for the FNDLIBR executable. FNDLIBR reads the host name of the server using the Unix gethostname system call.
This behaviour of FNDLIBR can be worked around by overriding the Unix gethostname system call using the LD_PRELOAD functionality. There is already a prebuilt program out there on GitHub to achieve this: it’s called fakehostname. I have tested this and verified that it works with Oracle EBS 11i, R12.0 and R12.1 versions without any problem.
Here is a demo:
$ export LD_PRELOAD=/home/oracle/fakehost/libfakehostname.so.1
$ export MYHOSTNAME=ebsfakehost
$ export MYHOSTNAME=newebshost
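At this point, checking the host name should return whatever MYHOSTNAME is set to rather than the real server name (illustrative output, assuming the library is loaded as shown above):

$ hostname
newebshost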
This utility works by making the Concurrent Managers think they are running on the virtual host, by overriding the gethostname system call. However, this method of getting EBS to work with virtual host names doesn’t work any more with EBS R12.2. The reason is that EBS R12.2 is shipped as a mix of 32-bit and 64-bit components. Earlier releases of EBS like 11i, 12.0 and 12.1 are 32-bit only, even though they run on 64-bit platforms. We can get EBS R12.2 working by having both 32-bit and 64-bit versions of the fakehostname library in LD_PRELOAD, but EBS then spews too many warning messages about not being able to load 32-bit/64-bit libraries, which defeats the whole purpose of having a simple solution.
I am working on another way of getting virtual host names working in EBS R12.2. I will post that in my next blog post. Stay tuned!
Discover more about our expertise in the world of Oracle.
Another annual DOAG conference has passed, and I can only say the very best about it: Perfectly organized, large and modern location, impressive list of well known speakers and over 2100 attendees – wow!
My presentation Best of RMAN was scheduled at the first slot on the first day, so I was a bit concerned whether many people would attend that early. It turned out that the room got so full that I was asked by the organizers to deliver the same talk again next day – which I happily did, again with a packed room :-)
Apart from speaking myself, I very much enjoyed seeing friends and colleagues again, as well as people I knew from social media but had never met in person. Thank you all for your good company guys – I may appear a bit quiet and distanced sometimes, but I really appreciate you and our little conversations. Personal highlight: The lady who approached me at the stairs, shaking hands and telling me that she knew me from an Oracle University class that she liked so much :-)
There were many interesting presentations, just to mention some of those I attended myself:
Frits Hoogland with Oracle Exadata and database memory
In his very detailed talk, he emphasized the importance of using Huge Pages on Exadata and that MEMORY_TARGET aka Automatic Memory Management is not an option here.
Jonathan Lewis presented Five Hints for Optimising SQL
I’m always amazed how much this man knows and how well he is able to explain it to us mere mortals :-)
Lothar Flatz was presenting Anatomie eines SQL Befehls (how to do Oracle Performance Tuning with a scientific approach)
During his very entertaining talk, he quoted no less than seven Oakies (including himself), so that was quite good advertising for the OakTable Network :-)
Frank Schneede delivered Eine Datenbank kommt selten allein (DB Cloning on Exadata using sparse diskgroups)
while Ulrike Schwinn presented Oracle DB Memory Techniken für mehr Performance (Overview about the meanwhile so many different ways to deal with memory in Oracle)
Couldn’t really catch her because she was running out of the picture all the time :-)
Martin Berger also did an interesting talk: Wie misst Oracle die Verwendung von Database Features? (How to make sense of DBA_FEATURE_USAGE STATISTICS)
I liked his presentation style with many live demonstrations very much!
My dear colleague Joel Goodman talked about Automatic Parallel Execution
Joel is what I call a ‘Standup Instructor’ – mention any database related topic and he will be able to deliver an excellent 30 minutes talk about it instantly :-)
A big THANK YOU to the many people from DOAG who helped to make that event take place again in such an impressive way! Hope to be there again next year.
- Oracle Data Visualization Cloud Service is Generally Available Now! (Oracle Partner Hub: ISV Migration Center Team)
via Oracle Partner Hub: ISV Migration Center Team https://blogs.oracle.com/imc/
- Partner Webcast - Oracle Private Cloud Solutions for IaaS and PaaS
via Blogs.Oracle.com/IMC - Slideshows by User: oracle_imc_team http://www.slideshare.net/
- IoT, Mobility and Cloud Integration Hands-On: Controlling Raspberry Pi GPIO Using Oracle MAF App and Oracle ADF Application Deployed on Oracle Java Cloud Service (Part II) (Oracle Partner Hub: ISV Migration Center Team)
via Oracle Partner Hub: ISV Migration Center Team https://blogs.oracle.com/imc/
With the announcement of Oracle GoldenGate 12c (12.2.0.1.0) at Oracle Open World this year, I was interested to hear that Oracle has finally integrated the heartbeat process into the core functionality of GoldenGate. Setting up the heartbeat has always been a fun thing to do with GoldenGate, and I had hoped that this move to integrate it would bring benefits with it. To my dismay, I’m a little underwhelmed with the heartbeat that is now integrated with GoldenGate 12c (12.2.0.1.0).
Note: If you would like to set up the older version of the heartbeat, use My Oracle Support (MOS Note: 1299679.1) to reference the items needed.
Although I’m not 100% sold on the new integrated heartbeat, it is a bit easier to set up than the process identified in MOS note 1299679.1, and a little less confusing.
The first thing that Oracle has done is add a few new GoldenGate parameters that are used to define heartbeat related items. I’ve used these parameters in the GLOBALS file; that way the changes affect the whole environment. A minimal GLOBALS sketch follows the list below.
- HEARTBEATTABLE <table_name> = this parameter allows you to define the heartbeat table you want to use; default name will be GG_HEARTBEAT
- ENABLE_HEARTBEAT_TABLE | DISABLE_HEARTBEAT_TABLE = These parameters are used to either start or stop the heartbeat related items in the GoldenGate environment. (the parameters can be used in either GLOBALS, Extract or Replicat to enable/disable the heartbeat)
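As a reference, a minimal GLOBALS file using these parameters might look like the following; the schema and table name are just examples:

HEARTBEATTABLE ggate.gg_heartbeat
ENABLE_HEARTBEAT_TABLE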
After making the changes to the GLOBALS file, additional changes need to be made from GGSCI. If you issue a “help” command from the GGSCI prompt, you will notice there are now options to add a heartbeat table (similar to setting up the checkpoint table). The image below shows the commands that are now available in GGSCI.
To create the heartbeat table, you simply need to run ADD HEARTBEATTABLE from GGSCI after logging into the database as the GoldenGate user.
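A minimal GGSCI session for this, with a hypothetical GoldenGate user, would look roughly like:

GGSCI> DBLOGIN USERID ggate PASSWORD ********
GGSCI> ADD HEARTBEATTABLE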
After logging into the database and issuing the ADD HEARTBEATTABLE command, GoldenGate will create all the items needed for the heartbeat process to work. The below image shows all the objects that GoldenGate will create for heartbeat.
You will notice that the ADD HEARTBEATTABLE command creates the following database objects:
<heartbeat_table>_SEED (default GG_HEARTBEAT_SEED)
<heartbeat_table> (default GG_HEARTBEAT)
<heartbeat_table>_HISTORY (default GG_HEARTBEAT_HISTORY)
Unlike the old way of creating a heartbeat, where GoldenGate used triggers, it is now using stored procedures to populate the tables. Then, just like the old way, a scheduler job is used to execute the stored procedures to update or purge the tables. The main difference compared to the old heartbeat approach is that there are views used to calculate the lag between processes.
Closer Look at Heartbeat Table:
Taking a closer look at the heartbeat table, Oracle is using a lot of timestamps. This is similar to what is traditionally done in the older heartbeat setup. The one thing that should be noted is that you cannot customize the table to include DML and/or DDL statistics.
Note: I have not tried to customize the table; I will test this a bit later to see if I can add the items I would like, such as DML/DDL statistics.
—Heartbeat Table Structure—
CREATE TABLE "GGATE"."GG_HEARTBEAT"
( "LOCAL_DATABASE" VARCHAR2(512 BYTE),
"HEARTBEAT_TIMESTAMP" TIMESTAMP (6),
"REMOTE_DATABASE" VARCHAR2(512 BYTE),
"INCOMING_EXTRACT" VARCHAR2(128 BYTE),
"INCOMING_ROUTING_PATH" VARCHAR2(4000 BYTE),
"INCOMING_REPLICAT" VARCHAR2(128 BYTE),
"INCOMING_HEARTBEAT_TS" TIMESTAMP (6),
"INCOMING_EXTRACT_TS" TIMESTAMP (6),
"INCOMING_ROUTING_TS" TIMESTAMP (6),
"INCOMING_REPLICAT_TS" TIMESTAMP (6),
"OUTGOING_EXTRACT" VARCHAR2(128 BYTE),
"OUTGOING_ROUTING_PATH" VARCHAR2(4000 BYTE),
"OUTGOING_REPLICAT" VARCHAR2(128 BYTE),
"OUTGOING_HEARTBEAT_TS" TIMESTAMP (6),
"OUTGOING_EXTRACT_TS" TIMESTAMP (6),
"OUTGOING_ROUTING_TS" TIMESTAMP (6),
"OUTGOING_REPLICAT_TS" TIMESTAMP (6),
SUPPLEMENTAL LOG DATA (ALL) COLUMNS
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "GGATE" ;
—End Heartbeat Table Structure–
The end result here is that you now have a “canned” table strictly for reporting lag within the GoldenGate environment. This is great for identifying network bandwidth issues; however, any other metrics you would like to track would need to be gathered by other means.
Closer Look at the GG_LAG View:
From looking at the views that are created when running ADD HEARTBEATTABLE from GGSCI, it is clear that Oracle took the logic that was in the old triggers and moved it into the views. The GG_LAG view also provides some additional information about the path structure as well.
—Heartbeat View – GG_LAG —
CREATE OR REPLACE FORCE EDITIONABLE VIEW "GGATE"."GG_LAG"
("LOCAL_DATABASE", "CURRENT_LOCAL_TS", "REMOTE_DATABASE", "INCOMING_HEARTBEAT_AGE", "INCOMING_PATH", "INCOMING_LAG", "OUTGOING_HEARTBEAT_AGE", "OUTGOING_PATH", "OUTGOING_LAG")
AS SELECT local_database, current_local_ts, remote_database,
(extract(day from in_hb_age) * 86400 + extract(hour from in_hb_age) * 3600 + extract(minute from in_hb_age) * 60 + extract(second from in_hb_age)) incoming_heartbeat_age,
incoming_path,
(extract(day from in_lag) * 86400 + extract(hour from in_lag) * 3600 + extract(minute from in_lag) * 60 + extract(second from in_lag)) incoming_lag,
(extract(day from out_hb_age) * 86400 + extract(hour from out_hb_age) * 3600 + extract(minute from out_hb_age) * 60 + extract(second from out_hb_age)) outgoing_heartbeat_age,
outgoing_path, (extract(day from out_lag) * 86400 + extract(hour from out_lag) * 3600 + extract(minute from out_lag) * 60 + extract(second from out_lag)) outgoing_lag
FROM
(SELECT local_database, sys_extract_utc(systimestamp) current_local_ts, remote_database, (sys_extract_utc(systimestamp) - incoming_heartbeat_ts) in_hb_age,
(case when incoming_routing_path is null
then (' ' || trim(incoming_extract) || ' ==> ' || trim(incoming_replicat) || ' ')
else (' ' || trim(incoming_extract) || ' ==> ' || substr(trim(incoming_routing_path),1,3734) || ' ==> ' || trim(incoming_replicat) || ' ') end) incoming_path,
(incoming_replicat_ts - incoming_heartbeat_ts) in_lag, (sys_extract_utc(systimestamp) - outgoing_heartbeat_ts) out_hb_age,
(case when outgoing_extract is null then null else (case when outgoing_routing_path is null
then (' ' || trim(outgoing_extract) || ' ==> ' || trim(outgoing_replicat) || ' ')
else (' ' || trim(outgoing_extract) || ' ==> ' || substr(trim(outgoing_routing_path),1,3734) || ' ==> ' || trim(outgoing_replicat) || ' ') end) end) outgoing_path,
(outgoing_replicat_ts - outgoing_heartbeat_ts) out_lag
FROM ggate.gg_heartbeat) ;
—End Heartbeat View—
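Once the processes are running, checking lag becomes a simple query against the view; a minimal sketch, with column names taken from the view definition above:

SELECT incoming_path, incoming_lag, outgoing_path, outgoing_lag
FROM ggate.gg_lag;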
I like the views that are pre-packaged with the heartbeat setup; however, I think there is still additional information that, based on personal preference, is not gathered. All the other items that are created with ADD HEARTBEATTABLE are pretty standard database items.
Overall, the new heartbeat functionality is decent, but it leaves me a bit underwhelmed with respect to the functionality I would like to see as part of the heartbeat process. Hopefully, Oracle will flesh out more options for the heartbeat table in the near future to make it more of a statistics-gathering tool within the environment (i.e. DML/DDL deltas).
Filed under: Golden Gate
First we see the facelift in the Open-Dialog. It's now the typical Windows-Dialog, where you have much more flexibility.
New features in the Convert-Dialog:
- New Feature "Converting to and from XML"
- Checkbox "Overwrite"
- Checkbox "Keep window open", if you have to convert many files at once.
Preferences-Dialog, Tab "General":
- The checkbox "Hide PL/SQL Compile Dialog" is new
- Web Browser location (Forms 11g: Tab "Runtime")
- Compiler Output is new
Tab "Subclass": No changes
Tab "Wizards": No changes
- The checkbox "Show URL Parameters" is new
- Application Server URL is much bigger!
- The Web Browser location vanished to Tab "General"
Are you attending UKOUG Apps15 (#ukoug_apps15) or Tech15 (#ukoug_tech15)? If so, you are in luck! Once again we will run our ever popular scavenger hunt with a twist. From December 7-9 we will be at the ICC in Birmingham, UK working with the UKOUG team to give you a fun way to explore the conference and win prizes along the way.
If you’re not familiar with this game, it is as if we are stamping a card (or your arm!) every time you do a task. But instead, we are using IoT technologies: Raspberry Pi, NFC stickers, and social media to give you points. This time we will hold a daily drawing. You only need to enter once, but the more tasks you complete the more chances you have to win. Each day has different tasks.
This is a great way to discover new things, make friends, and enjoy the conference from a different perspective.
You can pre-register here http://bit.ly/UKOUG15Explorer or come over during registration so we can set you up. See you there!
Possibly Related Posts:
- OTN Community Quest
- Our Week at UKOUG
- The Week That Was Kscope15
- Wondering about the Raspberry Pi
- Game Mechanics of a Scavenger Hunt
At this year’s Web Summit in Dublin, Ireland, I had the opportunity to observe thousands of attendees. They came from 135 different countries and represented different generations.
Despite these enormous differences, they came together and communicated.
But how? With all of the hype about how different communication styles are among the Baby Boomers, Gen Xers, Millennials, and Generation Zers, I expected to see lots of small groupings of attendees based on generation. And I thought that session audiences would mimic this, too. But I could not have been more wrong.
How exactly, then, did speakers, panelists, and interviewers keep the attention of attendees in the 50+ crowd, the 40+ crowd, and the 20+ crowd while they sat in the same room?
The answer is far simpler than I could have imagined: Authenticity. They kept their messages simple, specific, honest, and in context of the audience and the medium in which they were delivering them.
Web Summit: Estée Lalonde (@EsteeLalonde) in conversation at the Fashion Summit session "Height, shoe size and Instagram followers please?"
Simplicity in messaging was key across Web Summit sessions: Each session was limited to 20 minutes, no matter whether the stage was occupied by one speaker or a panel of interviewees. For this to be successful, those onstage needed to understand their brands as well as the audience and what they were there to hear.
Attention spans are shortening, so it’s increasingly critical to deliver an honest, authentic, personally engaging story. Runar Reistrup of Depop said it well at the Web Summit:
Web Summit: Runar Reistrup (@runarreistrup) in conversation during the Fashion Summit session "A branding lesson from the fashion industry"
While lots of research, thought, and hard work goes into designing and building products, today’s brand awareness is built with social media. Users need to understand the story you’re telling but not be overwhelmed by contrived messaging.
People want to connect with stories and learn key messages through those stories. Storytelling is the important challenge of our age. And how we use each social medium to tell a story is equally important. Storytelling across mediums is not a one-size-fits-all experience; each medium deserves a unique messaging style. As Mark Little (@marklittlenews), founder of Storyful, makes a point of saying, "This is the golden age of storytelling."
The Oracle Applications User Experience team recognizes this significance of storytelling and the importance of communicating the personality of our brand. We take time to nurture connections and relationships with those who use our applications, which enables us to empathize with our users in authentic ways.
Web Summit: Áine Kerr (@AineKerr) talking about the art of storytelling
The Oracle simplified user interface is designed with consideration of our brand and the real people—like you—who use our applications. We want you to be as comfortable using our applications as you are having a conversation in your living room. We build intuitive applications that are based on real-world stories—yours—and that solve real-world challenges that help make your work easier.
We experiment quite a bit, and we purposefully “think as if there is no box.” (Maria Hatzistefanis, Rodial)
Web Summit: Maria Hatzistefanis (@MrsRodial) in conversation during the Fashion Summit session "Communication with your customer in the digital age"
We strive to find that authentic connection between the simplified user interface design and the user. We use context and content (words) to help shape and inform what message we promote on each user interface page. We choose the words we use, as well as the tone, carefully because we recognize the significance of messaging, whether the message is a two-word field label or a tweet. And we test, modify, and retest our designs with real users before we build applications to ensure that the designs respond to you and your needs.
If you want to take advantage of our design approach and practices, download our simplified user experience design patterns eBook for free and design a user experience that mimics the one we deliver in the simplified user interface. And if you do, please let us know what you think at @usableapps.
This marks the 1000th post to the Infogram. I am awarding myself a low-carb lollipop.
Little things to know about ... Oracle Partitioning (part one of hopefully many), from The Data Warehouse Insider.
Oracle VM Performance and Tuning - Part 4, from Virtually All The Time.
Changing Appearances: Give The Apps Your Corporate Look, from Fusion Applications Developer Relations.
Offloading row level security to Smart Scan, from SmartScan Deep Dive.
Patch Set Update: Oracle Data Relationship Management 11.1.2.4.321, from Business Analytics - Proactive Support.
Oracle and Adaptive Case Management: Part 1 , from SOA & BPM Partner Community Blog.
Remote debugging of nashorn scripts with NetBeans IDE using "debugger" statements, from A. Sundararajan's Weblog.
From the Oracle E-Business Suite Support blog:
EBS HCM December Updates To Employer Shared Responsibility Reporting Under The Affordable Care Act
From the Oracle E-Business Suite Technology blog:
WebCenter Portal 11.1.1.9 Certified with E-Business Suite 12.2
We ran a five day Oracle Security training event in York, England from September 21st to September 25th at the Holiday Inn hotel. This proved to be very successful and good fun. The event included back to back teaching by....[Read More]
Posted by Pete On 22/10/15 At 08:49 PM
I wrote a presentation on designing and building practical audit trails back in 2012 and presented it once and then never again. By chance I did not post the pdf's of these slides at that time. I did though some....[Read More]
Posted by Pete On 01/10/15 At 05:16 PM
Oracle Application Express is a great rapid application development tool where you can write your applications functionality in PL/SQL and create the interface easily in the APEX UI using all of the tools available to create forms and reports and....[Read More]
Posted by Pete On 21/07/15 At 04:27 PM
How does Oracle Security and Electronic mix together? - Well I started my working life in 1979 as an apprentice electrician in a factory here in York, England where I live. The factory designed and built trains for the national....[Read More]
Posted by Pete On 09/07/15 At 11:24 AM