DBA Blogs

Tuning ASM Rebalance Operation when using FLASH drives

VitalSoftTech - Tue, 2016-01-12 06:25
When disks are added to a diskgroup, the data on all the disks is spread out evenly. One of the last steps in this process is to COMPACT the data and place it on the outer tracks of the disks. This results in data being accessed faster due to the reduced seek time on the outer […]
Categories: DBA Blogs

David Bowie 1947-2016. My Memories.

Richard Foote - Mon, 2016-01-11 20:06
In mid-April 1979, as a nerdy 13-year-old, I sat in my bedroom in Sale, North England, listening to the radio when a song called “Boys Keep Swinging” came on by a singer called David Bowie, whom I had never heard of before. I instantly loved it and taped it the next time it came on the radio via my […]
Categories: DBA Blogs

Error when starting #GoldenGate Extract against MS SQL Server

DBASolved - Wed, 2016-01-06 16:10

If you work with Oracle GoldenGate long enough, you will eventually have to set it up against a Microsoft SQL Server. Since GoldenGate is a heterogeneous product, this isn't a problem; however, there are small differences. One such difference is how the extract/replicat connects to the MS SQL Server database.

In an Oracle-to-Oracle configuration, you would just run the following from the command line:

GGSCI> dblogin useridalias [ alias name]
or
GGSCI> dblogin userid [ user name ] password [ password ]

In a MS SQL Server environment, you can still login at the GGSCI command prompt with the following:

GGSCI> dblogin sourcedb [ dsn ]

You will notice the difference: the use of an ODBC DSN (data source name) entry. Although setting up the ODBC DSN is not the point of this post, keep in mind that it is required when connecting to MS SQL Server with Oracle GoldenGate.

After setting up the ODBC DSN, you will need to add the following to the extract/replicat parameter file to enable the process to connect to the database.

sourcedb [ dsn ]

Note: I normally put my connection information in a macro to modularize my parameter files. This makes it easier to change if it ever needs to.

MACRO #logon_settings
BEGIN
sourcedb [ dsn ]
END;
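
For example, an extract parameter file that defines and then invokes the macro might look roughly like the sketch below; the group name, trail path and TABLE entry are hypothetical:

EXTRACT EMSSQL
MACRO #logon_settings
BEGIN
sourcedb [ dsn ]
END;
#logon_settings()
EXTTRAIL ./dirdat/lt
TABLE dbo.orders;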

Now, when you go to start the extract/replicat, you may get the following error:

ERROR OGG-00551 Database operation failed: Couldn’t connect to [ dsn ]. ODBC error: SQLSTATE 37000 native database error 4060. [Microsoft][SQL Server Native Client 11.0][SQL Server]Cannot open database “db_name” requested by the login. The login failed.

The error message is a bit misleading. It says the process cannot connect to the database, yet you were able to connect from the GGSCI command prompt with no issue. Why is this? The issue is that the manager (MGR) process runs as a Windows service, and that service does not have the permissions needed to access the database.

Searching MOS for this error, I found Note ID 1633138.1. The note indicates that this issue is known as of Oracle GoldenGate version 12.1.2.x.x and also provides a fix. In simple terms, since the manager process is running as a service, additional permissions have to be granted to manager.

To grant the SYSADMIN privilege to the manager process, follow the sequence of steps below (this is Windows, after all):

1. With Manager installed as a service, open SQL Server Management Studio -> Security -> Logins -> select NT AUTHORITY\SYSTEM -> Right-click -> Properties -> Server Roles -> enable the sysadmin role

2. ggsci>stop mgr

3. ggsci>start mgr

4. ggsci>start extract <extract-name>

After granting the sysadmin role, the extract will start.
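
As an alternative to clicking through Management Studio in step 1, a single T-SQL statement should accomplish the same grant on SQL Server 2012 and later (this assumes the manager service runs as NT AUTHORITY\SYSTEM, as in the note):

ALTER SERVER ROLE sysadmin ADD MEMBER [NT AUTHORITY\SYSTEM];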

Enjoy!

about.me:http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Exporting solutions from #GoldenGate Studio

DBASolved - Tue, 2016-01-05 23:21

In doing some testing with Oracle GoldenGate Studio, I decided to create a test solution that can be moved from studio-to-studio. In order to move the test solution from studio-to-studio, it has to be exported first. This post will be about how to export a solution so it can be archived or shipped to a co-worker.

To export a solution, you will start in the Projects window. After opening the project, you will see a little red puzzle piece under the “Solutions”.

By right-clicking on the solution name, you are presented with a context menu that provides a few options for dealing with solutions within Oracle GoldenGate Studio. The option you are interested in is at the very bottom of the context menu. This is the export option.

After selecting the “export” option, studio will open a small wizard that allows you to provide information and options for the solution that is to be exported. Everything on the export screen can be edited; however, the only thing that should not be changed is the “Advanced Options”. Provide a directory where the export should reside and provide an encryption key (optional).

When everything is filled out as you want, click “ok” and the export will run. At the end of the export, which should be pretty quick, you will receive a message saying that the export completed.

Once the export is completed, you will find the XML file in the directory you specified in the export wizard. This XML file can be opened up with any text editor and reviewed. A sample of the XML content is provided below.

The beauty of this XML file is that everything created in studio is contained within it. This makes it very simple and easy to email to co-workers or others if they want to see the architecture being worked on, making collaboration on GoldenGate architectures easier.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Review: Oracle RAC Performance Tuning

RDBMS Insight - Sat, 2016-01-02 15:52

Some time ago, I received a free review copy of Brian Peasland‘s recent book, Oracle RAC Performance Tuning.

First, a note on my RAC background: I spent 7 years on Oracle’s RAC Support team. When customers had an intractable RAC performance issue, I was on the other end of the “HELP!” line until it was resolved.

I made Brian’s acquaintance through the MOS RAC Support forum, where Brian stood out as a frequent poster who consistently gave well-thought-out, correct and informative responses. So I had high expectations when I sat down with his book. And I haven’t been disappointed. This book is a terrific resource for single-instance DBAs looking to come up to speed on RAC. It’ll also be useful to more experienced RAC DBAs who want to deepen their knowledge or who just have a thorny performance problem to solve.

Many RAC books start out with an overview of RAC-specific physical architecture: the interconnect and the shared storage. Not this one. Brian leaps straight into what I consider the “hard” stuff: chapter 2 covers Cache Fusion and understanding RAC-specific wait events. I’ve spoken with many RAC DBAs who’d have a hard time telling me the difference between “gc cr block 2-way” and “gc current grant 3-way”. You really need to understand Oracle’s implementation of Cache Fusion to understand many of the RAC wait events, and Chapter 2 does a good job of explaining, using session tracing to step you through the waits. It might seem odd to start out with detailed explanations of wait events that many RAC DBAs will never see in the Top 10. But, a good understanding of Cache Fusion and the related wait events is really necessary to understand RAC-specific slowdowns. Subsequent chapters depend implicitly on this understanding: you can’t really understand interconnect tuning, for instance, unless you understand how the interconnect is used by Cache Fusion.

The book covers a full toolkit of testing utilities and tools as needed: Orion is introduced in the chapter on storage, and then a full chapter is devoted to the RAC Support Tools, another to AWR/ADDM/ASH, and another to benchmark utilities. There are also dozens of SQL scripts.

Another chapter to highlight is Chapter 14, a two-page summary at the end of the book that lists what Brian considers the central points. This is a mix of broad principles and RAC-specific “gotchas” that every RAC DBA should be aware of. I’d say that if you can read through Chapter 14 and say “I knew that” to each point, then you’ve got a good grasp of the essentials of RAC tuning.

Like others I’ve read in Burleson’s Oracle In-Focus series, this book would’ve benefited from a stronger copy editor. I was chagrined to see typos right on the back cover. But that’s a small quibble that doesn’t detract from an excellent book. If you’re a RAC DBA, this book deserves a place on your bookshelf.

Categories: DBA Blogs

LEVERAGE GEOGRAPHICALLY-DISTRIBUTED DEVELOPMENT

Kubilay Çilkara - Wed, 2015-12-23 14:44
As technology advances at warp speed, certain tech methodologies will fall by the wayside to make room for more advanced and efficient versions; how development projects are managed is a case in point.  Companies in every industrialized nation of the world are embracing Geographically-Distributed Development, or GDD, which has established itself as an impressive and proven IT strategy model.  Outdated procedures for administering virtually any type of development project have been limited to one or a few physical sites.  That was then; this is now.

Let’s take a look at the advantages that GDD offers:
decreased labor expenses
increased availability to skilled resources
reduced time-to-market, with round-the-clock flexible staffing
The beauty of GDD is that it allows enterprises, regardless of location, to respond to changes in business circumstances as they happen.  Any feedback can be presented, instantaneously, within a global framework.

In order for GDD to achieve its vast benefit potential, major barriers that might impede an enterprise’s success must be reduced to a minimum or entirely eliminated within the GDD strategy.  It is crucial that the increased communication and coordination expenses that arise at an international level within a globally-distributed market be uncovered and targeted.  If communication and coordination-specific expenses are allowed to flourish, the very benefits of GDD can be sorely compromised.  Various challenges must be reckoned with: 1) cultural variances, 2) language differences and 3) inaccessibility to time-sensitive information.  These can all jeopardize the progress of distributed projects.

GDD is oblivious to location; it is an IT strategy model without borders.  This allows development team-members to work collectively and cohesively within a city, across state lines or beyond continents.  A site or sites might be engaged with one particular software-development venture while one or more outsourcing companies work, simultaneously, towards the projected goal.  Outsourcing companies would contribute their efforts and expertise, like a fine-tuned engine, within the software’s project-development cycle.  Optimized efficiency and cost savings, via structured and coordinated local or global team-work, become refreshingly realized.

With GDD, thorough and clear communication is established between all team members and project coordination.  Business demands incorporate global-sourcing, service-oriented architecture, new compliance regulations, new development methodologies, reduced release cycles and broadened application lifetimes.  Because of this, highly-effective, unencumbered communication is mission-critical; and a necessity arises that begs for a solution that has the power to:
Provide management visibility of all change activities among distributed development teams 
Integrate and automate current change processes and best practices within the enterprise
Organize the distribution of dependent change components among platforms and teams
Protect intellectual property
Track and authenticate Service Level Agreements (SLAs)

Engaging an organization to efficiently manage and significantly optimize communication among all stakeholders in the change process is a priceless component of an Application Lifecycle Management (ALM) solution.  Multiple GDD locales present inherent challenges:  language and cultural divides, varying software-development methods, change-management protocol, security employment, adherence to industry mandates and client business requisites.  The good news is the ALM solution tackles these hurdles with ease!

Provide Management Visibility of all Change Activities among Distributed Development Teams

When a centralized repository allows for the viewing of all the activities, communications and artifacts that could be impacted by the change process, you have beauty in motion; and this is what ALM does.  Via ALM, users have the luxury of effortlessly viewing project endeavors by each developer, development group or project team--irrespective of location, platform and development setting.  This type of amenity becomes especially striking when one begins to compare this model-type with other distributed environments where work-in-progress is not visible across teams due to niche teams employing their own code repositories.

ALM provides the opportunity for development managers to not only track, but validate a project’s standing.  A project’s status can be verified which helps to guarantee the completion of tasks.  User-friendly dashboards will alert management if vital processes indicate signs of sluggishness or inefficiency.

ALM ensures that the overall development objectives will be met on a consistent basis.  This is accomplished through the seamless coordination between both remote and local development activities.  The ALM-accumulated data plays a crucial role with boosting project management, status tracking, traceability, and resource distribution.  Development procedures can be continually improved upon, thanks to generated reports that allow for process metrics to be collected and assessed.  Also, ALM allows regulatory and best-practices compliance to be effortlessly monitored and evaluated.  Compliance deals with structuring the applicable processes and creating the necessary reports.  ALM executes compliance strategy and offers visibility to the needed historical information, regardless of users’ geographic locations.

Integrate and Automate Current Change Processes and Best Practices within the Enterprise

In a perfect world, each and every facet of a company’s application development would be super easy; and with ALM it is.  By way of ALM, companies can establish defined, repeatable, measurable and traceable processes based on best practices, with absolute perfection.  User-friendly point-and-click set-up functions enable one to create a collection of authorized processes that automate task assignments and movement of application artifacts.

ALM permits the streamlining of change management by means of its simplicity when dealing with changes and necessary proceedings.  This in turn means changes can be analyzed and prioritized.  The approval management functions demand that official authorizations must be secured before any changes are permitted to go forth.  ALM’s automated logging functions totally un-complicate the tracking of software changes.  This is huge since changes can be tracked from the time a request is received up to the time when a solution is submitted to production.

Every member that is part of the global development team would be duly notified regarding any required assignments as well as any happenings that would have an impact on their efforts.

Organize the Distribution of Dependent Change Components among Teams and Platforms

It’s no secret that when there are changes within just one system of a cohesive enterprise, those changes can impact other systems.  ALM offers multi-platform support which ensures that modifications made on disparate platforms, by way of geographically-dispersed teams, can be navigated through the application lifecycle jointly.  A Bill of Materials Process, or BOMP, serves as an on-board feature that permits users to create file portfolios that incorporate characteristics from various platforms.  This means those portfolios can travel through the lifecycle as a unit.  Additionally, some ALM solutions absolutely ensure that the parts within the assemblies are positioned with the suitable platforms at each stage of the lifecycle.

Protect Intellectual Property

An ALM solution is the perfect component that allows for access and function control over all managed artifacts.  Managers are in a position to easily determine and authorize any access to vital intellectual property due to ALM functioning on a role-based control system.  The role-based structure means administrative operations are streamlined which permits any system administrator to bypass assigning individual rights to a single user.  Additionally, a role-based system delivers a clearly-defined synopsis of access rights between groups of individuals.

Track and Authenticate Service Level Agreements

The overall project plan, absolutely, must remain on schedule while administering accountability for established deliveries; and this can be easily realized through ALM’s ability to track and authenticate tasks and processes.  The ALM solution caters to satisfying Service Level Agreement (SLA) requirements within an outsourcing contract.  As a result, project management is enhanced by ensuring performance of specific tasks.  Optimizing the user’s ability to track emphasized achievements is made possible due to the consistency between tasks that have been assigned to developers and tasks that are part of the project plan.   Invaluable ALM-generated reports will track response and resolution times; and service-level workflows automate service processes and offer flexibility.  This translates into an acceleration of processes to the respective resources to meet project deadlines.  The ability to track performance against service-level agreements is made possible due to the availability of reports and dashboards that are at one’s fingertips.

Enhance Your Geographically-Distributed Development

As stated, ALM is beauty in motion; and aside from promoting perfected levels of communication and coordination, it utilizes management strategies designed to plow through any obstructions that have the potential to compromise success.  ALM’s centralized repository is purposed to present multiple ideas, designs, dialogue, requirements, tasks and much more to team-members who would require or desire instant access to data.  Development procedures and tasks can be precisely and efficiently automated and managed due to ALM’s cohesive workflow capabilities.  All vital intellectual property is embedded and safeguarded in a central repository.  Due to this caliber of reinforced protection, loss and unauthorized access are effectively eliminated.  When remote software development is in-sync with local development, project management becomes seamless, highly-coordinated and error-free.  Integration of the monitoring, tracking and auditing of reports and dashboards means management can successfully satisfy project deadlines.  It would behoove any enterprise that wishes to reap the rewards of GDD to fully embrace ALM as its solution; it is truly mission-critical.

Application Lifecycle Management Solutions

Application lifecycle management programs are able to easily deliver a plethora of enterprise software modifications and configuration management facilities.  ALM solutions have the ability to support the needs of multiple groups of geographically-distributed developers.  Business process automation services, designated to automate and enforce on-going service delivery processes throughout enterprise organizations, are a vital component of ALM solutions.  Within those groups of geographically-distributed developers, the product continues to reveal the magnitude of its talents, since it:
targets permission-based assignment and enforcement services
caters to role-based interfaces which support developers, software engineers, project managers, IT specialists, etc.
delivers enterprise application inventory-management services
oversees and coordinates large software inventories and configurations
guards user access
manages differing versions of application code
supports the existence of concurrent development projects
coordinates a diversity of release management facilities

Mike Miranda is a writer concerning topics ranging from Legacy modernization to Application life cycle management, data management, big data and more



Categories: DBA Blogs

Fast, Flexible and Near-Zero Admin – Next Gen MySQL offerings with Google Cloud SQL

VitalSoftTech - Wed, 2015-12-16 00:22
As technology continues to advance, one of the greatest innovations within information technology has been the Cloud. Useful in businesses and organizations throughout the world, it has revolutionized the way information technology works. As a result, many companies can now focus more on building applications rather than configuring replications, applying patches and updates, and managing […]
Categories: DBA Blogs

Oracle Apex 5.0 and APEX_JSON

Kubilay Çilkara - Sat, 2015-12-12 04:09
How many lines of code does it take to make a web service call? Answer: 39

That is how many lines of PL/SQL I had to write in Oracle Apex 5.0 to make a web service call to an external API.

I used Adzuna's REST API to retrieve the latitude and longitude and the price of 2 bed properties for rent in a specific location in UK. The API returns JSON which the APEX_JSON package is able to parse easily. Adzuna is a property search engine which also provides aggregate data for properties in various countries around the world.

I think the native APEX_JSON package in Oracle Apex 5.0 is very useful and makes integrating web services into your Oracle Apex applications very easy. Here is an application I created in a matter of hours which shows you average rents for properties in a location of your choice in the UK.

Here is the link for the app: http://enciva-uk15.com/ords/f?p=174:1

And here is the code:


If you want to run this as-is in your SQL Workshop, make sure you replace {myadzunaid:myadzunakey} in the code with your Adzuna app_id and app_key, which you can obtain from the Adzuna website https://developer.adzuna.com/, as I have removed mine from the code. They also have very good interactive API documentation here: http://api.adzuna.com/static/swagger-ui/index.html#!/adzuna


create or replace procedure get_rent_data(p_where in varchar2, p_radius in number, p_room in number)
is
-- raw JSON response and its parsed representation
v_resp_r clob;
j apex_json.t_values;
l_paths apex_t_varchar2;
-- values extracted for each property in the result set
v_id varchar2(50);
v_lat decimal(9,6);
v_lon decimal(9,6);
v_rent number(10);

begin
-- http housekeeping
apex_web_service.g_request_headers(1).name  := 'Accept'; 
apex_web_service.g_request_headers(1).value := 'application/json; charset=utf-8'; 
apex_web_service.g_request_headers(2).name  := 'Content-Type'; 
apex_web_service.g_request_headers(2).value := 'application/json; charset=utf-8';

v_resp_r := apex_web_service.make_rest_request 
      ( p_url => 'http://api.adzuna.com:80/v1/api/property/gb/search/1' 
      , p_http_method => 'GET' 
      , p_parm_name => apex_util.string_to_table('app_id:app_key:where:max_days_old:sort_by:category:distance:results_per_page:beds') 
      , p_parm_value => apex_util.string_to_table('{myadzunaid:myadzunakey}:'||p_where||':90:date:to-rent:'||p_radius||':100:'||p_room||'') 
      );
-- parse json
apex_json.parse(j, v_resp_r);


-- start looping on json
l_paths := apex_json.find_paths_like (
        p_values      => j,
        p_return_path => 'results[%]',
        p_subpath     => '.beds',
        p_value       => to_char(p_room) );  -- filter on the requested bedroom count rather than a hard-coded '2'
        
for i in 1 .. l_paths.count loop
       v_id := apex_json.get_varchar2(p_values => j, p_path => l_paths(i)||'.id'); 
       v_rent := apex_json.get_varchar2(p_values => j, p_path => l_paths(i)||'.price_per_month'); 
       v_lat := apex_json.get_varchar2(p_values => j, p_path => l_paths(i)||'.latitude');
       v_lon := apex_json.get_varchar2(p_values => j, p_path => l_paths(i)||'.longitude');

-- debug print values to page
 htp.p(v_id||'-'||v_lat||','||v_lon||'Rents : £'||v_rent);

end loop;

END;
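
To try the procedure yourself, a call along these lines from an APEX page process or region (anywhere the htp output is rendered) should work; the location, radius and bedroom count below are just sample values:

begin
  -- sample call: 2-bed rentals around a UK location
  get_rent_data(p_where => 'London', p_radius => 5, p_room => 2);
end;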

Categories: DBA Blogs

Five Questions to Ask Before Purchasing a Data Discovery Tool

Kubilay Çilkara - Thu, 2015-12-10 17:14
Regardless of what type of industry or business you are involved in, the bottom-line goal is to optimize sales; and that involves replacing any archaic tech processes with cutting-edge technology and substituting any existing chaos with results-driven clarity.
Data discovery tools, as a business-intelligence architecture, create that clarity through the incorporation of a user-driven process that searches for patterns or specific items in a data set via interactive reports.  Visualization is a huge component of data discovery tools.  One can merge data from multiple sources into a single data set from which one can create interactive, stunning dashboards, reports and analyses.  The user is able to observe data come to life via striking visualizations.  Furthermore, business users want to perform their own data analysis and reporting—with a data discovery tool they can!  After it’s all said and done, smarter business decisions are generated; and that drives results.
Before purchasing a data discovery tool, however, several questions should be addressed:

1: What About Data Prep?

It’s important to realize that there are companies who will claim their data-discovery products are self-service; but keep in mind that many of the products will necessitate a data prep tool in order to access the data to be analyzed.  Preparing data is challenging; and if a data prep tool is not included, one must be purchased.  Choose a data discovery tool that enables data prep to be handled without any external support.
As a side note:  governed self-service discovery provides easy access to data from IT; and an enterprise discovery platform will give IT full visibility to the data and analytic applications while it meets the business’s need for self-service.  Business users embrace the independence they are empowered with to upload and combine data on their own.  

2:  Is Assistance from IT Required for Data Discovery Tools?

Business users desire the ability to prepare their own, personal dashboards and explore data in new ways without needing to heavily rely on IT.  Data discovery tools do not require the intervention of IT professionals, yet the relationship with IT remains.  Data discovery tools empower the business to self-serve while maintaining IT stewardship.  Data discovery tools allow users to directly access data and create dashboards that are contextual to their needs—when they need it and how they need it!  This, in turn, reduces the number of requests for reports and dashboards from IT staff and allows those professionals to focus more intently on development projects and system improvements.  Software solutions that support data discovery, such as business intelligence platforms with innovative visualization capabilities, are zealously supported by non-technical business users since they can perform deep, intuitive analysis of any enterprise information without reliance on IT assistance.
      
3:  Will Data Discovery Tools Allow One to Interact with the Data?

The fun thing is, you can play with the data to the point of being able to create, modify and drill down on a specific display.  A beautiful feature of data discovery tools is the interactive component which permits one to interact with corporate data sources visually to uncover hidden trends and outliers.  Data discovery facilitates intuitive, visual-based and multivariate analysis via selecting, zooming, pivoting, and re-sorting to alter visuals for measurement, comparisons and observation.

4:  Are Data Discovery Tools Intended for Enterprise Use?

Enabling the business to self-serve while maintaining IT stewardship produces decisions the enterprise can rely on.  Data discovery tools are invaluable for enterprise use—organizations can plan their approach to incorporate data discovery tools into their infrastructure and business practice.
Data discovery tools allow one to retrieve and decipher data from spreadsheets, departmental databases, enterprise data warehouse and third-party data sources more efficiently than ever!  Multidimensional information can be transformed into striking graphical representations—3D bar and pie charts, histograms, scatter plots and so much more!  Data discovery tools deliver enterprise solutions within the realms of business information and analytics, storage, networks & compliance, application development, integration, modernization and database servers and tools.  

5:  With Data Discovery Tools Can I Retrieve Answers At any Time?

Data discovery tools will allow you to make inquiries and get answers quickly and seamlessly.  Geographic location will make no difference since files can be loaded on a laptop or even a mobile phone or other mobile devices.  With a few clicks, you can unlock all your data from servers, a mainframe or a PC. 
Multiple charts, graphs, maps and other visuals can, all, be combined in analytic dashboards and interactive apps.  Answers to crucial questions and issues can be quickly unveiled.  Analysts can share the data, with ease, among all users via the web and mobile devices—all operating like a fine-tuned engine—anytime, anywhere.       

Data discovery tools are changing business intelligence!

Mike Miranda writes about enterprise software and covers products offered by software companies like rocket software about topics such as Terminal Emulation, Legacy Modernization, Enterprise Search, Big Data, Enterprise Mobility and more.
Categories: DBA Blogs

Top 13 Preferences to Tweak when using SQL Developer 4

VitalSoftTech - Mon, 2015-12-07 10:02
Support for customizing SQL Developer 4 continues to grow. Tweaking has made the platform extremely user-friendly. Of course, personalizing the platform requires a little research. But if you want to maximize SQL development, you need to explore the possibilities.
Categories: DBA Blogs

What to expect at the NextGen Cloud Conference and Why you should attend it?

VitalSoftTech - Wed, 2015-12-02 23:32
The NexGen Cloud Conference is an annual gathering of vendors, innovators and technology users that can help you quickly come up to speed with the latest technologies and create value for you and your company.
Categories: DBA Blogs

Table Definitions in Oracle #GoldenGate #12c Trail Files

DBASolved - Sat, 2015-11-28 09:07

Oracle GoldenGate 12c (12.2.0.1.0) has changed the information that is stored in the trail files. All the standard information is still there. What Oracle changed has to do with the metadata that is used to define a table.

Note: If you want to understand how to use logdump and general trail information, look here.

Prior to 12.2.0.1.0 release of Oracle GoldenGate, if the column order of tables between source and target were different, you needed to generate a “definition” file using the “defgen” utility located in $OGG_HOME. This file allowed you to specify either a source or target definitions file which could be used to map the order of columns correctly. This was a nice tool when needed.
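
As a refresher, a pre-12.2 defgen run typically consisted of a small parameter file plus one command; a minimal sketch, with hypothetical file paths, credential alias and table name, might look like this:

-- ./dirprm/defgen.prm
DEFSFILE ./dirdef/source.def
USERIDALIAS gguser
TABLE soe.orders;

Then, from $OGG_HOME:

./defgen paramfile ./dirprm/defgen.prm

The generated source.def file would then be copied to the target and referenced with SOURCEDEFS in the replicat parameter file.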

In 12.2.0.1.0, Oracle took this concept a little bit further. Instead of using a definitions file to do the mapping between source and target tables, Oracle has started to provide this information in the trail files. Review the image below, and you will see the table definition for SOE.ORDERS, which I run in my test environment.

Notice at the top that the general header information is still available for view. Directly under that, you will see a line that has the word “metadata” in it. This is the start of the “metadata” section. Below this is the name of the table and a series of numbered categories (keep this in mind). Then below this is the definition of the table with the columns and the length of the record.

A second ago, I mentioned the “numbered categories”. The categories correspond to the information defined to the right of the columns defined for the table. When comparing the table/columns between the database and trail file, a few things stand out.

In column 2 (Data Types), the following numbers correspond to this information:

134 = NUMBER
192 = TIMESTAMP (6) WITH LOCAL TIME ZONE
64 = VARCHAR2

In column 3 (External Length), is the size of the data type:

13 = NUMBER(12,0) + 1
29 = Length of TIMESTAMP (6) WITH LOCAL TIME ZONE
8 = VARCHAR2 length of 8
15 = VARCHAR2 length of 15
30 = VARCHAR2 length of 30

There is more information that stands out, but I’ll leave a little bit for you to decode. Below is the table structure that is currently mapped to the example given so far.

Now, you may be wondering, how do you get this information to come up in the logdump interface? Oracle has provided a logdump command that is used to display/find metadata information. This command is:

SCANFORMETADATA (SFMD)

There are a few options that can be passed to this command to gather specific information. These options are:

DDR | TDR
NEXT | INDEX
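
Before issuing either form of the command, you would open the trail file in logdump as usual; a minimal session, with a hypothetical trail file name, might look like this:

Logdump 1> OPEN ./dirdat/lt000000000
Logdump 2> GHDR ON
Logdump 3> DETAIL DATA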

If you issue:

SCANFORMETADATA DDR

You will get information related to Data Definition Records (DDR) of the table. Information this provides includes the following output:

If you issue:

SCANFORMETADATA TDR

You will get information related to the Table Definition Record (TDR) for the table. The information provided includes the output already discussed earlier.

As you can tell, Oracle now provides, directly in the trail files, a lot of the information that traditionally lived in definitions files for mapping tables. This will make mapping data between systems a bit easier and architectures less complicated.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Why Data Virtualization Is so Vital

Kubilay Çilkara - Tue, 2015-11-24 16:35
In today’s day and age, it probably seems like every week you hear about a new form of software you absolutely have to have. However, as you’re about to find out, data virtualization is actually worth the hype.

The Old Ways of Doing Things

Traditionally, data management has been a cumbersome process, to say the least. Usually, it means data replication, data management or using intermediary connectors and servers to pull off point-to-point integration. Of course, in some situations, it’s a combination of the three.

Like we just said, though, these methods were never really ideal. Instead, they were just the only options given the complete lack of alternatives available. That's the main reason you're seeing these methods less and less. The moment something better came along, companies jumped all over it.
However, their diminishing utility can also be traced to three main factors. These would be:

  • High costs related to data movement
  • The astronomical growth in data (also referred to as Big Data)
  • Customers that expect real-time information
These three elements are probably fairly self-explanatory, but that last one is especially interesting to elaborate on. Customers these days really don’t understand why they can’t get all the information they want exactly when they want it. How could they possibly make sense of that when they can go online and get their hands on practically any data they could ever want thanks to the likes of Google? If you’re trying to explain to them that your company can’t do this, they’re most likely going to have a hard time believing you. Worse, they may believe you, but assume that this is a problem relative to your company and that some competitor won’t have this issue.

Introducing Data Virtualization

It was only a matter of time before this problem was eventually addressed. Obviously, when so many companies are struggling with this kind of challenge, there’s quite the incentive for another one to solve it.

That’s where data virtualization comes into play. Companies that are struggling with having critical information spread out across their entire enterprise in all kinds of formats and locations never have to worry about the hardships of having to get their hands on it. Instead, they can use virtualization platforms to search out what they need.

Flexible Searches for Better Results

It wouldn’t make much sense for this type of software to not have a certain amount of agility built in. After all, that’s sort of its main selling point. The whole reason companies invest in it is because it doesn’t get held back by issues with layout or formatting. Whatever you need, it can find.

Still, for best results, many now offer a single interface that can be used to separate and extract aggregates of data in all kinds of ways. The end result is a flexible search which can be leveraged toward all kinds of ends. It's no longer about being able to find any type of information you need, but finding it in the most efficient and productive way possible.

Keep Your Mainframe

One misconception that some companies have about data virtualization is that it will need certain adjustments to be made to your mainframe before it can truly be effective. This makes sense because, for many platforms, this is definitely the case. These are earlier versions, though, and some that just aren’t of the highest quality.

With really good versions, though, you can basically transform your company’s mainframe into a virtualization platform. Such an approach isn’t just cost-effective. It also makes sure you aren’t wasting resources, including time, addressing the shortcomings of your current mainframe, something no company wants to do.

Don’t get turned off from taking a virtualization approach to your cache of data because you’re imagining a long list of chores that will be necessary for transforming your mainframe. Instead, just be sure you invest in a high-end version that will actually transform your current version into something much better.

A Better Approach to Your Current Mainframe

Let’s look at some further benefits that come from taking this approach. First, if the program you choose comes with the use of a high-performance server, you’ll immediately eliminate the redundancy of integrating from point-to-point. This will definitely give you better performance in terms of manageability. Plus, if you ever want to scale up, this will make it much easier to do so.

Proper data migration is key to a good virtualization process. If it is done right, the end user won't have to worry about corrupted data, and communication between machines will be crystal clear.
If you divert processing-intensive data mapping and transformation work away from the General Purpose Processor of your mainframe to the zIIP specialty engine, you'll dramatically reduce your MIPS capacity usage and, therefore, also reduce your company's TCO (Total Cost of Ownership).

Lastly, maybe you'd like to exploit every last piece of value you derive from your mainframe data. If so, good virtualization software will not only make this possible, but will do so in a way that lets you turn all of your non-relational mainframe data into relational formats that any business analytics or intelligence application can use.

Key Features to Look for in Your Virtualization Platform

If you’re now sold on the idea of investing in a virtualization platform, the next step is getting smart about what to look for. As you can imagine, you won’t have trouble finding a program to buy, but you want to make sure it’s actually going to be worth every penny.

The first would be, simply, the amount of data providers available. You want to be able to address everything from big data to machine data to syslogs, distributed and mainframe. Obviously, this will depend a lot on your current needs, but think about the future too.

Then, there’s the same to think about in terms of data consumers. We’re talking about the cloud, analytics, business intelligence and, of course, the web. Making sure you will be able to stay current for some time is very important. Technology changes quickly and the better your virtualization process is the longer you’ll have before having to upgrade. Look closely at the migration process, and whether or not the provider can utilize your IT team to increase work flow. This will help you company get back on track more quickly and with better results.

Finally, don’t forget to look at costs, especially where scalability is concerned. If you have plans of getting bigger in the future, you don’t want it to take a burdensome investment to do so.
As you can see, virtualization platforms definitely live up to the hype. You just have to be sure you spend your money on the right kind.

Mike Miranda writes about enterprise software and covers products offered by software companies like Rocket software about topics such as Terminal emulation,  Enterprise Mobility and more.
Categories: DBA Blogs

Comment on The Collection in The Collection by lotharflatz

Oracle Riddle Blog - Mon, 2015-11-23 21:32

Hi Bryn,
thanks for replying. You are raising an important point here by suggesting to do the join in SQL rather than in PL/SQL. However, as I wrote, I wanted two loops rather than one. In your solution you are replacing the other loop with an IF checking for the change of the department number. I was aiming for the employees nested as a collection in the departments. Well, and you don't need to bother to limit the bulk collect (you can, if you like).

Like

Categories: DBA Blogs

Comment on The Collection in The Collection by Bryn

Oracle Riddle Blog - Mon, 2015-11-23 19:07

I had to tidy up the example to impose proper style (like adding “order by”) and to make it do something:

declare
  cursor c1 is
    select d.Deptno, d.Dname from Dept d order by d.Deptno;
  cursor c2 (Deptno Dept.Deptno%type) is
    select e.Empno, e.Ename from Emp e where e.Deptno = c2.Deptno order by e.Empno;
begin
  for r1 in c1 loop
    DBMS_Output.Put_Line(Chr(10)||r1.Deptno||' '||r1.Dname);
    for r2 in c2(r1.Deptno) loop
      DBMS_Output.Put_Line(' '||r2.Empno||' '||r2.Ename);
    end loop;
  end loop;
end;

It’s performing a left outer join using nested loops programmed in PL/SQL. Here is the output:

10 ACCOUNTING
7782 CLARK
7839 KING
7934 MILLER

20 RESEARCH
7369 SMITH
7566 JONES
7788 SCOTT
7876 ADAMS
7902 FORD

30 SALES
7499 ALLEN
7521 WARD
7654 MARTIN
7698 BLAKE
7844 TURNER
7900 JAMES

40 OPERATIONS

Here’s the SQL:

select Deptno, d.Dname, e.Empno, e.Ename
from Dept d left outer join Emp e using (Deptno)
order by Deptno, e.Empno

Programming a join in PL/SQL is one of the famous crimes of procedural guys who are new to SQL.

We can simply bulk collect this — using the “limit” clause if called for.

I have to assume that the “complex logic” does something for each row in the driving master Dept loop and then, within that, something for each child Emp row within each master. This is like the SQL*Plus “break” report of old. So is the question actually “How to program ‘break’ logic?”

Here you are:

declare
  type Row_t is record(
    Deptno Dept.Deptno%type not null := -1,
    Dname  Dept.Dname%type,
    Empno  Emp.Empno%type,
    Ename  Emp.Ename%type);
  type Rows_t is table of Row_t index by pls_integer;
  Rows        Rows_t;
  Prev_Deptno Dept.Deptno%type not null := -1;
begin
  select Deptno, d.Dname, e.Empno, e.Ename
  bulk collect into Rows
  from Dept d left outer join Emp e using (Deptno)
  order by Deptno, e.Empno;

  for j in 1..Rows.Count loop
    if Rows(j).Deptno <> Prev_Deptno then
      DBMS_Output.Put_Line(Chr(10)||Rows(j).Deptno||' '||Rows(j).Dname);
      Prev_Deptno := Rows(j).Deptno;
    end if;
    if Rows(j).Empno is null then
      DBMS_Output.Put_Line(' No employees');
    else
      DBMS_Output.Put_Line(' '||Rows(j).Empno||' '||Rows(j).Ename);
    end if;
  end loop;
end;

Here is the output:

10 ACCOUNTING
7782 CLARK
7839 KING
7934 MILLER

20 RESEARCH
7369 SMITH
7566 JONES
7788 SCOTT
7876 ADAMS
7902 FORD

30 SALES
7499 ALLEN
7521 WARD
7654 MARTIN
7698 BLAKE
7844 TURNER
7900 JAMES

40 OPERATIONS
No employees

Now tell me what in your question I’m failing to grasp.

Like

Categories: DBA Blogs

ORA-27101 Shared Memory Realm does not exist

VitalSoftTech - Thu, 2015-11-19 23:28
What is the cause of the "ORA-27101 Shared Memory Realm does not exist"? How do I resolve this?
Categories: DBA Blogs

Lost SYSMAN password OEM CC 12gR5

Hans Forbrich - Thu, 2015-11-19 10:42
I run my own licensed Oracle products in-house.  Since it is a very simple environment, largely used to learn how things run and verify what I see at customer sites, it is not very active at all.  But it is important enough to me to keep it maintained.

After a bit of a hiatus in looking at the OEM, which is at 12cR5 patched, I went back and noted that I was using the wrong password.  No problem, I thought: since OMS uses VPD and database security, just change the password in the database.

While I'm there, might as well change the SYSMAN password as well, since I have a policy of rotated passwords.

A few things to highlight (as a reminder for next time):


  • Use the right emctl.  There is an emctl for the OMS, the AGENT and the REPO DB.  In this case, I've installed the OMS under middleware, therefore  
    • /u01/app/oracle/middleware/oms/bin/emctl
  • Check the repository and the listener
  • Start the OMS.  
    • If the message is "Failed to connect to repository database. OMS will be automatically restarted once it identifies that database and listener are up." there are a few possibilities:
      • database is down
      • listener is down
    • If the message is "Connection to the repository failed. Verify that the repository connection information provided is correct." check whether 
      • SYSMAN password is changed or 
      • SYSMAN is locked out
  • To change the sysman password:
    • In database 
      • sqlplus / as sysdba
      • alter user SYSMAN identified by new_pwd account unlock;
    • In oms
      • ./emctl stop oms
      • ./emctl config oms -change_repos_pwd -use_sys_pwd -sys_pwd sys_pwd -new_pwd new_pwd
      • ./emctl stop oms 
      • ./emctl start oms
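
Once the OMS reports as started, a quick status check from the same OMS home (using the path noted above) is a reasonable sanity check:

/u01/app/oracle/middleware/oms/bin/emctl status oms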
And test it out using the browser ...
Categories: DBA Blogs

How Terminal Emulation Assists Easy Data Management

Kubilay Çilkara - Wed, 2015-11-18 21:25
Just about every company will need terminal emulation at some point. Yours may not need it now, but as time goes on, the day will come when you need information that can only be accessed with an emulator. This software allows a computer to basically take on the functionality of an older version. Doing so makes it possible for the user to access data that would otherwise be impossible to find. If you’re not sold on the benefits of using this type of software, consider the following ways that it assists with making data management easier.

Obtain Data from Outdated Systems

This is the most obvious way a terminal emulator helps with data management. Right now, you could have all kinds of data out of reach because it was created with software you no longer use or is stored on a platform that differs from the current one on your system.

Without an emulator, you’re basically without options. You simply have to find a workaround. The best solution would involve using a machine with the old operating system installed. This isn’t just extremely inconvenient though; it also isn’t very cost-effective and is sure to become a bottleneck sooner or later.

With terminal emulation, no data ever has to be out of reach. Whether its information from 10 years ago or 20, you can quickly retrieve whatever it is you need.

Access Multiple Terminals at Once

There's far more these applications can do to assist with data management though. Over time, your company has probably gone through multiple platforms, or someday will. This means that going back and grabbing the data you need could involve more than one system. If you tried using the aforementioned workaround, you'd be in for a huge challenge. It would take multiple computers, each with a specific operating system, and you'd have to move between them to get the information you need or cross-reference it as necessary.

Modern emulators can access as many terminals as you need, all on the same computer. Usually, it just involves putting each one on a separate tab. Not only can you get all the info you need, you can see it on one screen all at once. This makes it extremely easy to cross-reference data from one system against others.

Customize Your Queries

Another great benefit that comes with terminal emulation is that you can actually customize your searches to various degrees. For many companies, accessing old data means looking at screens that represent the info in the most rudimentary of ways. There may only be a few colors used for fonts on a black screen. Obviously, this can make data management a bit of a challenge, to say the least.
With the right software, though, you can control the font color, the character size, background and more. This makes it much easier to see the information you want, highlight it and otherwise keep track of the data. Never again suffer through old screens from outdated platforms when you need something.

Mobile Functionality

Like just about everything these days, terminal emulators have now gone mobile. You can now pull off emulation from just about anywhere in the world on your mobile device. This is a great way to make emulation possible for any work at home employees who may be working for your company. If you hire on a consultant or freelance professional from somewhere outside the office, mobile functionality means they can now benefit from easy data management. Next time you’re on a business trip and need to access information from yesteryear, the ability to do so will be at your fingertips.

Mobile functionality may not seem like the most important aspect to have as far as emulation is concerned, but it’s better to have the option there than wish it was possible.

Save Money

Data management can be a costly endeavor. Unfortunately, it’s one of those costs your company really can’t hope to avoid. You need to manage your data, so you better find room in your budget. With terminal emulation, you can at least save money on this essential process.

Like we brought up earlier, without an emulator, you’re looking at a lot of hardware in order to make sure vital data is never out of reach, to say nothing of how inconvenient that option would be. You’re also looking at the potential costs of losing that data if anything happens to your dedicated machines. Furthermore, added hardware always comes with extra costs. There’s the space you need, electricity, IT support and more.

In light of those added expenses, simply using emulation software should be a no-brainer. You don’t need any extra hardware and these platforms are designed to stay out of the way until you need them, so they won’t hurt your staff’s ability to use their current machines.

Limitless Scalability

No matter what kind of software we’re talking about, scalability always needs to be a concern. Today, your company may only have so much data to manage. A year from now, though, there’s going to be a lot more. Five years from now, your company’s collection of data will be expansive.
Obviously, managing this data efficiently is going to take a resource that can scale without difficulty. Keep in mind that a lot of companies increase their amount of data exponentially. This means scalability is necessary, but so is being able to scale at a moment’s notice to whatever size is needed.
Terminal emulation and scalability are virtually inseparable when you have the right software. No alternative solution is going to be able to keep up. Again, if you tried using hardware to manage your data, you can forget about easy scalability and forget about doing so without spending a lot of money in the process.


Data management needs to be a priority for every organization, no matter what industry they’re in. However, simple data management isn’t enough anymore. Instead, you need emulation software that will make it easy, cost-effective and scalable. Otherwise, your business will always be greatly limited in what information it can access and the benefits that could be derived from it.


Mike Miranda writes about enterprise software and covers products offered by software companies like Rocket software about topics such as Terminal emulation,  Enterprise Mobility and more.
Categories: DBA Blogs

Oracle SPARC M7 Processor Breaks the “Sacrifice Performance for Security” Rule and Much More

VitalSoftTech - Mon, 2015-11-16 04:20
Oracle Executive Vice President John Fowler used the occasion of Open World 2015 to introduce Oracle's SPARC M7 processor, the fastest in the world. Learn more ...
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator - DBA Blogs