Feed aggregator

VirtualBox 5.1.4

Tim Hall - Wed, 2016-08-17 09:04

VirtualBox 5.1.4 has been released.

Downloads and changelog in the usual places.

I’ve upgraded on my Windows 7 office PC. I’ll give it a go on El Craptian and Oracle Linux 6 when I get home.

Cheers

Tim…

Update. Just done the upgrade on my Mac and Linux server. Worked fine.


Quick note on using nc (netcat)

Michael Dinh - Wed, 2016-08-17 07:43

Due to security hardening, telnet is not available.

$ rpm -q --whatprovides nc
nc-1.84-24.el6.x86_64

$ yum list nc
Loaded plugins: refresh-packagekit, security
epel/metalink                                                                                                           |  11 kB     00:00
epel                                                                                                                    | 4.3 kB     00:00
epel/primary_db                                                                                                         | 5.9 MB     00:03
public_ol6_UEKR3_latest                                                                                                 | 1.2 kB     00:00
public_ol6_latest                                                                                                       | 1.4 kB     00:00
Installed Packages
nc.x86_64                                                    1.84-24.el6                                                     @public_ol6_latest

$ nc -v -z -w 3 stackoverflow.com 80; echo $?
Connection to stackoverflow.com 80 port [tcp/http] succeeded!
0

Send SMS, MMS and Voice messages from Oracle PL/SQL

Jeff Kemp - Wed, 2016-08-17 02:31

testing receipt of sms and mms

If you need to send almost any message to almost any phone from your Oracle database, and you want to use straight PL/SQL, you may want to consider using my Clicksend API.

  • SMS (Short Message Service)
  • MMS (Multimedia Message Service)
  • Text to Voice

I have released the first beta version of my Oracle PL/SQL API for Clicksend. Read the installation instructions, API reference and download the release from here:

https://github.com/jeffreykemp/clicksend-plsql-api

Sending an SMS is as simple as adding this anywhere in your code:

begin
  clicksend_pkg.send_sms
    (p_sender  => 'Mr Tester'
    ,p_mobile  => '+61411111111'
    ,p_message => 'Hi, message sent at '
     || to_char(systimestamp,'DD/MM/YYYY HH24:MI:SS.FF')
    );
  clicksend_pkg.push_queue;
  commit;
end;

All you need to do is sign up for a Clicksend account. You’ll only be charged for messages actually sent, but they do require you to pay in advance – e.g. $20 gets you about 300 messages (Australian numbers). You can get test settings so that you can try it out for free.

I’ve been using Clicksend for years now, and have been satisfied with their service and the speed and reliability of getting messages to people’s mobiles. When I encountered any issues, a chat with their support quickly resolved them, and they were quick to offer free credits when things weren’t working out as expected.

If you want to send a photo to someone’s phone via MMS (although I’m not sure what the use-case for this might be), you need to first upload the image somewhere online, because the API only accepts a URL. In my case, I would use the Amazon S3 API from the Alexandria PL/SQL Library, then pass the generated URL to the Clicksend API. ClickSend does provide a file upload feature; I plan to add an API call to take advantage of this, which will make the process seamless – and provide some file conversion capabilities as well.

begin
  clicksend_pkg.send_mms
    (p_sender  => 'Mr Tester'
    ,p_mobile  => '+61411111111'
    ,p_subject => 'testing mms'
    ,p_message => 'testing '
     || to_char(systimestamp,'DD/MM/YYYY HH24:MI:SS.FF')
    ,p_media_file_url =>
'https://s3-ap-southeast-2.amazonaws.com/jk64/jk64logo.jpg'
    );
  clicksend_pkg.push_queue;
  commit;
end;

You can send a voice message to someone (e.g. if they don’t have a mobile phone) using the Text to Voice API.

begin
  clicksend_pkg.send_voice
    (p_phone_no     => '+61411111111'
    ,p_message      => 'Hello. This message was sent on '
     || to_char(sysdate,'fmDay DD Month YYYY '
                || '"at" HH:MI am, SS "seconds"')
     || '. Have a nice day.'
    ,p_voice_lang   => 'en-gb' -- British English
    ,p_voice_gender => 'male'
    ,p_schedule_dt  => sysdate + interval '2' minute
    );
  clicksend_pkg.push_queue;
  commit;
end;

You have to tell the API what language the message is in. For a number of languages, you can specify the accent/dialect (e.g. American English, British English, or Aussie) and gender (male or female). You can see the full list here.

All calls to the send_sms, send_mms and send_voice procedures use Oracle AQ to make the messages transactional. It’s up to you to either COMMIT or ROLLBACK, which determines whether the message is actually sent or not. All messages go into a single queue.
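
For example, a rollback simply discards the queued message (a quick sketch re-using the send_sms call from above; the number and text are placeholders):

begin
  clicksend_pkg.send_sms
    (p_sender  => 'Mr Tester'
    ,p_mobile  => '+61411111111'
    ,p_message => 'You will never receive this'
    );
  rollback; -- the queued message is discarded, so nothing is sent
end;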

You can have a message be scheduled at a particular point in time by setting the p_schedule_dt parameter.

The default installation creates a job that runs every 5 minutes to push the queue. You can also call push_queue directly in your code after calling a send_xxx procedure. This creates a job to push the queue as well, so it won’t interfere with your transaction.

All messages get logged in a table, clicksend_msg_log. The log includes a column clicksend_cost which allows you to monitor your costs. To check your account balance, call get_credit_balance.
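
For example, a quick check of spend and balance might look something like this (just a sketch: I’m assuming get_credit_balance lives in clicksend_pkg and returns a number; any names other than clicksend_msg_log and clicksend_cost are illustrative):

select sum(clicksend_cost) as total_cost
from   clicksend_msg_log;

begin
  dbms_output.put_line('Credit balance: ' || clicksend_pkg.get_credit_balance);
end;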

Please try it out if you can and let me know of any issues or suggestions for improvement.


Filed under: Oracle, PL/SQL Tagged: Clicksend, PL/SQL

Transformation between ROWs and COLs, then GROUP

Tom Kyte - Tue, 2016-08-16 19:26
Hi Team, Suppose that we have a table named TEST like this: SQL> SELECT * FROM TEST; WHO L W H -------- ---------- ---------- ---------- TOM 10 20 30 TOM 11 ...
Categories: DBA Blogs

Questions about the upper limit of records in one database block

Tom Kyte - Tue, 2016-08-16 19:26
Hi, team I was told that the last 3 characteristics of ROWID stands for the row number of database block. Theoretically, every database block could store as many as 65536 rows. I was wondering how many records could stores in one block (with the s...
Categories: DBA Blogs

Performance tuning for report query

Tom Kyte - Tue, 2016-08-16 19:26
Dear Tom, Thanks for your valuable info which is being provided by you for various questions on oracle to all the users which is also helping us in lot of ways... Now we have a problem with a report query which is almost taking 2 minutes 40 sec...
Categories: DBA Blogs

Fit for Work? Wearables in the Workplace Research Conclusions and Recommendations

Usable Apps - Tue, 2016-08-16 18:54

Oracle PartnerNetwork members innovating with Oracle Cloud services, emerging tech, and the Oracle Applications User Experience (OAUX) Glance, Scan, Commit design philosophy need to be aware of the factors that drive adoption of new ideas and how to best plan with their customers for a successful rollout of a solution.

“Wearables in the Workplace” is an outstanding piece of research now available in report form, thanks to postgrad students in the London School of Economics and Political Science.

Wearables in the Workplace Research Report

The shaping of the research was guided by HCM Cloud strategy evangelist and UX champ Andy Campbell (@axcampbe), who told me how Oracle provides Masters in Management candidates with ideas, advice, access to information, and so on, for their final project. A project theme is agreed that is of use and interest to all parties, and then off they go!

This year's project theme was wearable technology in the workplace.

The resulting report based on qualitative and quantitative research methods offers some valuable insights. I encourage you to read the entire report, but for now, the main findings about wearables in the workplace are:

  • Yes, there is huge potential for wearable technology in the enterprise, but while employees’ intention to use such devices may be strong, their actual perseverance in doing so is weaker.
  • Most focus is on fitness and health-related (or "wellness" as our dear U.S. friends like to put it) uses for wearable technology in work.
  • People in work are very guarded about the data privacy issue and how their personal data gathered from such technology might be used. 
  • Incentives are a must for adoption of the technology. For example, employees respond positively to flexible working arrangements, benefits, and so on. 

 And the report's recommendations should be borne in mind too:  

  • Build upfront awareness amongst the workforce about what wearable technology can do. Yep, it takes time (Diffusion Theory). 
  • Employees should be offered a choice of whether they want to opt in or out of any wearable technology program in work. 
  • Health and wellbeing use cases are the spearhead to a wider adoption of wearables. Plus, there are direct HCM and business benefits related to these areas. 
  • Understand those users. Then, pick an incentive approach that fits their profile.

Wearable technology is an important part of the OAUX mobility strategy. The technology offers users the means to perform micro-transactions on-the-go using data in the Oracle Cloud that can then be used across other devices as the context suits, or to automate what they hate doing and augment what they love. Partners and customers together need to understand the solution’s stakeholders and the adoption dynamics of the proposed solution so that they can design for it accordingly. And so, it was great to see this report presented to partners and customers at the HCM Cloud update day in Oracle's London office recently. The lessons were well received. 

Looking forward to more such collaborative research in the future!

If you have any feedback on the conclusions, recommendations, or adoption of wearable or emerging technology in the workplace, then find the comments box! 

Oracle JET and Application Container Cloud (with MCS Integration)

Andrejus Baranovski - Tue, 2016-08-16 13:44
I have deployed a JET application with Mobile Cloud Service (MCS) integration into the Oracle Application Container Cloud service. It works perfectly well; you can try accessing it yourself (until my trial cloud account expires on the 15th of August) at this URL: https://rsjetnodejsapp-deoracleem99369.apaas.us2.oraclecloud.com.

Follow the excellent blog article written by Lucas Jellema for details on how to deploy JET into Application Container Cloud – Deploying an Oracle JET application to Application Container Cloud and running on Node.js.

The first step is to generate an empty Node.js application; you can do this from the command line (install Node.js and the Express generator first):

express RSJETNodeJSApp

Navigate into the generated application folder and add the dependencies using npm:

npm install

The Node.js application is now created and you can run it with:

npm start

You can test the JET app locally with the same command, as soon as you add the JET content into the Node.js app.

Once you access the deployed JET application at https://rsjetnodejsapp-deoracleem99369.apaas.us2.oraclecloud.com, you should land on the Home tab, where static chart data is rendered (no integration with MCS yet):


I did a test to check how the same screen looks on a mobile device. JET renders perfectly; the template is adjusted, as is the chart itself:


Go to the People tab; this is where the integration with MCS is done. I have implemented OAuth authentication in JET against the MCS service (I will describe it in the next post). You can log in with the redsam/Wel_come1 user:


The same login screen renders nicely on a mobile device:


Once login is successful, the JET app executes a REST call to the MCS endpoint and fetches data (originally returned by an ADF BC REST service deployed on Java Cloud and accessed through an MCS Connector):


The mobile device view doesn't disappoint either; the same page UI looks great:


The JET implementation code is copied into the Node.js public folder. You can open the app in NetBeans, which simplifies editing:


I have deployed this app to Application Container Cloud by creating a zip archive (make sure to create a manifest.json file). Don't include the root folder in the archive; include the application folders/files directly. Deployment is straightforward – create a new Node.js application in Application Container Cloud:


Upload the archive, give the application a name and wait a few minutes for it to initialize. All done:


Here you can download the complete example of the Node.js app with JET content (and all libraries) that I used to deploy to Application Container Cloud – RSJETNodeJSApp.zip.

rlwrap with #GoldenGate GGSCI

DBASolved - Tue, 2016-08-16 10:45

Since I published posts on how to retrieve command history within GGSCI using the Oracle-provided “history” and “fc” commands, I was pinged a couple of times by some friends asking about “rlwrap” usage with GoldenGate. The answer is a simple yes, rlwrap can be used with Oracle GoldenGate.

What exactly is “rlwrap”?

According to the readme file at http://utopia.knoware.nl/~hlub/uck/rlwrap/#rlwrap, rlwrap is a ‘readline wrapper’. Basically, it allows for the editing of keyboard input for any command.

How to install?

There are two ways to install “rlwrap”. The first way is manually, which requires you to run the configure and make commands; however, the easiest way I’ve found is by using a yum repository. Tim Hall (a.k.a Oracle-Base) documents this process quite effortlessly over at his blog. Thanks Tim!

Usage?

After you have “rlwrap” installed, you simply have to create an alias in your profile to use it. You can use it from the command line as well; however, the alias approach ensures that it is executed every time you run ggsci. The below example is what I have running in my .bash_profile for the Oracle user.

alias ggsci='rlwrap $OGG_HOME/ggsci'

The next time I log in to use ggsci, rlwrap will automatically be run and I will be able to scroll through the commands I typed while in ggsci. Another nice thing about “rlwrap” is that when I log out of ggsci and back in, my command history is still available.

Although the “history” and “fc” commands are handy, it would be nice to see Oracle include “rlwrap” in the core product of Oracle GoldenGate …

Enjoy!

@dbasolved
http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Cloud Marketplace part 2

Pat Shuff - Tue, 2016-08-16 09:31
The Oracle Cloud Marketplace is a location where customers can discover partner-provided applications and services that complement the Oracle Public Cloud services. It is also a location where partners can list and promote cloud-based applications and services that extend, integrate with, or build on Oracle's Public Cloud services. This is an interesting concept that has various ways of joining customers to partners and giving partners a view into what customers are looking at, downloading, and executing. There are over 1000 apps, both commercial and public domain compiled source, available from over 1500 partners and system integrators. Some applications are helper apps like CloudBerry Explorer that assists you in consuming cloud storage services. Other applications are Oracle-provided deployments of E-Business Suite running on infrastructure as a service for development and test. The Marketplace has system integrators listed that specialize in cloud services. Deloitte is an example of a Cloud Elite Systems Integrator who provides a variety of consulting services to help build custom cloud solutions for customers.

For partners, there are tools that allow you to develop, market, and sell applications. The development tools allow you to build binaries that you can download to use as tools to access cloud services, bundles to upload to the cloud, or images that can be launched in the cloud. Tools also exist to help you market your application once it is built and uploaded. You can look at page views, who downloaded your application, and geographic data about the customers to help target marketing campaigns. There are also tools to help with revenue capture and making money from your application either through the Marketplace page or redirection to your own page. There are lead generation integrations into the partner portal to help with follow-up and calling campaigns for customers that are downloading and using applications. Partners must be gold level or above and get this service for free by signing up through the Oracle Partner Portal. Partners also have access to BI reports to look at trending, usage, and utilization of the applications that they have listed. These reports are designed with partners like Bitnami, who list hundreds of public domain compiled images on the Oracle Cloud for compute resources. The reports help them look at the most popular downloads, packages that are trending, feedback from customers both positive and negative, as well as the age of packages and ones that might need updating.

Customers can search for applications based on keywords, categories of applications, and company names. You do need to enable the Compute_Operations role before you deploy an image into the Oracle Cloud, as well as go into the Properties tab at the top right of the Cloud Console and create a linkage between the Marketplace and your cloud account.

In summary, tools exist for customers and partners to help link people who want to create applications with people who want to use applications. There is a review mechanism to help with feedback as well as notifications of updates and changes to applications. This tool is constantly being updated and changing so don't be surprised if the screen shots that you see here are a little different when you visit the pages.

HttpSessionListener with Spring Boot Application

Pas Apicella - Tue, 2016-08-16 07:55
I had a requirement to implement an HttpSessionListener in my Spring Boot application, which has no web.xml. To achieve this I did the following:

1. My HttpSessionListener was defined as follows
 
package com.pivotal.pcf.mysqlweb.utils;

import javax.servlet.http.HttpSession;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;

import org.apache.log4j.Logger;

public class SessionListener implements HttpSessionListener
{
    protected static Logger logger = Logger.getLogger("controller");
    private HttpSession session = null;

    public void sessionCreated(HttpSessionEvent event)
    {
        // no need to do anything here as connection may not have been established yet
        session = event.getSession();
        logger.info("Session created for id " + session.getId());
    }

    public void sessionDestroyed(HttpSessionEvent event)
    {
        session = event.getSession();
        /*
         * Need to ensure Connection is closed from ConnectionManager
         */

        ConnectionManager cm = null;

        try
        {
            cm = ConnectionManager.getInstance();
            cm.removeConnection(session.getId());
            logger.info("Session destroyed for id " + session.getId());
        }
        catch (Exception e)
        {
            logger.info("SessionListener.sessionDestroyed Unable to obtain Connection", e);
        }
    }
}
2. Register the listener from a @Configuration class as shown below:
  
package com.pivotal.pcf.mysqlweb;

import com.pivotal.pcf.mysqlweb.utils.SessionListener;
import org.springframework.boot.context.embedded.ServletListenerRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import javax.servlet.http.HttpSessionListener;

@Configuration
public class ApplicationSessionConfiguration
{
    @Bean
    public ServletListenerRegistrationBean<HttpSessionListener> sessionListener()
    {
        return new ServletListenerRegistrationBean<HttpSessionListener>(new SessionListener());
    }
}
That's all you have to do to achieve this.


Categories: Fusion Middleware

Month End

Jonathan Lewis - Tue, 2016-08-16 07:04

A question about parallel query and cardinality estimates appeared on OTN a little while ago that prompted me to write this note about helping the optimizer do the best job with the least effort.  (A critical point in the correct answer to the original question is that parallel query may lead to “unexpected” dynamic sampling, which can make a huge difference to the choice of execution plans, but that’s another matter.)

The initial cardinality error in the plan came from the following predicate on a “Date dimension” table:


      AR_DATE.CALENDAR_DT   = AR_DATE.MONTH_END_DT 
AND   AR_DATE.CALENDAR_DT  >= ADD_MONTHS(TRUNC(SYSDATE,'fmyy'), -36) 
AND   AR_DATE.MONTH_END_DT >= ADD_MONTHS(TRUNC(SYSDATE,'fmyy'), -36)

In the parallel plan the estimated number of rows on a full tablescan of the table was 742, while on the serial plan the same tablescan produced a cardinality of 1. You will appreciate that having an estimate of 1 (or less) that is nearly three orders of magnitude wrong is likely to lead to a very bad execution plan.

My first thought when I saw this was (based on a purely intuitive interpretation): “there’s one day every month that’s the last day of the month and we’re looking at roughly that last 36 months so we might predict a cardinality of about 36”. That’s still a long way off the 742 estimate and 1,044 actual for the parallel query, but it’s a warning flag that the serial estimate is probably an important error – it’s also an example of the very simple “sanity checking” mental exercises that can accompany almost any execution plan analysis.

My second thought (which happened to be wrong, and would only have been right some time well before version 10.2.0.5) was that the optimizer would treat the add_months() expressions as unknown values and assign a selectivity of 5% to each of the predicates, reducing the combined selectivity to 1/400th of the selectivity it gave to the first predicate. In fact the optimizer evaluates the expressions and would have used the normal (required range / total range) calculation for those two predicates.

It’s the first predicate that I want to examine, though – how does the optimizer calculate a selectivity for it ? Here’s some code to generate sample data to use for testing.


rem
rem     Script:         month_end.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2016
rem

create table t1
nologging
as
select
        rownum                                                   id,
        to_date('01-Jan-2010','dd-mon-yyyy') + rownum - 1       calendar_date,
        add_months(
                trunc(to_date('01-Jan-2010','dd-mon-yyyy') + rownum - 1 ,'MM' ),
                1
        ) - 1                                                   month_end_date
from
        dual
connect by
        level <= trunc(sysdate) - to_date('01-jan_2010','dd-mon-yyyy') + 1
;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for all columns size 1')

This clunky bit of code gives me consecutive dates from 1st Jan 2010 up to “today” with the month_end_date column holding the month end date corresponding to the row’s calendar_date. So now we can check what the optimizer makes of the predicate calendar_date = month_end_date:


set autotrace on explain

select count(*) from t1 where calendar_date = month_end_date;

  COUNT(*)
----------
        79

Execution Plan
----------------------------------------------------------
Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    16 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |    16 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |     1 |    16 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("CALENDAR_DATE"="MONTH_END_DATE")

Looking at operation 2 we can see that, in effect, the optimizer has considered two independent predicates “calendar_date = {unknown}” and “month_end_date = {unknown}” and taken the lower of the two selectivities – which means the cardinality estimate is 1 because the calendar_date column is unique across this table.

There are various ways to work around problems like this. One of the simplest would be to tell Oracle to sample this table with the (table-level) hint /*+ dynamic_sampling(t1 1) */; in fact, since this predicate is effectively treated as an examination of two predicates the (cursor-level) hint /*+ dynamic_sampling(4) */ would also cause sampling to take place – note that level 4 or higher is required to trigger sampling for “multiple” predicates on a single table. As a general guideline we always try to minimise the risk of side effects so if this problem were embedded in a much larger query I would prefer the table-level hint over the cursor-level hint.
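
For example, applied to the demo query above, the table-level version of the hint would look like this:

select  /*+ dynamic_sampling(t1 1) */
        count(*)
from    t1
where   calendar_date = month_end_date;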

There are other options, though, that would allow you to bypass sampling – provided you can modify the SQL. The script I used to create this table also included the following statement:


alter table t1 add (
        date_offset1 generated always as (calendar_date - month_end_date) virtual,
        date_flag generated always as (case when calendar_date - month_end_date = 0 then 'Y' end) virtual
);

In 12c I would declare these virtual columns to be invisible to avoid problems with any SQL that didn’t use explicit column lists. For demonstration purposes I’ve set up two options – I can find the rows I want with one of two obvious predicates:

    date_offset1 = 0
    date_flag = 'Y'

In fact there’s a third predicate I could use that doesn’t need to know about the virtual columns:

    calendar_date - month_end_date = 0

Unfortunately I can’t arbitrarily swap the order of the two dates in the last predicate, and the optimizer won’t spot that it is also equivalent to “calendar_date = month_end_date”. Here are a few execution plans – for which the only significant bit is the cardinality estimate of the full tablescans:


select count(*) from t1 where date_flag = 'Y';

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     2 |     4  (25)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |     2 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |    79 |   158 |     4  (25)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("DATE_FLAG"='Y')



select count(*) from t1 where date_offset1 = 0;

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     4 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |     4 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |    78 |   312 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("DATE_OFFSET1"=0)



select count(*) from t1 where calendar_date - month_end_date = 0;

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     4 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |     4 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |    78 |   312 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("T1"."DATE_OFFSET1"=0)

It’s interesting to note that the optimizer has transformed the last predicate into the equivalent virtual column expression to do the arithmetic. You might also note that the date_flag option is slightly more accurate, but that’s because it’s based on an expression which is null for the rows we don’t want while the date_offset1 column has a value for every row and a little bit of arithmetical rounding comes into play. You might also note that there’s a small cost difference – which I’d ascribe to the CPU cost that the optimizer has added for the CASE expression being applied on top of the simple date arithmetic.

Of course, whatever else you might play around with when working around a cardinality problem like this, I think the strategic aim for a data warehouse system would be to get a REAL flag column on the table and populate it at data loading time if month-end dates played an important part in the proceedings – though I have to say that the virtual flag column is one I do rather fancy.
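
A minimal sketch of that idea against the demo table (the column name, the update, and the stats call are my own illustration, not something lifted from a real loading process):

alter table t1 add (month_end_flag varchar2(1));

update t1
set    month_end_flag = 'Y'
where  calendar_date = month_end_date;

commit;

-- histogram so the optimizer knows how rare the 'Y' values are
execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for columns month_end_flag size 254')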

 


dbms_parallel_execute.run_task error when parallel_level > 1

Tom Kyte - Tue, 2016-08-16 01:06
Hi, I am trying to use dbms_parallel_execute to execute a procedure in multiple threads. when the value of parallel_level = 1, the below code just works fine, but when the value of parallel_level > 1 then this below code fails with the error shown b...
Categories: DBA Blogs

Eliminating rows on condition

Tom Kyte - Tue, 2016-08-16 01:06
I have a table for Vehicle owners as follows. The table has 3 columns, Customer id, Vehicle Identification number, and whether the customer is a Primary or Secondary driver on the vehicle. Customer_ID, Vehicle_VIN, Relationship 0001, 12345678, P...
Categories: DBA Blogs

Function rendom_id

Tom Kyte - Tue, 2016-08-16 01:06
HI We do have store procedure and CURSOR is looking for random id per month.We can see result like one visit from 08/may/2016 2 - 09/may/2016 5 - 11/may/2016 20 - 12/may/2016 2 - 18/may/2016 Can you tell me why 20 visits get selected from...
Categories: DBA Blogs

Can I send messages on mobile phone from Oracle database?

Tom Kyte - Tue, 2016-08-16 01:06
Hi TOM, We are running an application on Oracle 8.1.5 on solaris and oas 4.081. We are using basic oracle users for authentication. We want to develop a new functionality in our system by which we will be able to send messages on the user's mobile...
Categories: DBA Blogs

Lost all Redo log file

Tom Kyte - Tue, 2016-08-16 01:06
Hi Tom, I am practising Recovery . CASE 1)I have Development TEST DB in NON-ARCHIVE Mode. NEVER backed up. All Redo log files including the active one ,are dropped from the OS. How to recover such a DB. I dont mind loosing Transacti...
Categories: DBA Blogs

5 Stages of the Software Development Cycle

Complete IT Professional - Mon, 2016-08-15 16:46
This is a guest post by Saurab Prabhakar. The Software Development Lifecycle (SDLC) defines a revolution. It is fundamentally a sequence of steps that display a model for the development and lifecycle management of certain software or even an application. The procedure can differ from business to business, but the standards remain firm. The software development […]
Categories: Development

Getting to know Bob Rhubart, OTN Architect Community Manager

OTN TechBlog - Mon, 2016-08-15 11:58

Oracle ACE Mike Riley asks OTN Architect Community Manager Bob Rhubart about his podcasts, 2MTT videos, and community.oracle.com.


Edit #GoldenGate commands from GGSCI

DBASolved - Mon, 2016-08-15 10:45

Ever get tired of typing the same command over and over again in Oracle GoldenGate GGSCI? Well, Oracle has provided a history recall function in GGSCI, which I talked about in an earlier post. That post on history recall can be found here.

In this post, let’s take command recall a step further and look at how these commands can be edited. In Oracle GoldenGate 12c, Oracle has provided a GGSCI command called “FC”. The “FC” command is used to display/edit a previously issued GGSCI command and then execute it again. This command leverages the memory buffer the same way as the history command does within GGSCI.

Now the syntax for using this command is as follows:

FC [ n | -n | string ]

n – Displays the command from the line number provided

-n – Displays the command that was issued n lines before the current line

string – Displays the last command that starts with the specified string

Let’s take a look at an example of using “FC” within GGSCI.

In the below output, I have issued a series of commands and then listed these commands using this history function.

GGSCI (fred.acme.com) 6> history

GGSCI Command History

    1: info all
    2: stats extract egg12c
    3: info mgr
    4: info pgg12c
    5: info all
    6: history

Using the “FC” command, I can edit and execute the edited command. If you take a look at command 4 (info pgg12c), I want to replace “pgg12c” with “egg12c”. This is done by using the “r” edit command as follows.

GGSCI 7> fc 4
GGSCI 7> info pgg12c
GGSCI 7..     regg12c
GGSCI 7> info egg12c
GGSCI 7..

EXTRACT    EGG12C    Last Started 2016-08-12 17:06   Status RUNNING
Description          'Integrated Extract'
Checkpoint Lag       00:00:08 (updated 00:00:05 ago)
Process ID           24082
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2016-08-15 10:14:45
                     SCN 0.51017268 (51017268)

By executing “fc 4” (fc n), I’m telling GGSCI to pull the fourth command from the memory buffer for editing. Then I use the “r” command to provide the text that I want to replace in the command. Notice the position of the “r”: it is lined up directly under the string I want to replace. Then I provide the replacement string. Before execution, GGSCI provides you with an output of the changed command. After verifying it, the command can be executed, producing the output for the correct process.

Note: For more editing options with FC, you can find these here.

If you want to know what command was executed n commands before the previous command, you can use the -n option to “FC”. This makes the “FC” command act like the history command but only displays the command at the -n position. This can be seen if you do a history command prior to an fc -n command.

GGSCI (fred.acme.com) 11> history

GGSCI Command History

    2: stats extract egg12c
    3: info mgr
    4: info pgg12c
    5: info all
    6: history
    7: info egg12c
    8: info mgr
    9: history
   10: info egg12c
   11: history

GGSCI (fred.acme.com) 12> fc -4
GGSCI (fred.acme.com) 12> info mgr
GGSCI (fred.acme.com) 12..

Manager is running (IP port fred.acme.com.15000, Process ID 12377).

You will notice that I have eleven commands in the command history. By using “fc -4”, I’m able to retrieve the info mgr command and then execute it simply by hitting return. Before hitting return, I could still edit the command as shown previously.

Lastly, you can retrieve a previous command by searching for a specific string. The string has to be at the beginning of the command. In the below example, I’m searching for the last stats command that was run.

GGSCI (fred.acme.com) 16> history

GGSCI Command History

    7: info egg12c
    8: info mgr
    9: history
   10: info egg12c
   11: history
   12: info mgr
   13: info all
   14: history
   15: stats extract egg12c
   16: history


GGSCI (fred.acme.com) 17> fc sta
GGSCI (fred.acme.com) 17> stats extract egg12c
GGSCI (fred.acme.com) 17..

Sending STATS request to EXTRACT EGG12C ...

No active extraction maps
DDL replication statistics (for all trails):

*** Total statistics since extract started     ***
	Operations                		           2.00
	Mapped operations         		           2.00
	Unmapped operations         		           0.00
	Other operations         		           0.00
	Excluded operations         		           0.00

As you can see, there is only one stats command in the command history. The “FC” command pulls this command and provides an opportunity to edit it. After hitting return, the command is executed.

Now that you know how to recall and edit Oracle GoldenGate commands from the GGSCI command prompt, hopefully, this will make your life with Oracle GoldenGate a bit easier.

Enjoy!!

@dbasolved
http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs
