Feed aggregator

OEL 7 – How to disable IPv6 on Oracle Linux 7 – Follow Up

Yann Neuhaus - Fri, 2017-02-17 02:56

This is a follow-up to the blog post where I explained how to disable IPv6 on Oracle Linux 7.

If you have completed all the steps which I explained here http://blog.dbi-services.com/oel-7-how-to-disable-ipv6-on-oracle-linux-7/ then you have already successfully disabled IPv6. However, some tools require special attention afterwards if you want to avoid ugly warning or error messages. There are many tools that can use IPv4 and IPv6, far too many to mention them all, so I will just dig a little deeper into the following four.

  • Postfix
  • Oracle
  • NFS
  • rsyslogd
Postfix

Let’s start with Postfix. This might be one of the first warning messages you see after you have disabled IPv6 on your system. If you receive the following warning message when you try to send an email, then you need to adjust your /etc/postfix/main.cf file.

$ mailx -s "Test" xxx.xxx@xxx.com
Test
.
EOT
$ send-mail: warning: inet_protocols: IPv6 support is disabled: Address family not supported by protocol
send-mail: warning: inet_protocols: configuring for IPv4 support only
postdrop: warning: inet_protocols: IPv6 support is disabled: Address family not supported by protocol
postdrop: warning: inet_protocols: configuring for IPv4 support only

The solution is to configure your /etc/postfix/main.cf file to allow only the IPv4 protocol.

[root@SVPCHODAC01 sbin]# /usr/sbin/postconf | grep inet_protocols
inet_protocols = all
/usr/sbin/postconf: warning: inet_protocols: IPv6 support is disabled: Address family not supported by protocol
/usr/sbin/postconf: warning: inet_protocols: configuring for IPv4 support only

[root@SVPCHODAC01 sbin]# cd /etc/postfix/
[root@SVPCHODAC01 postfix]# cp main.cf main.cf.20170203a
[root@SVPCHODAC01 postfix]# vi main.cf

Change “inet_protocols = all” to “inet_protocols = ipv4” and then restart Postfix.

[root@SVPCHODAC01 postfix]# /etc/init.d/postfix restart
Shutting down postfix: [ OK ]
Starting postfix: [ OK ]

[root@SVPCHODAC01 postfix]# /usr/sbin/postconf | grep inet_protocols
inet_protocols = ipv4

That’s it. Now the ugly Postfix warning messages disappear.
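If you prefer not to edit main.cf by hand, postconf can apply the same change directly; a minimal sketch (the systemctl call assumes a systemd setup, while the transcript above uses the older init script):

# Write inet_protocols = ipv4 into /etc/postfix/main.cf
postconf -e 'inet_protocols = ipv4'
# Restart Postfix and verify the new value
systemctl restart postfix
postconf inet_protocols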

Oracle

The next candidate is the Oracle Listener. In some situations, you might see the following error message in your listener.log file when working with Cloud Control 12c.

TNS-01189: The listener could not authenticate the user

This is related to an Oracle bug, to be more precise, “BUG 16054202 – TNLIN EXTRACTS WRONG SUBNETMASK FOR IPV6 ADDRESSES”. The bug can be worked around by configuring the Oracle Listener to use IPv4 only. This is done via the listener.ora IP parameter, which supports the following options.

IP=FIRST

Listen on the first IP address returned by the DNS resolution of the host name.
If the user wants the listener to listen on the first IP to which the specified host name resolves,
then the address must be qualified with (IP=first).

IP=V4_ONLY

Listen only on IPv4 addresses.

IP=V6_ONLY

Listen only on IPv6 addresses.

Simply put (IP=V4_ONLY) after your PORT setting and then restart the listener, as shown in the following example.

-- listener.ora
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbidg03)(PORT = 1521)(IP=V4_ONLY))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

-- restart

$ lsnrctl stop LISTENER; lsnrctl start LISTENER

Now the messages “TNS-01189: The listener could not authenticate the user” in the listener.log should disappear.
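To double-check that the listener no longer binds any IPv6 endpoint, you can inspect the listening sockets; a quick sketch (assuming the default port 1521):

# Only IPv4 sockets should show up for port 1521, i.e. no [::] entries
ss -ltn | grep 1521
# The listening endpoints summary should only contain IPv4 addresses
lsnrctl status LISTENER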


NFS

Under normal circumstances, no changes should be required for NFS unless you had proto=tcp6 configured for your mount options. If so, then your mount will not work anymore.

[root@dbidg02 etc]# mount /u99
mount.nfs: an incorrect mount option was specified

And you will see the following error in the /var/log/messages file.

Feb 14 10:26:48 dbidg02 kernel: NFS: server address does not match proto= option

Now you could either remove the proto option or change it to proto=tcp.

For NFS version 4 you have the following options:

proto=netid - The netid determines the transport that is used to communicate with the NFS server. Supported options are tcp, tcp6, and rdma. tcp6 uses IPv6 addresses and is only available if support for TI-RPC is built in. The other two use IPv4 addresses.
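If you want to validate the option before making it permanent, a one-off mount works as well; a small sketch (assuming the same server and export as in the fstab entry below):

# Temporarily mount the export with the IPv4-only TCP transport
mount -t nfs -o vers=4.1,proto=tcp dbidg03:/u99 /u99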

In my case, I have added the proto=tcp option to my NFS mount table in /etc/fstab:

#-- NFS mounts
dbidg03:/u99   /u99  nfs  vers=4.1,proto=tcp,rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,timeo=600    

-- And now the mount works perfectly again.

[root@dbidg02 etc]# mount /u99
[root@dbidg02 etc]#
[root@dbidg02 etc]# mount | grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
dbidg03:/u99 on /u99 type nfs4 (rw,relatime,vers=4.1,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.56.202,local_lock=none,addr=192.168.56.203)


rsyslogd

Almost the same applies to rsyslogd. If you have not specified “-6” in your syslogd options, you are fine. If you have, you need to either remove the option or replace it with “-4”.

oracle@dbidg03:/etc/sysconfig/ [oms13c] rpm -qa  | grep rsyslog
rsyslog-7.4.7-16.0.1.el7.x86_64

-- from the doc
 -4  Causes rsyslogd to listen to IPv4 addresses only.  If neither -4 nor -6 is given, rsyslogd listens to all configured addresses of the system.
[root@dbidg03 sysconfig]# cat rsyslog
# Options for rsyslogd
# Syslogd options are deprecated since rsyslog v3.
# If you want to use them, switch to compatibility mode 2 by "-c 2"
# See rsyslogd(8) for more details
SYSLOGD_OPTIONS="-4"

[root@dbidg03 sysconfig]# systemctl restart rsyslog
[root@dbidg03 sysconfig]#
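To verify that the running daemon really picked up the option, you can check its command line; a quick sketch (this assumes the stock EL7 unit file, which passes $SYSLOGD_OPTIONS through to rsyslogd):

# The rsyslogd process should now show the -4 flag
ps -ef | grep [r]syslogd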
Conclusion

There might be some tools on your system that require special attention after you have disabled IPv6.
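As a final check, two quick commands confirm that IPv6 really is off; a minimal sketch (assuming IPv6 was disabled via the sysctl approach from the first blog post):

# Should print net.ipv6.conf.all.disable_ipv6 = 1
sysctl net.ipv6.conf.all.disable_ipv6
# No inet6 lines should be listed on any interface
ip addr show | grep inet6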

This article OEL 7 – How to disable IPv6 on Oracle Linux 7 – Follow Up appeared first on Blog dbi services.

Webcast: "Ready or Not: Applying Secure Configuration to EBS"

Steven Chan - Fri, 2017-02-17 02:06

Oracle University has a wealth of free webcasts for Oracle E-Business Suite.  If you're looking for our latest updates on the EBS Security front, see:

  • Ready or Not: Applying Secure Configuration to Oracle E-Business Suite (26 minutes)

It's a new world - one where secure configuration is no longer optional and you must reduce your attack surface.  Eric Bing, Senior Director Product Development, shares that going forward, many Oracle E-Business Suite security features will now be turned on by default. To further assist you with deploying Oracle E-Business Suite securely, we are now providing a Secure Configuration Management console.  Under certain conditions, access to Oracle E-Business Suite will be limited until your Applications DBA or System Administrator corrects or acknowledges the errors and warnings in the console.  Come to this session to learn about the new Secure Configuration Management console and our guidelines for auditing, monitoring and securing your Oracle E-Business Suite environment and sensitive data. This material was presented at Oracle OpenWorld 2016.

Categories: APPS Blogs

Links for 2017-02-16 [del.icio.us]

Categories: DBA Blogs

Weekly Link Roundup – Feb 17, 2017

Complete IT Professional - Fri, 2017-02-17 00:39
Here’s a collection of interesting articles I’ve read this week. Articles I’ve Read Truncate 12c https://jonathanlewis.wordpress.com/2017/02/16/truncate-12c/ Jonathan Lewis writes an article about some improvements to the TRUNCATE statement in Oracle 12c, and how it relates to “on delete cascade”. There’s a small example and a good explanation of how it works.   Step by Step […]
Categories: Development

Analytics apps or embedded analytics

Dylan's BI Notes - Thu, 2017-02-16 21:02
An analytics app should be an application.   We should not see it as a reporting solution or just a way to view or download data. The best analytics app is one that lets people forget that it is an “analytics” app. If you keep the end user in mind and drive the design of […]
Categories: BI & Warehousing

In Real Estate it's Location, Location, Location - In Startups it's all about Sales, Sales, Sales

Bradley Brown - Thu, 2017-02-16 18:32
Sure, you could argue that marketing is important or execution or support or product quality, but when it comes down to what you really need to focus on, it's all about sales.  You have to have a high quality product to sell, you certainly need to be able to support it, and it has to be scalable, but if you don't have customers, you won't have a business.

I've built a number of software as a service (SaaS) businesses from the ground up.  It's hard work!  One of my favorite best practices, like in the lean startup is getting to your MVP (minimum viable product) as quickly as you can.  It's great to hear prospects saying no to certain things and yes to others.  Once you have an MVP and your initial customers, you know exactly where to improve your product, what requires your support and ultimately how to sell to more customers as well as how to sell more to your existing customer base.

Drumfire Sales Search (www.drumfiress.com)  solves the sales equation for you with process and people.  In this blog post, I've outlined 6 businesses that we've worked with and the approach we took from start to finish.  Generally speaking, the outline of our approach/process is here.

This short explainer video walks through the process:


GAME PLAN
Step 1 is always the assessment.  It's important to build out the game plan, which often involves the following steps.  Your business might know and understand your value proposition.  You might know exactly who your ideal / target customer is too.  This assessment is key to your sales success.  Our typical steps to establish the game plan include:

1. Value proposition
2. Ideal/Targeted customer
3. Identify decision makers and method
4. Prospecting strategy
5. Go to market plan
6. Establish pricing
REQUIRED TOOLS
Tools are required to sell your product or service.  Once you know who you're selling to, you need to get your message down.  As Mark Twain said, it takes a lot of time to write a short letter, just as it takes a lot of time to get your elevator pitch down.  Summarizing what you do isn't easy.  How quickly can you summarize what you do?  In my days of owning an Oracle consulting (or professional services) firm, my early message was very complicated and confused my early prospects.  As time went on, we shortened our elevator pitch.  In the end, it was pretty simple - we were "Oracle Experts."  I would often say to the CEO of a company, "if you hear someone in your company mention Oracle and a challenge they are having, please call me."  That message worked.  Having the proper materials is important, as is having a CRM that allows you to track and monitor your sales activity.  Some of the tools we help you develop include:

1. Develop elevator speech
2. Marketing, messaging and presentations
3. Select and customize appropriate CRM
4. Identify individual sales tools
5. Sales training

At Drumfire, we have our own proprietary solution that will help seed your prospect base if your customers are businesses.  It's called NorthStar and it's evolved over the last 20 years.  Back in the early days of NorthStar, I used it to find companies who were running Oracle's web server.  Today it's used to find businesses in any business category and then to read through all of the websites pulling key information about competitors, specific terms, email addresses and more.  There will be more information on NorthStar in the examples below.
TEAM
Drumfire's origins revolve around the staffing industry.  You will have short and long term requirements around sales in your business.  You might need to establish regions, which we can help with.  Maybe you're taking your first products to market and you need to find those first team members that can sell your MVP.  Team makeups change over time.  Here are some of the steps we help you with around your team:

1. Determine long term hiring plan to include regions and levels
2. Identify candidate makeup
3. Job descriptions and comp plans 
4. Have quotas in place for every level
5. Go to market recruitment plan

EXECUTION
Focus is your number one priority.  Scalable execution is key to your success.  Success can easily hide a lack of focus and execution.  In my Oracle consulting business, when we didn't know and understand our pipeline (in our early days), we got caught with a lack of backlog.  When the "cliff" hits your business, it can be a real surprise.  We help you understand the importance of having strong visibility into your pipeline.  We also feed your pipeline with prospects so you constantly have new prospects to call on.

1. Create pipeline report
2. Integrate and align sales objectives with corporate objectives
3. Present to potential clients
4. Negotiate
5. Deliver

NorthStar
As mentioned above, NorthStar is our software that finds businesses matching your target customer criteria.

Here are a couple of videos that explain NorthStar in a bit more detail:


Here's a specific example of NorthStar in action for a company with an irrigation solution like Nest:


And one last video, which explains it for an imaginary new pool product:



Let me explain the approach we take to help you find your customers.  I've provided examples for 6 different business segments:

1. Staffing Firms
    a. Healthcare
    b. Oil and Gas
2. Real Estate Infomercial
3. Cannabis Regulatory Compliance Business
4. Nest for Irrigation
5. Virtual Yoga Business
6. Rehab Marketing Firm

Sure, we have databases full of companies and email addresses, but we typically search the web "right now" to find your prospects and then we read through every single website looking for keywords, competitors, and more.  This takes time and resources.  It can take as little as a couple of days and as much as a month.  And...NorthStar continues to run regularly to find new prospects for you month after month.  You'll get access to your NorthStar dashboard and the full platform as you'll see below.

Staffing Firms

Let's say you own (or run sales/bus dev for) a staffing firm.  You probably know exactly what types of companies you're calling on every day.  How do you find/identify those companies?  Do you care about the industry they are in?  Revenue metrics?  Number of employees?  Most staffing firms specialize in a niche, from IT to engineering to sales to you name it.  Let's look at 2 examples of companies for which we used NorthStar to find potential clients.  These prospects get fed right into the CRM and MAS systems.  What's most important to you?  Do you need a phone number?  An email address?  The head of HR?

Healthcare Staffing
How many companies in the US are based around healthcare?  What are the categories that they group themselves into?

We ran NorthStar looking for businesses in "healthcare." We find these businesses "now" (i.e. it's not a canned list of businesses - it changes every single day).  We harvest them by zip code.  Pulling the top 10% of the zip codes (which typically covers about 80% of the businesses) we found about 200,000 (195,670 to be exact) businesses around the US that identify themselves as being in healthcare.  They group themselves into a number of categories (many are grouped into multiple categories) as you see here:

CATEGORY                                                      PROSPECTS
Doctor & Clinic                                               24575
Home Health Care Service & Supply                             13820
General Practice Medicine;Doctor & Clinic;Neurology           9322
Doctor & Clinic;Neurology                                     7311
Car Insurance;Insurance;Life Insurance;Home Insurance         5554
Home Health Care Service & Supply;Doctor & Clinic             4876
Long-Term & Hospice Care                                      4826
Senior Service;Long-Term & Hospice Care                       2624
Home Health Care Service & Supply;Long-Term & Hospice Care    2619
Home Health Care Service & Supply;Business Alarm Systems      2520

It's interesting to see that the top group is Doctors & Clinics - just over 10% of the companies we found.  When you put this data onto a map, it's very interesting to see how your prospects are distributed across the US.  You might have preconceived notions about where your prospects are.  Are most of them in the states with the most people (NY, CA, IL) or are they not?  Here's the distribution by state - notice how low NY is!

STATE    PROSPECTS
CA       19659
FL       15172
TX       15078
IL       9867
NC       9666
OH       9076
NY       8317

Here's another look at the data on a map:




We are able to pull a lot of important information about each of your prospects, which includes:
  • Company Name
  • Full Address (Address, City, State, Zip)
  • Phone Number
  • Latitude and Longitude
  • Ratings and Reviews
  • Website


We use the website's URL to spider through their website to determine things such as:


  • Email Addresses on their website
  • Competitor Mentions
  • Specific Terms on their website
  • Social Media Platforms they are on (Facebook, Twitter, Pinterest, Google Plus, etc)
  • Is their website mobile enabled?
  • Do they have a "current" (HTML5) website?
  • Do they mention your company?


There is SO much information that we can glean from a company's website!  Use your imagination and let NorthStar do the work!  This client asked us to get them a list of companies specializing in revenue cycle management.  That's an easy term to search for.  We added "back office" into our search terms too, since that's sometimes used.  The billing category alone did provide us with just over 300 prospects.  These terms are even more effective at narrowing down your prospects!

Here's what their dashboard looks like:


Our clients (hopefully you're one of them) can drill into information any way you can dream of.  


Water, Oil and Gas Engineer Staffing


Another client asked us to find all of the power plants in the US that mention gas turbines on their website, or specifically the M501J gas turbine.  That's getting pretty specific, isn't it?  If you want to know exactly who you should be calling on, based on talent you currently have available or based on a demand surge, how else could you find this type of information?

Here's a look at their dashboard that we provided them access to:



You can see that Texas has the most power plants (over 2,800), Oklahoma is next with about 800, and then Colorado weighs in at 565.  We loaded the 22,083 power plants into their database (with over 2,400 email addresses and over 13,000 websites, of which over 2,600 were mobile or HTML5 websites and 1,700 had social media) and they were off to the races.  We found about 50 power plants that mentioned gas turbines, so they focused on those 50 first.  We only found 1 power plant that mentioned the M501J, but 1 is better than none!  Not every concept we come up with is going to pan out, but more pan out than not!

Real Estate Infomercial

The "Direct Response" (DR) world is an interesting world full of infomercials geared at getting to consumers, typically to buy their goods or services.  The real estate world has had a number of people who focused on the DR space - many of them "bad" (i.e. they just wanted your money, not to really help you).  A new entrant into the DR space approached us about their business.  One side is the B2C (business to consumer) side, which infomercials and other solutions like StarStar are excellent at helping with.  The other side of the business is that they plan to deliver their services through existing Realtors.  In fact, the Realtors could be customers.  Real estate agencies are registered businesses that we can identify with NorthStar.

We ran a test to see how many agents we would find in the US.  After processing about 25% of the zip codes (15,072 to be exact - out of about 45k zips), we found 225,379 companies.  177,979 of them have "Real Estate Agent" in the category.  Here are the top 10 categories:

CATEGORY                                                               COUNT(*)
Real Estate Agent                                                      71837
Real Estate Agent;Commercial Property Management                       35252
Real Estate Agent;Real Estate                                          22403
Real Estate                                                            6501
Real Estate;Real Estate Agent                                          4169
Real Estate Agent;Property Management;Commercial Property Management   3741
Property Management;Commercial Property Management                     2255
Real Estate Agent;Other Real Estate;Commercial Property Management     2100
Real Estate Agent;Property Management                                  2096
Real Estate Financing                                                  1976

We figured that the Realtors that would most interest these guys are those who talk about REO, Fix and Flips, Short Sales, Investors, Buy Back, etc. on their website, so we searched for these terms.  Sure enough, 4,295 of those businesses mention these terms.  It's always important to be able to triage the businesses that you're going to call on first.  If you say they have to mention one of these terms AND they need to promote their social media, the list is reduced to 2,362 agents, of which only 1,925 have a mobile or current (HTML5) website.  Multiply that number by 4 (since we've only processed about 25% of the country) and they will end up with about 8k agents to call on.  That's nearly impossible without an autodialer like ConnectAndSell - after all, we have phone numbers for every single agent!

Here's what the distribution looks like across the US:



Cannabis Regulatory Compliance Business

One of my good friends is in this business.  They help the medical and recreational cannabis businesses keep current with their paperwork.  We searched 3 different categories (cannabis, alternative clinics, and marijuana) to find these businesses.  We then searched each site for terms such as cannabis and marijuana.  The most important piece of data for them at this point is an email address.  They also wanted to know how many shops we could find in the US, but they wanted to start in Colorado.  Overall we found 6,447 email addresses.  Limiting it down to Colorado, we came up with only 274 records.  They said that calling doesn't work (the workers are too stoned), so an email is critical for them.
Here's a breakdown of shops around the US:


Nest for Irrigation

One of the first clients to use NorthStar was a Denver-based company that has an irrigation (sprinkler system) timer like Nest (for the home thermostat).  They had a theory that California, Colorado, Arizona, and Illinois would have the most irrigation specialists (sprinkler guys).  Here's how it actually panned out:


So California, Florida and Texas are the biggest states.  Colorado, Arizona, Illinois, not so much.  Setting up sales people across the country would likely require at least one person in each of these states.

We also established a NorthStar scoring system from 1-16.  Points were established for their social media presence, mentioning their competitors, mentioning them, if they talk about "low voltage" and more.  Their dashboard allows them to move the dials to view the irrigation distribution across the country based on their NorthStar score.

Here's what the US looks like when we search out irrigation companies that have a NorthStar score of 8 or more that also mention "low voltage."  It looks like we just removed most of the west coast doesn't it!


If we reduce the minimum score to 1 and remove the low voltage requirement, the US is very well covered:


Virtual Yoga Business
As many of you know, prior to this venture, I was in the subscription and transactional video on demand business.  We had a number of clients in the fitness business.  Subscription businesses are recurring revenue businesses that keep on giving (i.e. generating revenue) once you get customers signed up.  In fact, if you have a $10/mo product and you add 10 new customers a day for a year, your ARR will be over $400k with a low churn rate.  That's powerful!

Yoga businesses are typically recurring revenue businesses - people pay month after month.  So why not create a subscription Yoga business that you can attend right from home?  Why build your own Yoga studio to do this?  Why not just put cameras in existing Yoga studios and allow them to post their live video sessions for anyone in the world (who has a subscription) to attend the class?
Again, using NorthStar to find existing Yoga studios is easy enough!  Maybe they want to see if the studio already has a video or DVD for sale.  What else might they want to look for?  Hopefully you're getting the picture here.  NorthStar can help you triage your calling list to determine who to call and in what order. 

Rehab Marketing Firm

A marketing firm specializing in providing referrals for rehab centers asked us to find the "good guys" in this business.  Many "bad guys" enter this business and scam people for money.  It's always difficult to imagine that families are tricked in their worst hours of darkness.  It's easy enough to identify rehab centers around the US.  However, they decided that they know the top 10 firms in the country that they are going to focus on first, and then they will come back to us.  We appreciate a focused sales approach.

Summary
As I stated in the title, it's all about sales in the end.  The more sales you have, the more revenue you have and ultimately the more success you'll have.  In the services business, each person that's billable is additional revenue.  In the product business, each new sale is additional revenue.  Let's go get some business together!  Give us a call at 303.223.4304 or visit us on the web.  Or send us a note (through the website).  We would LOVE to work with you and be successful together!


Return blob from remote database

Tom Kyte - Thu, 2017-02-16 17:46
Hi, I have a question on how to get a Blob from a remote database without using a Temporary table and MVIEW. Can you please provide a way to deal with blob.
Categories: DBA Blogs

Data copy from Staging to Application table

Tom Kyte - Thu, 2017-02-16 17:46
Hi Team, Can you please have a look at the below scenario and suggest any better alternative to achieve it. I have Staging table - Tb_Staging_0001 Columns - Card_No, Address_1, .. Address_5, Credit_amount, Debit_amount, .... so on.. And Applicati...
Categories: DBA Blogs

How to apply for a job with Oracle

Tom Kyte - Thu, 2017-02-16 17:46
I need a job in oracle. How would I apply?
Categories: DBA Blogs

Commit more often

Tom Kyte - Thu, 2017-02-16 17:46
Hi Ask Tom team. What is the effect on Oracle Database if I commit a lot in a loop like the following. <code> begin for i in 1..10000000 loop commit; end loop; end; </code>
Categories: DBA Blogs

"Next" Clause when creating a tablespace

Tom Kyte - Thu, 2017-02-16 17:46
Hello, I was recently wondering exactly what the limits of the "next" clause are in a create table space statement. If I create a tablespace with the "Next" clause value of 50mb and maxsize unlimited (with autoextend on), does that mean that the m...
Categories: DBA Blogs

The Cost is too high while selecting the records from the Staging Table named as JDSU_RPR_ORDER_HEADERS_STG

Tom Kyte - Thu, 2017-02-16 17:46
<code>The table JDSU_RPR_ORDER_HEADERS_STG currently has 584436 records. While I am selecting records from this table, it is giving me a cost of up to 8,505. SELECT NVL (MAX (TO_NUMBER (stg_row_id)), -1), NVL (MAX (TO_N...
Categories: DBA Blogs

Cannot Create Fast Refreshable On-Commit MV using new SQL JOIN Syntax

Tom Kyte - Thu, 2017-02-16 17:46
The subject says it all, really. Using the old-style comma joins succeeds. Trying to create a fast refreshable on-commit materialized view using the new SQL JOIN syntax fails. The docs do not expect such behaviour. Any guidance on the mat...
Categories: DBA Blogs

Optimizer Dynamic Sampling at OPTIMIZER_DYNAMIC_SAMPLING level 2

Tom Kyte - Thu, 2017-02-16 17:46
At my Oracle 12.1.0.2 Oracle Standard Edition database, at OPTIMIZER_DYNAMIC_SAMPLING = 2, which is the default, I see Dynamic Sampling is happening for some of the SQLs for which all tables already have gathered statistics. Due to this,...
Categories: DBA Blogs

Comma separated search and search with checkboxes in Oracle APEX

Dimitri Gielis - Thu, 2017-02-16 16:30

When you have a classic report in Oracle Application Express (APEX) and want to make it searchable, you typically add a Text Item in the region, enable Submit on Enter, and add a WHERE clause to your SQL statement.

Here’s an example:

Classic Report with Search (text item)

Your SQL statement probably looks like this:

select CUSTOMER_ID,
CUST_FIRST_NAME,
CUST_LAST_NAME,
CUST_STREET_ADDRESS1,
CUST_CITY,
CUST_STATE,
CUST_POSTAL_CODE,
CUST_EMAIL,
CREDIT_LIMIT
from DEMO_CUSTOMERS
where CUSTOMER_ID = :P4_SEARCH

When you want to search for multiple customers separated by a comma, how do you do that?
So in my search field I add for example: 1,2,3 and expect to see 3 customers.

There are a couple of options; I’ll list three below:

  1. INSTR

    where INSTR(','||:P4_SEARCH||',', ',' || CUSTOMER_ID || ',') > 0
  2. REGEXP_LIKE

    where REGEXP_LIKE(CUSTOMER_ID, '^('|| REPLACE(:P4_SEARCH,',','|') ||')$')
  3. REGEXP_SUBSTR

    where customer_id in (
    select to_number(regexp_substr(:P4_SEARCH,'[^,]+', 1, level))
    from dual
    connect by regexp_substr(:P4_SEARCH, '[^,]+', 1, level) is not null
    )

Which one to choose? It depends on what you need… if you need readability, maybe you’ll find INSTR easier to understand. If you need performance, maybe the last option is the better choice… so as always, it depends. If you want to measure the performance, you can look at the Explain Plan (just copy the SQL into SQL Workshop and hit the Explain tab).

The Explain Plan for the first SQL looks like this:

Explain Plan INSTR

The Explain Plan for the last SQL looks like this:

Explain Plan REGEXP_SUBSTR

The above technique is also useful when you want checkboxes above your report, so people can make a selection. For example we select the customers we want to see:

Classic Report with checkbox selection

The where clause would be identical, but instead of a comma (,) you would use a colon (:), so the first statement would be:

where INSTR(':'||:P4_SEARCH||':', ':' || CUSTOMER_ID || ':') > 0

Happy searching your Classic Report :)

Categories: Development

Simple Python for Oracle database tuning example

Bobby Durrett's DBA Blog - Thu, 2017-02-16 16:14

I ran across a stackoverflow question and it gave me an idea for a simpler use of Python to graph some Oracle database performance information. I looked at my PythonDBAGraphs scripts, and I’m not sure it is worth modifying them for simplicity’s sake since I like what they do. But they may make people think that Python scripts to graph Oracle performance data are difficult to write. I think that if someone just wants to put together some graphs using Python, Matplotlib, and cx_Oracle, they could do it more simply than I have in my PythonDBAGraphs scripts, and it could still be useful.

Here is an example that looks at db file sequential read waits and graphs the number of waits per interval and the average wait time in microseconds:

import cx_Oracle
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

connect_string = "MYUSER/MYPASSWORD@MYDATABASE"
con = cx_Oracle.connect(connect_string)
cur = con.cursor()

query="""
select sn.END_INTERVAL_TIME,
(after.total_waits-before.total_waits) "number of waits",
(after.time_waited_micro-before.time_waited_micro)/
(after.total_waits-before.total_waits) "ave microseconds"
from 
DBA_HIST_SYSTEM_EVENT before, 
DBA_HIST_SYSTEM_EVENT after,
DBA_HIST_SNAPSHOT sn
where before.event_name='db file sequential read' and
after.event_name=before.event_name and
after.snap_id=before.snap_id+1 and
after.instance_number=1 and
before.instance_number=after.instance_number and
after.snap_id=sn.snap_id and
after.instance_number=sn.instance_number and
(after.total_waits-before.total_waits) > 0
order by after.snap_id
"""

cur.execute(query)

datetimes = []
numwaits = []
avgmicros = []
for result in cur:
    datetimes.append(result[0])
    numwaits.append(result[1])
    avgmicros.append(result[2])
        
cur.close()
con.close()

title="db file sequential read waits"

fig = plt.figure(title)
ax = plt.axes()

plt.plot(datetimes,numwaits,'r')
plt.plot(datetimes,avgmicros,'b')

# Format X axis dates

fig.autofmt_xdate()
ax.fmt_xdata = mdates.DateFormatter('%m/%d/%Y %H:%M')
datetimefmt = mdates.DateFormatter("%m/%d/%Y")
ax.xaxis.set_major_formatter(datetimefmt)

# Title and axes labels

plt.title(title)
plt.xlabel("Date and time")
plt.ylabel("num waits and average wait time")

# Legend

plt.legend(["Number of waits","Average wait time in microseconds"],
loc='upper left')

plt.show()

The graph it produces is usable without a lot of time spent formatting it in a non-standard way:

It is a short 68 line script and you just need matplotlib and cx_Oracle to run it. I’ve tested this with Python 2.
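If you want to try it yourself, the only setup is the two libraries; a minimal sketch (assuming an Oracle client is already installed for cx_Oracle to build against, and the script file name here is hypothetical):

# Install the two dependencies, then run the script
pip install cx_Oracle matplotlib
python seq_read_waits.py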

Bobby

Categories: DBA Blogs

Dashboard vs Report

Dylan's BI Notes - Thu, 2017-02-16 15:00
A dashboard is a data driven application.  It is important to think about: Who is using this?  User Persona.  Different people may have different levels of accountability and thus different interests. When will the user need to use it? How often? At what point in the business process will the information need to be available? What is the best […]
Categories: BI & Warehousing

SQL Server Failover Cluster on Linux and synchronizing uids/gids across nodes

Yann Neuhaus - Thu, 2017-02-16 13:13

In my previous blog post, I talked about SQL Server on Linux and high availability. During my tests, I used an NFS server to share disk resources between my cluster nodes, as described in the Microsoft documentation. A couple of days ago, I decided to add a fourth node (LINUX04) to my cluster infrastructure and I expected this to be easy work. But no chance: I faced a problem I had never seen before on this infrastructure.

blog 117 - 01 - sqlfcionlinux - archi

Switching over to this last node led to a failed SQL Server FCI resource. After digging into the problem, I found the root cause in the SQL Server error log, as shown below:

[mikedavem@linux04 ~]$ sudo cat /var/opt/mssql/log/errorlog
2017-02-12 18:55:15.89 Server      Microsoft SQL Server vNext (CTP1.2) - 14.0.200.24 (X64)
        Jan 10 2017 19:15:28
        Copyright (C) 2016 Microsoft Corporation. All rights reserved.
        on Linux (CentOS Linux 7 (Core))
2017-02-12 18:55:15.89 Server      UTC adjustment: 0:00
2017-02-12 18:55:15.89 Server      (c) Microsoft Corporation.
2017-02-12 18:55:15.89 Server      All rights reserved.
2017-02-12 18:55:15.89 Server      Server process ID is 4116.
2017-02-12 18:55:15.89 Server      Logging SQL Server messages in file 'C:\var\opt\mssql\log\errorlog'.
2017-02-12 18:55:15.89 Server      Registry startup parameters:
         -d C:\var\opt\mssql\data\master.mdf
         -l C:\var\opt\mssql\data\mastlog.ldf
         -e C:\var\opt\mssql\log\errorlog
2017-02-12 18:55:15.91 Server      Error: 17113, Severity: 16, State: 1.
2017-02-12 18:55:15.91 Server      Error 2(The system cannot find the file specified.) occurred while opening file 'C:\var\opt\mssql\data\master.mdf' to obtain configuration information at startup. An invalid startup option might have caused the error. Verify your startup options, and correct or remove them if necessary.
2017-02-12 18:55:15.91 Server      SQL Server shutdown has been initiated

 

Well, the error speaks for itself and it seems I am facing a file access permission issue in my case. My first reflex was to check the permissions on the corresponding NFS folder.

[mikedavem@linux04 ~]$ sudo ls -lu /var/opt/mssql/data
total 53320
drwxr-----. 2 995 993     4096 Feb 14 23:12 lost+found
-rwxr-----. 1 995 993  4194304 Feb 14 23:19 master.mdf
-rwxr-----. 1 995 993  2097152 Feb 14 23:19 mastlog.ldf
-rwxr-----. 1 995 993  8388608 Feb 14 23:19 modellog.ldf
-rwxr-----. 1 995 993  8388608 Feb 14 23:19 model.mdf
-rwxr-----. 1 995 993 13959168 Feb 14 23:19 msdbdata.mdf
-rwxr-----. 1 995 993   786432 Feb 14 23:19 msdblog.ldf
drwxr-----. 2 995 993     4096 Feb 14 23:08 sqllinuxfci
-rwxr-----. 1 995 993  8388608 Feb 14 23:19 tempdb.mdf
-rwxr-----. 1 995 993  8388608 Feb 14 23:19 templog.ldf

 

According to the output above, we may claim this is a mismatch between the uids/gids of the mssql user across the cluster nodes. At this stage, I remembered performing some tests, including creating some Linux users, before adding my fourth node, which created a mismatch for the mssql user's uid/gid. Just keep in mind that the SQL Server installation creates a mssql user by default with the next available uid/gid - in my case uid 997 and gid 995.

Let’s compare mssql user uid/gid from other existing nodes LINUX01 / LINUX02 and LINUX03:

[mikedavem@linux04 ~]$ id mssql
uid=997(mssql) gid=995(mssql) groups=995(mssql)
[root@linux04 ~]# ssh linux01 id mssql
uid=995(mssql) gid=993(mssql) groups=993(mssql)
…
[root@linux04 ~]# ssh linux02 id mssql
uid=995(mssql) gid=993(mssql) groups=993(mssql)
…
[root@linux04 ~]# ssh linux03 id mssql
uid=995(mssql) gid=993(mssql) groups=993(mssql)

 

Ok, this explains why I faced this permission issue. After investing some time to figure out how to get rid of it without changing the mssql user's uid/gid, I read some discussions about using NFSv4, which is intended to fix this uids/gids mapping issue. It seemed perfect for my case! But first, let's confirm which NFS version I'm using:

[mikedavem@linux04 ~]$ mount -v | grep nfs
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
192.168.5.14:/mnt/sql_data_nfs on /var/opt/mssql/data type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.5.14,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=192.168.5.14)

 

Well, my current configuration is not ready to leverage NFSv4 yet, and some configuration changes are required to address it.

Firstly, let's change the fstype parameter of my FS resource to nfs4, so that the NFS share is mounted with NFSv4.

[mikedavem@linux04 ~]$ sudo pcs resource show FS
 Resource: FS (class=ocf provider=heartbeat type=Filesystem)
  Attributes: device=192.168.5.14:/mnt/sql_data_nfs directory=/var/opt/mssql/data fstype=nfs
  Operations: start interval=0s timeout=60 (FS-start-interval-0s)
              stop interval=0s timeout=60 (FS-stop-interval-0s)
              monitor interval=20 timeout=40 (FS-monitor-interval-20)

[mikedavem@linux04 ~]$ sudo pcs resource update FS fstype=nfs4
[mikedavem@linux04 ~]$ sudo pcs resource restart FS
FS successfully restarted

 

Then I had to perform some updates to my idmap configuration on both sides (NFS server and client) to make the mapping work correctly. The main steps were as follows:

  • Enabling idmap with NFS4 (disabled by default in my case)
  • Changing some parameters inside /etc/idmapd.conf
  • Verifying idmap is running correctly.
[root@nfs sql_data_nfs]# echo N >  /sys/module/nfsd/parameters/nfs4_disable_idmapping
…
[root@nfs sql_data_nfs]# grep ^[^#\;] /etc/idmapd.conf
[General]
Domain = dbi-services.test
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
[Translation]
Method = static

[Static]
mssql@dbi-services.test = mssql
testp@dbi-services.test = testp
…
[root@nfs sql_data_nfs]# systemctl status nfs-idmap
. nfs-idmapd.service - NFSv4 ID-name mapping service
   Loaded: loaded (/usr/lib/systemd/system/nfs-idmapd.service; static; vendor preset: disabled)
   Active: active (running) since Wed 2017-02-15 20:29:57 CET; 1h 39min ago
  Process: 3362 ExecStart=/usr/sbin/rpc.idmapd $RPCIDMAPDARGS (code=exited, status=0/SUCCESS)
 Main PID: 3364 (rpc.idmapd)
   CGroup: /system.slice/nfs-idmapd.service
           └─3364 /usr/sbin/rpc.idmapd

 

At this point, listing the user permissions shows nobody/nobody, meaning the translation is not yet performed.

[root@linux04 ~]# ls -lu /var/opt/mssql
total 16
drwxr-----. 2 nobody nobody 4096 Feb 15 19:00 data
…

 

I had forgotten to create a corresponding mssql user on the NFS server side. Let's do it:

[root@nfs sql_data_nfs]# groupadd mssql -g 993
[root@nfs sql_data_nfs]# useradd -u 995 -g 993 mssql -M

 

After remounting the NFS share, I finally got the expected output as shown below:

[root@linux04 ~]# mount -o remount -t nfs4 192.168.5.14:/mnt/sql_data_nfs/sqllinuxfci /mnt/testp/
…
[root@linux04 ~]# ls -lu /var/opt/mssql
total 16
drwxr-----. 2 mssql mssql 4096 Feb 15 19:00 data
…
[root@linux04 ~]# ls -lu /var/opt/mssql/data/*
-rwxr-----. 1 mssql mssql  4194304 Feb 15 19:53 /var/opt/mssql/data/master.mdf
-rwxr-----. 1 mssql mssql  2097152 Feb 15 19:53 /var/opt/mssql/data/mastlog.ldf
-rwxr-----. 1 mssql mssql  8388608 Feb 15 19:53 /var/opt/mssql/data/modellog.ldf
-rwxr-----. 1 mssql mssql  8388608 Feb 15 19:53 /var/opt/mssql/data/model.mdf
-rwxr-----. 1 mssql mssql 13959168 Feb 15 19:53 /var/opt/mssql/data/msdbdata.mdf
-rwxr-----. 1 mssql mssql   786432 Feb 15 19:53 /var/opt/mssql/data/msdblog.ldf
-rwxr-----. 1 mssql mssql  8388608 Feb 15 19:53 /var/opt/mssql/data/tempdb.mdf
-rwxr-----. 1 mssql mssql  8388608 Feb 15 19:53 /var/opt/mssql/data/templog.ldf

 

This time the translation is effective, but let's perform another test by running the previous command as the mssql user:

[root@linux04 ~]# runuser -l mssql -c 'ls -lu /var/opt/mssql/data/*'
ls: cannot access /var/opt/mssql/data/*: Permission denied

The problem starts when I try to access the database files despite the correct mapping configuration. I spent some time on this and came to understand that some misconceptions persist about how NFSv4 magically fixes mismatched uids/gids. I admit the main documentation is not clear about it, but please feel free to comment if that is not the case. After digging into further pointers, I understood that NFS itself doesn't perform authentication but delegates it down to the RPC mechanism. If we take a look at RPC security, we notice it hasn't been updated to support such a mapping: it continues to use the historic authentication flavor called AUTH_SYS, which sends numeric uids/gids over the network, while the idmap translation of names only happens later. In other words, the permission checks still rely on the numeric ids, so a mismatch remains a problem. The only way to get rid of this issue would be to prefer another security flavor like RPCSEC_GSS, which includes authentication based on LDAP or Kerberos for example.
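You can confirm which security flavor a mount actually negotiated by looking at its options; a quick sketch (sec=sys is the AUTH_SYS flavor discussed above, and it is already visible in the mount output shown earlier):

# List the mount options of all NFS mounts, including the sec= flavor
nfsstat -m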

The bottom line here is that SQL Server on Linux is no exception, of course. If we want to continue using basic Unix authentication, keeping uids and gids synchronized across the cluster nodes seems to be the way to go. Using Kerberos authentication in this case? This is another story that I will try to tell in another blog post!
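If you go the synchronization route, realigning a drifted node comes down to changing the ids and fixing local file ownership; a minimal sketch (assuming uid 995 and gid 993 are still free on LINUX04, run while SQL Server is stopped):

# Stop SQL Server before touching the ids
systemctl stop mssql-server
# Realign the mssql group and user with the other nodes
groupmod -g 993 mssql
usermod -u 995 -g 993 mssql
# Re-own any local files still carrying the old numeric ids
find /var/opt/mssql -user 997 -exec chown mssql {} \;
find /var/opt/mssql -group 995 -exec chgrp mssql {} \;
systemctl start mssql-server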

Happy clustering on Linux!


This article SQL Server Failover Cluster on Linux and synchronizing uids/gids across nodes appeared first on Blog dbi services.

Comment on Index Rebuild: Magic or Voodoo? by Andrew Sayer

Oracle Riddle Blog - Thu, 2017-02-16 12:07

Is this the sort of column that gets inserted not-null as a flag that some task needs to be run against the row? The task would then set this column to null to mark it as processed. If you had a big batch to run, or the task hadn’t run in some time (looking at your 41K buffers, that must be quite a lot of rows!), then the index would have grown - setting the column back to null isn’t going to shrink it again on its own, but it will mean the index can shrink if asked to.

The same sort of thing happens with AQ tables, except the rows get deleted (although that can happen some time after processing).


Categories: DBA Blogs

Financial Reports - which tool to use? Part 2

Rittman Mead Consulting - Thu, 2017-02-16 12:00
Financials in BI Publisher

I find it interesting that BI Publisher is mostly known for the creation of pixel-perfect repeating forms (invoices, labels, checks, etc.) and its ability to burst them. To me, BI Publisher is the best kept secret for the most challenging reports known to mankind.

In my last blog - https://www.rittmanmead.com/blog/2017/02/financial-reports-which-tool-to-use-part-1/, I discussed some of the challenges of getting precisely formatted financial reports in OBIEE, as well as some pros and cons of using Essbase/HFR. Although we can work through difficult solutions and sometimes get the job done, BI Publisher is the tool that easily allows you to handle the strangest requirements out there!

If you have OBIEE, then you already have BI Publisher, so there is no need to purchase another tool. BI Publisher comes integrated with OBIEE, and they can both be used from the same interface. The transition between BI Publisher and OBIEE is often seamless to the user, so you don’t need to have concerns over training report consumers in another tool, or even transitioning to another url.

The BIP version that comes embedded with OBIEE 12c comes loaded with many more useful features like encryption and delivering documents to Oracle Document Cloud Service. Check out the detailed new features here: http://www.oracle.com/technetwork/middleware/bi-publisher/new-features-guide-for-12-2-1-1-3074557.pdf

In BI Publisher, you can leverage data from flat files, from different databases, from an Essbase cube, from the OBIEE RPD, from one (or multiple) OBIEE analyses, from web services and more:

[Image: data sources available to BI Publisher]

So, if you already have very complex OBIEE analyses that you could not format properly, you can use these analyses, and all the logic in them, as sources for your perfectly formatted BI Publisher reports.

Every BI Publisher report consists of three main components:

  1. Data Model - data source that you will use across one or more reports

  2. Layout(s) - which will define how your data is presented

  3. Properties - which are related to how it generates, displays and more

You start a BI Publisher project by creating a data model that contains the different data sets that you would like to use on your report (or across multiple reports). These data sets, which reside inside of your data model, can be of the same source or can come from multiple sources and formats. If you regularly use OBIEE, you can think of a data model as the metadata for one or more reports. It is like a very small, but extremely flexible and powerful RPD.


Inside the data model you can connect your data sets using bind variables (which creates a hierarchical relationship between data sets), or you can leave them completely disconnected. You can also connect some of your data sets while leaving others disconnected.

The most impressive component of this tool is that it will allow you to do math from the results of disconnected data sets, without requiring ETL behind the scenes. This may be one of the requirements of a very complex financial report, and one that is very difficult to accomplish with most tools. The data model can extract and transform data within a data set, or extract only, so that it can later be transformed during your report template design!

For example, within a data set, you can create new columns to suit most requirements - they can be filtered, concatenated, or have mathematical functions applied to them, if they come from the same data source.


If they do not come from the same source, you can transform your data using middle tier systems, such as Microsoft Word during your template creation. You can perform math and other functions to any result that comes from any of your data sets using an RTF template, for example.

[Image: example report]

The example above was mentioned in Part 1 of this blog. It was created using BI Publisher and represents what I would call a "challenging report" to get done in OBIEE. The data model in this example consisted of several OBIEE analyses and their results were added/subtracted/multiplied as needed in each cell.

[Image: second example report]

This second example was another easy transition into BI Publisher: the entire report contained 10 pages that were formatted entirely differently, one from the other. Totals from all pages needed to be added in some specific cells. Better yet, the user entered some measures at the prompt, and these measures needed to be accounted for in every sub-total and grand total. You may be asking: why prompt for a measure? Very good question indeed. In this case, there were very few measures coming from a disconnected system. They changed daily, and the preferred way for my client to deal with them was to enter them at the prompt.

So, do you always have to add apples to apples? Not necessarily! Adding apples and bananas may be meaningful to you.


And you can add what is meaningful with BI Publisher!

For example, here is a sample data model using sources from Excel, OBIEE and a database. As you see, two of these data sets have been joined, while the other two are disconnected:

[Image: sample data model with joined and disconnected data sets]

A data model such as this one would allow you to issue simultaneous queries across these heterogeneous sources and combine their results in the report template. Meaning, you can add anything that you would like in a single cell - even that measure coming from the prompt! It goes without saying that you should have a clear purpose and logic behind such a machination.

Once your data model is complete: your data sets are in place, you have created the relationships within them (where applicable), you created custom columns, created your parameters and filters, then you generate some sample data (XML) and choose how you will create your actual report.

As I mentioned, there are additional functionalities that may be added when creating the report, depending on the format that you choose for your template:

[Image: report template format options]

One very simple option is to choose the online editor, which has a bit more limited formatting capability, but will allow you to interact with your results online.

In my experience, if I had to cross the bridge away from OBIEE and into BI Publisher, it is because I needed to do a lot of customization within my templates. For those customizations, I found that working with RTF templates gave me all the additional power that I could possibly be missing everywhere else. Even when my financial report had to be read by a machine, BI Publisher/RTF was able to handle it.

The power of the BI Publisher data model combined with the unlimited flexibility of the RTF templates was finally the answer to eliminate the worst excel monsters. With these two, you can recreate the most complex reports, and do it just ONCE - not every month. You can use your existing format - that you either love, or are forced to use for some reason - and reuse it within the RTF. Inside of each RTF cell, you define (once!) what that cell is supposed to be. That specific cell, and all others, will be tested and validated to produce accurate results every month.

Once this work is done, you are done forever. Or well, at least until the requirements change… So, if you are battling with any one of these monsters on a monthly basis, I highly encourage you to take a step forward and give BI Publisher a try. Once you are done with the development of your new report, you may find that you have hours per month back in your hands. Over time, many more hours than what you spent to create the report. Time worth spending.


Categories: BI & Warehousing
