
DBA Blogs

FMW & PaaS Webcasts Series For Partners in November

Are you able to attend Oracle OpenWorld sessions? Even if you are, you might not have enough time to visit all the sessions. Join us for the FY16 Oracle Cloud Platform and FMW Webcast Series...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Tuesday OakTable World – brain fried!

Bobby Durrett's DBA Blog - Tue, 2015-10-27 19:08

Instead of going to the normal OpenWorld events today I went to OakTable World.  Now my brain is fried from information overload. :)

It started at 8 am with a nice talk about hash joins and Bloom filters.  Toon Koppelaars had some nice moving graphics showing how Bloom filters work.  I’ve studied Bloom filters before, but I never understood them with the clarity I had after this talk.

Then I did my talk at 9 am.  The best part for me was that we had a number of questions.  I ended up skipping several slides because of time but I felt like we helped people get what they wanted out of it by having the questions and discussion.  In retrospect my talk could have used more of an introduction to Delphix itself for this audience but I think we covered the essentials in the end.

Next Kellyn Pot’Vin-Gorman did more of a community service type of talk which was a change of pace.  She had a Raspberry Pi project which was a stuffed bear that would take your picture and post it on Twitter.  It was an example of the type of project that kids could do to get them interested in computer technology.

My brain began to turn to mush with Marco Gralike’s talk on XML and the 12c In-Memory Column Store.  I’m sure I absorbed something, but I’m not that familiar with Oracle’s XML features.  Still, at least I know that there are in-memory features for XML, which I can file away for the future.

Several amusing ten-minute TED-style talks followed.  Most notable to me was Jonathan Lewis’ talk about how virtual columns, and constraints on virtual columns, can improve cardinality estimates and thus query performance.
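
To make that idea concrete, here is a minimal sketch as I understood it (the table and column names are hypothetical, not from the talk): a virtual column turns an expression into something the optimizer can keep statistics on, so predicates on that expression get better cardinality estimates.

-- Expose an expression as a virtual column (nothing is stored)
ALTER TABLE orders ADD (
  order_year NUMBER GENERATED ALWAYS AS (EXTRACT(YEAR FROM order_date)) VIRTUAL
);

-- Normal stats gathering now covers the virtual column too
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'ORDERS');

-- The optimizer can match this predicate to the virtual column's statistics
SELECT COUNT(*) FROM orders WHERE EXTRACT(YEAR FROM order_date) = 2015;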

Cary Millsap talked about a variety of things, including material from his book.  He shared how he and Jeff Holt were hacking into what I assume is the C standard library to diagnose database performance issues, which was pretty techy.

Gwen Shapira’s talk on Kafka was a departure from the Oracle database topics, but it was interesting to hear about this sort of queuing or logging service.  It reminds me in some ways of the GGS and Tibco products that we use at work, but I’m sure it has different features.

Alex Gorbachev gave a high level overview of Internet of Things architectures.  This boiled down to how to connect many possibly low power devices to something that can gather the information and use it in many ways.

Lastly, we went back to the Oracle database and my brain slowly ground to a halt listening to Chris Antognini’s talk on Adaptive Dynamic Sampling.  I had studied this for my OCP but it has slowly leaked out of my brain and by 4 pm I wasn’t 100% efficient.  But, I got a few ideas about things that I can adjust when tuning this feature.

Anyway, that’s my brief overview.  I’m back to normal OpenWorld tomorrow, but it was all OakTable today.  It was a good experience and I appreciated the chance to speak as well as to listen.

Bobby

Categories: DBA Blogs

Oracle Fusion Middleware 12c (12.2.1.0.0) now Generally Available

Oracle Fusion Middleware is the leading business innovation platform for the enterprise and the cloud. It enables enterprises to create and run agile, intelligent business applications while...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Pythian More Innovative Than GE?

Pythian Group - Tue, 2015-10-27 08:41

 

GE is known as an innovative company, and according to Forbes it is one of the world’s most valuable brands. Late this summer they made headlines with the announcement that they were retiring performance management reviews.

No big deal? Think again. GE built its reputation on Jack Welch’s rigid performance management programs, cutting the bottom 10 percent of underperformers each year.

First, I applaud GE for the bold move. Any company-wide change in an organization the size of GE’s would be challenging. Second, what took you so long? In 2011, Pythian went against the crowd and ditched the prevailing zeitgeist by implementing a performance feedback program.

At the time, with approximately one hundred employees and a brand-new HR team, we were just beginning to establish our HR programs. Like many small companies, we did not have a structured performance management program. We were growing rapidly and identified a need to provide employees with useful feedback and career discussions.

Ratings, rankings and bell curves, “Oh My!” We didn’t even consider them. We designed a program that removed standard performance management elements like numerical rankings. The program was created to facilitate formal, organized feedback discussions, in a comfortable environment, between an employee and their manager. The idea was to base the discussion on each team member’s career aspirations and journey. The first steps of that career journey begin during new hire orientation. Every six months thereafter, we schedule time to sit down and have focused career feedback discussions. During these discussions, goals are established, recent successes reviewed, progress on goals updated, and challenges chronicled along with suggestions for overcoming them. Career aspirations and plans for professional development are also discussed and established.

The feedback program is constantly evolving and improving to meet the changing needs and scale of the company.  Of course, we listen to employee feedback about the program and implement changes after reviewing the suggestions. Change can be difficult for people: initially, employees more accustomed to traditional performance management were hesitant, but they quickly responded to the easy and relaxed format of our program.

Regular feedback is key. We encourage two-way feedback: up and/or down, across the organization, in real time, all the time. We are always working to improve our programs, processes and ideas, e.g. “upping our game” as a company.  We believe it’s important to build a culture of constant feedback. A culture of two-way feedback built on trust and transparency is a team effort by all members of the Pythian team.

During orientation I enjoy encouraging and empowering all new employees with permission to ask their leaders for feedback at any time. I encourage them not to wait to share what’s going well, and to disclose where they need extra support or further direction. In my own team meetings I ask what I could be doing more of and less of, and how I can be a better communicator and leader. I create a safe environment for the team to provide feedback so we can collectively improve.

It will be interesting to see if GE’s announcement encourages more companies to re-evaluate their approach to Performance Management systems and encourage more effective dialogue and feedback discussions with their employees.

Categories: DBA Blogs

SQL Server Row Level Security

Pythian Group - Tue, 2015-10-27 08:27

 

Row Level Security (RLS) has been implemented in SQL Server 2016 for both on-premise and v12 of Azure instances.

The problem this solves: a company with multiple applications accessing sensitive data in one or more tables.

How do you ensure the data being read or written is only the data that a login is authorized to see? In the past, this has been accomplished with a complicated series of views or functions, and there was no guarantee a bug or malicious user wouldn’t be able to bypass those measures. With Row Level Security, the filtering applies no matter what privileges you have (including sysadmin) or how you try to access the data.

How it Works

Row Level Security has two options: You can either FILTER the rows or BLOCK the operation entirely. The BLOCK functionality is not yet implemented in CTP 2.4, but the FILTER logic works like a charm.

The steps are very simple:
1 – Figure out what you’re going to associate with your users and data. You will need to create some link between your data and a login’s or user’s properties, something that will allow the engine to say “this row is OK for this user.”

2 – Create a Function defining the relationship between users and the data.

3 – Create a Security Policy for the function and table(s). You can use the same policy on multiple tables or views.

Once the Security Policy has been created, every query or DML operation on the tables or views you’re filtering will automatically have the function applied to the WHERE or HAVING clause. You can see the filter working by reviewing the execution plan as well. SQL Server will generate the Plan Hash value with the filtering logic in place. This allows Plan Re-Use with Row Level Security, and it’s a big improvement over Oracle’s implementation which doesn’t do this (as of Oracle 10g, the last time I worked with it) :-). See the bottom of this post for an example of the same query with RLS turned on & off.

What is particularly nice about these policies is that the data is filtered regardless of the user’s privileges. A sysadmin or other superuser who disables the policy is just an Audit log review away from having to explain what they were doing.
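
For reference, suspending a policy is one very visible piece of DDL, shown here using the policy name created later in this walkthrough, which is exactly why it is easy to catch in an audit:

-- Disabling the filter is a single, auditable statement...
ALTER SECURITY POLICY sales.RestrictCreditCardToBusinessEntity WITH (STATE = OFF);

-- ...and so is re-enabling it.
ALTER SECURITY POLICY sales.RestrictCreditCardToBusinessEntity WITH (STATE = ON);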

Row Level Security Walk Through

This is an example of setting up an RLS system for the Credit Card data in the AdventureWorks database. After this is completed, only users associated with a Business Entity in the Person.Person table will be able to see or update any credit card information, and the data they can touch will be limited to just their business.

Step 1: Add user_name column to Person.Person table

In this example, I’m associating the user_name() function’s value for each login with the BusinessEntityID. Of course, you can use any value you want, as long as you can access it from a SELECT statement in a Schema-Bound function. This means many system tables are off-limits.


USE AdventureWorks
GO

ALTER TABLE person.person
ADD UserName nvarchar(128) NULL;

-- Associate some person.person rows with a login too.
UPDATE person.person
SET UserName = 'Business1'
WHERE
    BusinessEntityID IN (301, 303, 305);

Step 2: Create Users to Test

I’m just creating a login named Business1 to demonstrate this. Note that the user has db_owner in AdventureWorks, yet the filtering still applies.


USE [master]
GO
CREATE LOGIN [business1] WITH PASSWORD=N'******', DEFAULT_DATABASE=[master], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
GO
USE [AdventureWorks]
GO
CREATE USER [business1] FOR LOGIN [business1]
GO
ALTER ROLE [db_owner] ADD MEMBER [business1]
GO

Step 3: Create Function to Filter Data

This function finds all credit cards for the user_name() running the query. Any values not returned by this function will be inaccessible to this user.


CREATE FUNCTION [Sales].[fn_FindBusinessCreditCard] (@CreditCardID INT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT
        1 AS result
    FROM
        person.person p INNER JOIN
        sales.PersonCreditCard pcc ON p.BusinessEntityID = pcc.BusinessEntityID INNER JOIN
        sales.CreditCard cc ON pcc.CreditCardID = cc.CreditCardID
    WHERE
        cc.CreditCardID = @CreditCardID AND
        p.UserName = user_name();

Step 4: Create a Security Policy

This creates a security policy on the Sales.CreditCard table.


CREATE SECURITY POLICY sales.RestrictCreditCardToBusinessEntity
ADD FILTER PREDICATE sales.fn_FindBusinessCreditCard(CreditCardID)
ON Sales.CreditCard
WITH (STATE = ON);

Step 5: Test Away

For all of the following examples, you should be logged in as the Business1 user, who can only see 3 credit cards. In reality, there are 19,118 rows in that table.


-- Will return three records
SELECT
    pcc.*, cc.*
FROM
    Sales.PersonCreditCard pcc INNER JOIN
    Sales.CreditCard cc ON cc.CreditCardID = pcc.CreditCardID
ORDER BY
    pcc.BusinessEntityID

BEGIN TRAN
-- Will only update three records, even without a WHERE clause
UPDATE Sales.CreditCard
SET
    ExpYear = '2020'
-- Undo the test update
ROLLBACK
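
If you’d rather not log in separately to test, impersonation is a quick way to sketch the same check from your own session (EXECUTE AS switches user_name() to business1, so the policy filters accordingly):

-- Run the checks in the business1 user's security context, then revert.
EXECUTE AS USER = 'business1';
SELECT COUNT(*) AS visible_rows FROM Sales.CreditCard; -- returns 3
REVERT;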

These are the execution plans for the above query with Row Level Security turned on and off:

Turned On: (and missing an index…)

[Screenshot: execution plan with Row Level Security turned on]

Turned Off:

[Screenshot: execution plan with Row Level Security turned off]

 

Discover more about our expertise in SQL Server.

Categories: DBA Blogs

A Cassandra Consistency Use Case

Pythian Group - Tue, 2015-10-27 08:07

 

I recently completed a project where I worked with a company using Cassandra to keep metadata about objects stored in an object store. The application keeps track of individual objects as rows within a partition based on user id. To save space, there is also a mechanism to track duplicate references to the objects in another table. Object writes take place as background activity, and the time it takes to complete those writes is invisible to the application’s end users. The time it takes to retrieve an object, though, is very visible to the end user. The keyspace was defined with NetworkTopologyStrategy across two data centers (actual data centers, about 50 ms apart) with a replication factor of 3 in both.

Initially the application was set up to use consistency ONE for both writes and reads. This seemed to be working okay until we started failure testing, at which point objects would come up missing due to the delay in pushing hinted handoffs from one node to another. A simple solution was to make all writes and reads LOCAL_QUORUM. Doing so resolved pretty much all of the testing errors, but at a much increased latency on both reads and writes, about three times longer than with consistency ONE. Even so, the latencies were deemed acceptable since they were still well under the anticipated network latencies outside of the data centers, which is what the users would be seeing.

Could we have done better than that though?

The writes are a background activity not visible to the end user, so the increased write latency is probably reasonable there. The read latency is visible to the user. There is an option which guarantees finding the stored object references while still keeping the latency to a minimum. This is what I propose: set the default read consistency back to ONE, and most of the time a read to Cassandra will find the object reference, as was clear in the initial testing. But if a read returns no object reference, issue a second read using LOCAL_QUORUM. This way most (more than 99%) of all reads are satisfied at the much lower latency of consistency ONE, with only the occasional need for a second read. This can be extended further to a full QUORUM read if the LOCAL_QUORUM read also finds nothing.
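
As a rough sketch of the pattern, here it is as cqlsh statements against a hypothetical object_refs table (a real application would issue the same two queries through the driver, setting the consistency level per statement):

-- First, the cheap attempt at consistency ONE.
CONSISTENCY ONE;
SELECT location FROM object_refs WHERE user_id = 42 AND object_id = 1001;

-- Only if no row came back: retry at LOCAL_QUORUM,
-- escalating to full QUORUM if that also returns nothing.
CONSISTENCY LOCAL_QUORUM;
SELECT location FROM object_refs WHERE user_id = 42 AND object_id = 1001;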

It is important to note that this approach only works if there are no row versions, i.e. a row only ever exists or does not exist. It breaks down if a row can have different versions over time, as it would if rows were updated rather than only inserted and later deleted. It is also important to note that it’s possible to find an already-deleted row this way. For this use case, neither qualification is an issue.

 

Discover more about our expertise in Cassandra.

Categories: DBA Blogs

Running SQL Server Jobs Against an Azure SQL Database

Pythian Group - Tue, 2015-10-27 07:50

 

I recently had a client ask how to run SQL Server jobs against an Azure SQL Database. Because SQL DB doesn’t have a SQL Agent like most other versions of SQL Server, it isn’t always obvious how to implement them. Fortunately, we have several options, both in Azure and from a “normal” instance.

Options to Run Jobs

The first three options require a non-Azure version of SQL Server to be running and to have connectivity to the Azure SQL DB.

  1. Linked Server
  2. Maintenance Plan w/new connection
  3. PowerShell Script
  4. Azure Services
  5. Elastic Database Jobs

As mentioned by Warner Chaves in the comments, the new Elastic Database Jobs, currently (Nov 2015) in preview, might also work well.

Linked Server

To create a linked server to an Azure SQL DB, you just need to get the ODBC connection string from the Azure portal and use it as the “Provider String” when defining the linked server.

[Screenshot: the ODBC connection string in the Azure portal]
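
If you prefer to script it, a rough sketch using sp_addlinkedserver with the MSDASQL (OLE DB over ODBC) provider follows; the server, database, and credential values are placeholders, and @provstr is where the portal’s ODBC connection string goes:

-- Linked server to an Azure SQL Database via the ODBC provider.
EXEC sp_addlinkedserver
    @server = N'AZURESQLDB',
    @srvproduct = N'',
    @provider = N'MSDASQL',
    @provstr = N'Driver={SQL Server Native Client 11.0};Server=tcp:yourserver.database.windows.net,1433;Database=YourDb;Uid=YourUser;Pwd=YourPassword;Encrypt=yes;';

-- Quick test through the new linked server.
SELECT * FROM OPENQUERY(AZURESQLDB, 'SELECT TOP (1) name FROM sys.objects');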

 

Maintenance Plan w/New Connection

Simply create your maintenance plan with all the required logic for your job, then click the “Manage Connections” menu at the top of the screen and define the new connection to be used for this plan.

[Screenshot: defining the new connection in the maintenance plan]

 

Powershell Script

You can use the PowerShell Invoke-Sqlcmd cmdlet and run it as a PowerShell job step.


Invoke-Sqlcmd -Query "SELECT GETDATE() AS TimeOfQuery;" -ServerInstance "yourserver.database.windows.net" -Database "YourDatabase" -Username "UserName" -Password "Password"

Azure Services

These change, it seems, almost daily, so I’ve listed some of the common ways this is currently done.

  1. The Azure Scheduler used in conjunction with mobile services.
  2. Azure Automation

 

Discover more about our expertise in Cloud and SQL Server.

Categories: DBA Blogs

Monday OpenWorld

Bobby Durrett's DBA Blog - Mon, 2015-10-26 22:10

It was a good first full day at Oracle OpenWorld.  It started with the keynote led by Oracle’s CEO.  Of course he was very upbeat about Oracle’s products.  But, I found his comments about the economy and the way it affects IT spending more interesting than Oracle’s cloud products.  My translation is that companies have a lot of older systems that they can’t easily retire or upgrade but they want to quickly add all this new functionality.  I see that in my company so it rings true.  I don’t really believe that the cloud is the long-term answer but it makes me wonder what the real answer is.  I always come back to prioritization.  I think prioritizing spending is more important than moving things to the cloud.  You can’t afford to do everything so you have to make tough choices about what to spend your IT dollars on. That’s my opinion at least.

Next I went to a session on Coherence.  I kind of went out of a sense of obligation since our company owns the product.  But, it was a surprisingly good session.  It had a lot in it about Java 8 and the new features in it for parallel processing.  It made me want to dust off my Java skills and generally think about parallel processing in the Oracle database and how it relates to that in Hadoop, Scala, etc.

I went to two sessions on analytics, again out of a sense that I needed to learn about analytics and not due to any enthusiasm about it.  The first session was really interesting, but the 3:30 session almost literally put me to sleep.  The first session reminded me of some of the things in Craig Shallahamer’s forecasting book such as making a model of a system and doing validation of the model.  Analytics seems to follow a similar process. But, by the late afternoon a non-technical session on analytics in banking nearly knocked me out.

Wedged between my two analytics sessions I went to a very cool In-Memory Option boot camp.  I have not had the time or taken the time to look at the In-Memory Option and I got a nice fast hour-long exposure to it.  I don’t know if the other people in the class were lost because there were a lot of explain plans flying by but it is the type of stuff I’m interested in so it was nice that it was so technical.  The In-Memory Option reminded me a lot of Exadata smart scans and hybrid columnar compression.

Strangely enough multiple speakers pronounced columnar differently than I have done so I guess I will have to change.  They emphasize the second syllable but I usually emphasize the first.

I also snuck into the OakTable World presentation by Tanel Poder.  It had to do with querying Hadoop clusters from Oracle databases using ODBC/JDBC.  Pretty cool.  I also got to scope out the venue for my talk tomorrow in the process.

That’s enough for today.  I got a lot of good information.  I’ve slotted tomorrow for OakTable world so it will be interesting to see what all I can learn there.

Bobby

Categories: DBA Blogs

EMEA Partners: Oracle Project Portfolio Management Cloud Implementation Workshops

Oracle will organize several 5-day workshops between November 2015 and May 2016. We are pleased to invite your ERP Practice to a 5-day hands-on Oracle Project Portfolio Management...

We share our skills to maximize your revenue!
Categories: DBA Blogs

EMEA Partners: Oracle Procurement Cloud Implementation Workshops

Oracle will organize several 5-day workshops between November 2015 and May 2016. We are pleased to invite your ERP Practice to a 5-day hands-on Oracle Procurement Cloud Implementation...

We share our skills to maximize your revenue!
Categories: DBA Blogs

EMEA Partners: Oracle Financials Cloud Implementation Workshops

Oracle will organize several 5-day workshops between November 2015 and May 2016. We are pleased to invite your Financials Cloud Practice to a 5-day hands-on Oracle Financials Cloud...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Log Buffer #446: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-10-23 15:25

This Log Buffer Edition covers some more Oracle, SQL Server and MySQL blog posts from across the world.

Oracle:

  • Should I monitor the MGMTDB database?
  • Connecting MAF Applications to Mobile Cloud Service
  • JRE 6, 7, and 8 now certified with EBS 11i, 12.1, and 12.2
  • The database writer copies data blocks from the buffer cache onto disk. The db_writer_processes initialization parameter determines how many processes will do this task. Its default value is 1 or cpu_count / 8, whichever is greater. I found an Oracle 9 database on a Tru64 server with cpu_count set to 1
  • How To Install Latest Verisign G5 Root Certificates

SQL Server:

  • Dynamic Pivot, Passing Temp Table Variables to Dynamic SQL
  • Oracle offers a results cache in the database (from 11.2 onwards) and in PL/SQL (again, from 11.2 onwards) which can greatly reduce execution time of repeated statements when insert/update/delete activity is not heavy.
  • Maintaining a grouped running MAX (or MIN)
  • Harnessing SQL Server Metadata- Disabling and Rebuilding Indexes
  • Optimizing Your Query Plans with the SQL Server 2014 Cardinality Estimator

MySQL:

  • ClusterControl Tips & Tricks: Updating your MySQL Configuration
  • Become a MySQL DBA blog series – Optimizer Hints for faster query execution
  • Loading Arbitrary XML documents into MySQL tables with p_load_xml
  • Use MySQL and Perl to automatically find, follow and unfollow twitter users
  • Great Advice on Monitoring Amazon RDS For MySQL From Datadog

Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

Categories: DBA Blogs

Reviewing Delphix blog posts before OpenWorld

Bobby Durrett's DBA Blog - Thu, 2015-10-22 10:13

I just finished reviewing my Delphix blog posts in preparation for the talks that I will give during OpenWorld. I find myself referring back to my blog to help me remember what I have done in the past. I was thinking that I needed to jog my memory so that I could answer questions when I give my talks.  Some of the Delphix topics that I am speaking about occurred a year or two ago so my memories are fuzzy.  But, reading my own posts brought a lot of the details back.

I thought I would list the Delphix posts, even though people can find them just by searching my web site. If you are coming to my talk and want more details, or if you just want all the Delphix-related information on my site, I have collected it here.

http://www.bobbydurrettdba.com/2013/02/26/delphix-first-month/

http://www.bobbydurrettdba.com/2013/09/17/delphix-direct-io-and-direct-path-reads/

http://www.bobbydurrettdba.com/2014/01/08/delphix-upgrade-in-twenty-minutes/

http://www.bobbydurrettdba.com/2014/07/08/used-delphix-to-quickly-recover-ten-production-tables/

http://www.bobbydurrettdba.com/2015/03/11/delphix-user-group-presentation/

http://www.bobbydurrettdba.com/2015/10/07/my-delphix-presentation-at-oaktable-world/

I have two talks scheduled during OpenWorld next week.

The first talk is at OakTable World at 9 am on Tuesday.

The second talk is at Delphix Sync between 3 and 4 pm Thursday; it is a ten-minute “lightning talk” version of the first.

I hope to see you at one of the talks and if not I hope this post is valuable to you.

Bobby

Categories: DBA Blogs

Speaking at SANGAM 2015

Oracle in Action - Wed, 2015-10-21 00:27


The AIOUG meet “SANGAM – Meeting of Minds” is the largest independent Oracle event in India, organized annually in the month of November. This year’s Sangam (Sangam15, the 7th Annual Oracle Users Group Conference) will be held at the Hyderabad International Convention Centre, Hyderabad, on Saturday 21st and Sunday 22nd November 2015.

I will be speaking at this year’s SANGAM about an Oracle Database 12c new feature: Highly Available NFS (HANFS) over ACFS.

HANFS over ACFS enables highly available NFS servers to be configured using Oracle ACFS clusters. The NFS exports are exposed through Highly Available VIPs (HAVIPs), and this allows Oracle’s Clusterware agents to ensure that HAVIPs and NFS exports are always available. If the node hosting the export(s) fails, the corresponding HAVIP and hence its corresponding NFS export(s) will automatically fail over to one of the surviving nodes so that the NFS client continues to receive uninterrupted service of NFS exported paths.
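
To give a flavor of the setup, the HAVIP and export are registered as Clusterware resources with srvctl; a rough sketch with hypothetical names, addresses, and paths (see the 12c documentation for the full option list):

srvctl add havip -id havip1 -address hanfs-vip.example.com
srvctl add exportfs -name export1 -id havip1 -path /mnt/acfs1
srvctl start havip -id havip1 -node node1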

My session will be held on Saturday, November 21, 2015, from 5:10 pm to 6:00 pm in Hall 5 (Ground Floor).

Hope to meet you there!!

Categories: DBA Blogs

Richard Foote: Upcoming Presentation Events (David Live)

Richard Foote - Mon, 2015-10-19 18:28
I’ll be doing the rounds in Australia/NZ in the coming weeks so plenty of opportunity to catch-up and see me in action :) I’ll be doing a “National Tour” of my Let’s Talk Oracle Database sessions that I’ve been running locally in Canberra for a number of years. All events have limited places available so […]
Categories: DBA Blogs

Analyze database activity using v$log_history

DBA Scripts and Articles - Wed, 2015-09-09 12:26

The v$log_history view contains important information on how an application’s users use the database; this view can help you identify the periods with the most activity in the database. v$log_history queries: you can adapt the query to your needs, you just have to change the way you format the date to be able to drill down to the … Continue reading Analyze database activity using v$log_history →
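
As a hypothetical example of such a query (not the one from the original post), this counts log switches per hour; changing the TO_CHAR format mask changes the drill-down granularity:

-- Log switches per hour; a coarser or finer format mask changes the grain.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*) AS log_switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY hour;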


Categories: DBA Blogs

John King in Cleveland CTOWN for September meeting

Grumpy old DBA - Mon, 2015-08-17 04:50
For our quarterly meeting on Friday afternoon, September 18, the Northeast Ohio Oracle Users Group is lucky to have a great speaker, Oracle ACE Director John King, presenting.  It's the usual networking and free lunch beginning at noon at the Rockside Oracle offices, followed by a quick business meeting and presentations starting at 1 pm.

Full details on NEOOUG Sept 2015 presentations

At our November meeting we have Daniel Morgan, woo hoo!
Categories: DBA Blogs