The day started at 05:00. I lay in the bath for 20 minutes in denial, wondering how I would manage to stay awake for the day. I’ve been ill for ages, so I felt like I was running on empty anyway. Once I had managed to drag myself out of the bath and get dressed, I picked up my laptop and took a taxi to the airport.
The taxi to the airport was smooth enough. I was already checked in and had no bags to drop off, so I went straight to security and was greeted by the biggest queue I had ever seen at Birmingham airport. To all those people that laugh at me getting to the airport 2+ hours before a short flight like this I say, “Better to be safe than sorry!”
Despite the massive queue for security, populated by people who didn’t understand commands like, “Belts off!”, and, “All liquids out of your bags!”, the queue moved quite quickly and the departure area felt relatively quiet. I grabbed some food and logged into work to find one of the DW loads had failed. I cleaned stuff up and reset it. As I was boarding I passed one of my colleagues who was off to Glasgow for a product user group. I shouted across that his DW load had failed, then turned the corner to board before he could quiz me further.
The ChavAir flight was fine. They are a basic bitch airline, but you can’t really complain when you are paying £27 for a return flight. I overheard three people saying they paid £20 return. I was robbed.
When I arrived in Dublin, I got the AirLink Express into the city, which was 10 Euros for a return ticket and dropped me off about 100 yards from the Gresham Hotel. Bonus!
After signing in and saying hello to a couple of people, including the wife, it was off to the first session. My timetable for the day was:
- Marcin Przepiorowski with “Looking for Performance Issue in Oracle SE. Check What OraSASH Can do for You”. I’m lucky enough to have Oracle EE with the Diagnostics and Tuning pack for all the databases I work with, so I get to use the real ASH and the performance pages in Cloud Control. Even so, it’s worth keeping your eye on what others are doing, as you never know when you will need it!
- Carl Dudley with “SQL Tips, Techniques and Traps”. I really enjoyed this session. It moved at a quick pace, with lots of small but interesting points. I’m sure everyone picked up something they had not heard before. I know I did.
- Oren Nakdimon with “Write Less (Code) with More (Oracle 12c New Features)”. This was another quick-paced session made up of lots of little pointers. As I watched it I found myself thinking, “Have I written about that?”, or, “Did I include that in my article?”. There were certainly a few things that had passed me by during my time with 12c, so I made a note of them and will be revisiting a couple of articles. It was a really neat session!
- Keith Laker with “SQL Pattern Matching Deep Dive”. I’ve written some stuff on pattern matching, but this was another level. After watching this session I know enough to know I don’t know enough. Definitely a subject I need to go back and revisit. I’m always a little nervous of deep dive sessions because often they don’t deserve that title. I think this one did!
- Me with “Analytic Functions: An Oracle Developer’s Best Friend”. This was in the same room as Keith’s talk and had most of the same audience. I started by saying something to the tune of, if you understood the stuff from the previous session, you probably don’t need to watch this one. My analytics session is quite different to ones I’ve seen others do. It is an entry level session, where I repeatedly reference non-analytics stuff to try and simplify the concepts and syntax. If you have done lots of analytics it’s probably not for you, but I always get some comments from people saying they use analytics, but didn’t realise what some of the stuff did.
- Me with “Oracle Database Consolidation: It’s Not All About Oracle Database 12c!”. This is an overview session where I discuss the methods of database consolidation I use along with their pros and cons. I don’t dislike any individual method of database consolidation, but I do react harshly to anyone who claims one method is superior. There is no one-size-fits-all solution to database consolidation and anyone that tells you there is is a bloody liar! You will always need a combination of approaches and this is very much my message here. It’s a light and fluffy session, which probably fits quite well towards the end of the day when everyone is fried.
- Cloud Q&A Panel Session. I mostly turned up to support the wife, but it was actually quite relevant to my current company, who are in the procurement phase of a replacement for many of our core business systems, with “the cloud” being an option. Added to that, I’ve been doing POCs of Azure, AWS and Oracle Cloud recently for IaaS and PaaS.
From there it was a quick chat with some folks at the social event, then the AirLink Express back to Dublin Airport.
The flight back was fine, but I was starting to feel really worse for wear. At one point I thought I was going to puke, but I managed not to. I was imagining everyone else thinking I had been for a day on the lash in Dublin. We landed early and I got a taxi home and the day was done!
Big thanks to OUG Ireland for inviting me to the day. Sorry I couldn’t stay for the second day! Thanks to the other speakers and attendees, who are collectively the most important people there! Thanks to the Oracle ACE Program for letting me continue to fly the flag!
For anyone that is looking for a new conference to try out, you should give OUG Ireland 2017 a go. Just so you know, here is the breakdown of the travel costs for my day trip:
- Taxi to airport: £25
- Return flight between Birmingham and Dublin: £27
- Return trip on AirLink Express into the city: 10 Euros
- Taxi home: £35
- Total: < £100
The costs have been similar for the last three years and it’s certainly something I’m happy to pay out of my own pocket!
See you all next year!
Tim…

OUG Ireland 2016 – Summary was first posted on March 8, 2016 at 3:05 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.
Years ago, I got burned by an “April Fools” joke published by Steve Jones on sqlservercentral.com. He republished it as one of his favorites here.
Naturally, I had to rub my eyes today when I read that Microsoft announced that SQL Server 2016 would be coming to Linux.
There were mixed reactions on the internal SQL Server teams. I was afraid to respond to the thread, fearing I would get burned again. I quickly checked the date to confirm that the article hadn’t been resurrected.
One of the sentiments expressed in our internal chatter was that some of the DBAs love “Satya’s Microsoft” and I agree. I like what they’re doing, but I am very skeptical about the move to port SQL Server onto Linux.
I doubt this will enable new markets and new customer bases. I don’t think there are any large organizations who will suddenly decide to adopt the product because it will run on Linux.
One theory was that this move was to attract new developers who want to deploy multi-platform tech stacks. That could certainly be right, but I think PaaS already satisfies that need, along with the needs of many startups.
Other articles I read theorized it was a move towards SQL Server on Linux-powered containers.
I’m wondering what this will mean for future features. Will PowerShell or .NET be ported to Linux? What will change in the security model? Will clustering be available? Will a more RAC-like feature be available?
These are very interesting times and while this wasn’t a move that I was pining for, or even expected, I am excited to see where this is going.
I “applied” to test one of the early versions, and you can too.
What do you think? Are you excited about running SQL Server on Linux? When would you choose Linux over Windows?
One of the key objects in the Oracle Utilities Application Framework is the To Do object. It is one of the most commonly used objects, and customers and partners often ask me about techniques they can use to manage the generated To Do records efficiently. Part of the issue with the To Do object is that it is sometimes used incorrectly, causing long-term implementation issues.
Before I outline some advice on how to optimize the use of To Do's, I want to spend some time describing the concept of the To Do object.
Primarily the product is used to automate business processes within a utility organization (gas, electricity, water, waste water etc). Sometimes, due to some condition or data issue, typically an exception, the product cannot automatically resolve the condition or data to satisfy the business process. In this case, a human needs to intervene to correct the condition or data and allow the process to proceed (usually on the next execution of the process). The automation of the business process will create a To Do record outlining the type of exception (expressed as a To Do Type), the priority of the issue and extra information, as well as links back to the relevant record that created the issue (for navigation). The product will allocate the To Do record to a To Do Role, which represents the group of people allocated to address the exception. One of those people will work on the exception to resolve it and then mark the To Do as complete. The product will reprocess the original object that caused the exception whenever it is next scheduled within the product.
In summary, whenever an exception is detected that requires human intervention, a To Do is created and then managed by designated individuals to resolve the exception for reprocessing. For example, say you are billing a customer and that customer has some information that is not complete for a successful bill to be generated. The product would raise a relevant To Do of a particular type and indicate a group of people to resolve the missing information so that the product can successfully bill that customer.
With all these facts in mind, here is my advice:
- To Do's are transient and should be treated as such. They are created as needed and, once completed, should be retained only for a short time and then removed. We supply a purge process, F1-TDPG, in the products to remove completed To Do's after a period of time. We also supply an ILM based solution for To Do's if you wish to retain the data for longer periods. There is an article outlining the purge process.
- Do not use the To Do object for anything other than managing exceptions. I have seen it used to record business events and other data (including as a source for bespoke analytics). There are other methods for satisfying such business requirements. For example, log entities can be used on most objects to record events, and we also have a generic FACT object that can be used for all sorts of extensions.
- Examine each To Do Type and see if someone in your organization is actually doing anything with those To Do entries. If there is no business process to deal with the exceptions, then you should reexamine whether you need to generate the To Do in the first place. Having To Do entries sit there and never be closed is not recommended, as they will just build up over time. If you do not have a business process for the exception, consider turning off that To Do generation. You should do this for base To Do Types as well as any custom To Do Types.
- For custom To Do Types, check for duplicate To Do entries. This does happen with customizations. When an exception occurs where a To Do needs to be generated, the customization should check if an existing To Do is already created before creating a new one.
- Examine anywhere within the product where a To Do is created and completed within the same transaction. This is a sign that the To Do probably should not be created in the first place. Consider turning off the To Do creation. If it is needed for some business process, look at running the purge process regularly to keep the numbers under control.
- Optimize the To Do Roles allocated to the To Do Type. The demonstration database ships with a single To Do Role per To Do Type, but this is not the only possible configuration. You can use To Do Roles to manage teams of people and then allocate them to the To Do Types they work on. You can have many To Do Types with many To Do Roles (and vice versa). You do need to nominate a default To Do Role on the To Do Type, which represents the default group of people to manage the To Do's of that To Do Type if no To Do Role is specified at creation time.
- The To Do Type has a number of algorithms that allow for greater control of the To Do:
- Calculate Priority - By default the priority on the To Do record is inherited from the To Do Type but it is possible to alter the Priority based upon additional information in the object or in your processing using this algorithm.
- External Routing - Routing To Do information to external systems or other objects.
- To Do Post Processing - Process the To Do after it is created or updated. For example, if the To Do is updated you can use this algorithm to pass additional information or state to another system or dashboard application.
These are some of the techniques that will optimize your experience with To Do. Remember, the volume of To Do's is really an indicator of your data quality, so improving the quality of your data is also a valid technique for minimizing To Do management.
There is additional advice on optimally managing To Do's in the whitepaper Overview and Guidelines for Managing Business Exceptions and Errors (Doc Id: 1628358.1) from My Oracle Support.
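As a starting point for the "is anyone actually working these?" check suggested above, a simple per-type volume query is often revealing. This is a sketch only: the table and column names used here (CI_TD_ENTRY, TD_TYPE_CD, ENTRY_STATUS_FLG and its status values) are illustrative and vary across products and versions, so verify them against your own schema before running anything.

```sql
-- Hypothetical sketch: open vs completed To Do entries per To Do Type.
-- Table/column names are illustrative; check your product's data dictionary.
SELECT td_type_cd,
       SUM(CASE WHEN entry_status_flg = 'O' THEN 1 ELSE 0 END) AS open_entries,
       SUM(CASE WHEN entry_status_flg = 'C' THEN 1 ELSE 0 END) AS completed_entries
FROM   ci_td_entry
GROUP  BY td_type_cd
ORDER  BY open_entries DESC;
```

To Do Types with large, ever-growing open counts and few or no completions are the obvious candidates for switching off generation or scheduling the purge process.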
Clients are always concerned about the performance impact of features like this. Several years ago, I met a lot of people who had—in response to some expensive advice with which I strongly disagreed—turned off redo logging with an underscore parameter. The performance they would get from doing this would set the expectation level in their mind, which would cause them to resist (strenuously!) any notion of switching this [now horribly expensive] logging back on. Of course, it makes you wish that it had never even been a parameter.
I believe that the right analysis is to think clearly about risk. Risk is a non-technical word in most people’s minds, but in finance courses they teach that risk is quantifiable as a probability distribution. For example, you can calculate the probability that a disk will go bad in your system today. For disks, it’s not too difficult, because vendors do those calculations (MTTF) for us. But the probability that you’ll wish you had set db_block_checksum=full yesterday is probably more difficult to compute.
From a psychology perspective, customers would be happier if their systems had db_block_checksum set to full or typical to begin with. Then in response to the question,
“Would you like to remove your safety net in exchange for going between 1% and 10% faster? Here’s the horror you might face if you do it...”
...I’d wager that most people would say no, thank you. They will react emotionally to the idea of their safety net being taken away.
But with the baseline of its being turned off to begin with, the question is,
“Would you like to install a safety net in exchange for slowing your system down between 1% and 10%? Here’s the horror you might face if you don’t...”
...I’d wager that most people would again answer no, thank you, even though this verdict is the opposite of the one I predicted above. They will react emotionally to the idea of their performance being taken away.
Most people have a strong propensity toward loss aversion. They tend to prefer avoiding losses over acquiring gains. If they already have a safety net, they won’t want to lose it. If they don’t have the safety net they need, they’ll feel averse to losing performance to get one. It ends up being a problem more about psychology than technology.
The only tools I know to help people make the right decision are:
- Talk to good salespeople about how they overcome the psychology issue. They have to deal with it every day.
- Give concrete evidence. Compute the probabilities. Tell the stories of how bad it is to have insufficient protection. Explain that any software feature that provides a benefit is going to cost some system capacity (just like a new report, for example), and that this safety feature is worth the cost. Make sure that when you size systems, you include the incremental capacity cost of switching to db_block_checksum=full.
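For reference, inspecting and changing the parameter is mechanically simple; the hard part is the capacity conversation above. A minimal SQL*Plus sketch (the parameter is dynamically modifiable, but remember to size for the 1%–10% overhead before changing it in production):

```sql
-- Check the current setting (OFF, TYPICAL, or FULL).
SHOW PARAMETER db_block_checksum;

-- Enable full checksums dynamically; no restart required.
ALTER SYSTEM SET db_block_checksum = FULL SCOPE = BOTH;
```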
When you read David’s article, you are going to see heavy quoting of my post here in his intro. He did that with my full support. (He wrote his article when my article here wasn’t an article yet.) If you feel like you’ve read it before, just keep reading. You really, really need to see what David has written, beginning with the question:
If I’ve never faced a corruption, and I have good backup strategy, my disks are mirrored, and I have a great database backup strategy, then why do I need to set these kinds of parameters that will impact my performance?
Enjoy.
While people normally worry more about device architecture and interaction in Mobile and IoT implementations, it might be the right time to start taking a closer look at the back-end platforms. Mobile and IoT platforms have come a long way in terms of features and customer adoption. The idea is to help you scale your projects, better integrate and analyze, and address security concerns. Please join the webcasts below to hear Oracle's story around -
For a select few attendees, we will also offer onsite workshops to get deeper into your use cases and help bring your ideas to fruition. We look forward to interacting with you on these game-changing initiatives.
The Percona Live Data Performance Conference in Santa Clara is being held April 18-22, 2016. It is quickly approaching, and Pythian is going to show you how we Love Your Data in a big way!
We have an awesome lineup of speakers this year:
- Alkin Tezuysal, Okan Buyukyilmaz, and Emanuel Calvo will be presenting the Break/Fix Lab tutorial. This is becoming a standard so if you haven’t had the opportunity to participate, don’t miss it!
- Christos Soulios will be presenting a tutorial on MongoDB design patterns with Pythian alum Nik Vyzas and Percona’s Roman Vynar.
- Derek Downey is co-presenting with HashiCorp’s CTO Armon Dadgar on using Vault to decouple secrets from your applications.
- Martin Arrieta will be hosting a Birds-of-a-Feather session on the best practices of running XtraDB Cluster with HAProxy.
- John Schulz will show you how to shard effectively whether you run MySQL, MongoDB or Cassandra.
Mark these down in your schedule, because you are not going to want to miss any of them! Although you might have a tough time choosing between the tutorials if you can’t clone yourself.
Oracle released Zero Data Loss Recovery Appliance in 2014. The Recovery Appliance was designed to ensure efficient and consistent Oracle Database Backups with a very key focus on Recovery.
I am going to write a series of blogs starting with this one to discuss the fundamental architecture of the Recovery Appliance and discuss the business case as well as deployment and operational strategies around the Recovery Appliance.
So let's start with why an appliance. Oracle has had a very interesting strategy, dating from well before the Sun acquisition. Exadata was a prime example of a Database Machine optimized for database workloads. The Engineered Systems family has since grown to include the smaller Oracle Database Appliance and, most recently, the Zero Data Loss Recovery Appliance.
Now let's start with the basics. The Recovery Appliance, as the name suggests, is an appliance built to close the data protection gaps most customers face when trying to protect their critical data, which most often resides in an Oracle Database. So why a Recovery Appliance, and why now? Over the years data storage has continued to grow, and so has the amount of data stored in databases. Where once a couple of GBs of data was a big deal, today organizations are dealing with petabytes of database storage. Database backups are getting harder and harder to manage, and modern backup appliances focus on getting more out of the storage rather than providing a way to ensure recoverability; they don't have a good enough method to ensure that backups are valid. The Recovery Appliance is designed to solve these challenges and give customers an autopilot for their backups.
The name Recovery Appliance suggests how much emphasis was put on ensuring recoverability of the database, and hence controls were put in place to ensure everything is validated not just once, but on a regular basis, with extensive reporting made available. Backups are a very important part of every enterprise, and the Recovery Appliance brings the ability to perform an incremental forever backup strategy. The incremental forever strategy, as the name suggests, provides for one full backup (Level 0) followed by subsequent incremental (Level 1) backups. This, in conjunction with Protection Policies that ensure a recovery window is maintained, provides the autopilot that ensures backups are successful with very little overhead on the machine taking the backup. This is done by offloading the de-duplication and compression activities to the Recovery Appliance.
So far I've used terminology like Protection Policies, de-duplication, compression etc. While these terms are common in the backup space, too often people have a hard time making the connection. So let's start with a brief definition of each term.
When a complete backup of the database is taken, this is called a full backup, and in a traditional environment it can be done daily or weekly, depending on the backup strategy. Traditional backup appliances rely on these fulls to provide de-duplication capabilities. Full backups require a lot of overhead, since all blocks have to be read from the I/O subsystem and processed by the database host.
Incremental backups, as the name suggests, back up only the data blocks that have changed since the previous backup. The Oracle Backup and Recovery User's Guide is the best place to understand the incremental backup strategy and how it can be employed.
De-duplication is a technique to eliminate duplicate copies of repeating data. It is typically employed with flat files or text-based data, where repeating patterns are easier to find. Incremental backups are a poor source for de-duplication, since there is not much repeating data, and the unique structure of the Oracle block makes it hard to achieve much de-duplication.
Compression is the act of shrinking data, and Oracle provides various methods of compressing data within the database and within the RMAN backup process itself.
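To give a concrete flavour of what the appliance automates, a classic RMAN incremental strategy on a plain database looks something like the sketch below. On the Recovery Appliance, only the initial Level 0 is ever sent in full; thereafter only Level 1 incrementals are shipped, and the appliance assembles "virtual full" backups from them.

```sql
-- RMAN sketch of an incremental strategy.
-- One-time full (Level 0) backup, the baseline for all later incrementals:
BACKUP INCREMENTAL LEVEL 0 DATABASE;

-- Subsequent runs back up only blocks changed since the last backup:
BACKUP INCREMENTAL LEVEL 1 DATABASE;
```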
In Part 2 of this blog post I will talk about some of the terminology, like Protection Policies and the incremental forever strategy, as well as discuss the architecture of the Recovery Appliance.
RSAConference 2016: Where the world talks security
40,000 attendees, 500+ vendors and 700 sessions
RSAC is my annual check in to learn new approaches to information security, discover new technology, learn from industry experts and build my community.
In the three years that I have been attending RSAC, I have learned that Pythian is unique and so are our clients. Each year, we continue to improve our security program with our clients in mind.
RSAC Day 1
It’s Day 1 of RSAConference 2016. Mondays are typically a quiet day, with vendors setting up in the expo halls, conference staff getting organized, attendees registering and a few press/analysts looking for optimal interview spots. It was the calm before the storm of attendees descending on San Francisco and RSAC.
This Monday was a whirlwind of activity: CSA Summit, DevOps Connect, Information Security Leadership Development and IAPP: Privacy and Security, to name only a few. Chances are you missed some sessions if you weren’t early enough.
Privacy and Security were hot topics given the European General Data Protection Regulation (GDPR) agreement reached December 2015.
Today’s digital workplace requires going beyond simple file sharing in the Cloud to delivering the next wave of productivity, efficiency, and workgroup innovation. Agencies and organizations need services that blend content, people, process and communications--enabling better and faster decisions while accelerating how work gets done. Unlike first generation content-only Cloud vendors, Oracle provides an integrated productivity suite of Cloud services that helps business communicate more effectively by automating business processes involving content.
Adapting existing systems to meet today’s needs faces many challenges, including:
- Multi-channel support
- Simplified communications, including content-rich business processes that span multiple applications
- Mobile applications for field workers who need access to content in context with applications
- Convenient file sharing and collaboration, anywhere, anytime, via any device
- Simplified process automation – business-friendly composition, configurable rules, auto-generated forms, process health and SLA monitoring
- Actionable alerts and security controls
- Integrations with SaaS and on-premise applications
- Mobile Web, interactive content, and rich websites
Please join Oracle and TekStream on March 10th to understand how you can take advantage of a transformative, Cloud-based, digital experience for your organization.
We look forward to seeing you!
Register Now: March 10, 2016, 10:00 am PST | 1:00 pm EST
We communicate every day. Communication through text is especially abundant with the proliferation of new on-demand technologies. Have you gone through your emails today? Have you read the news, weather, or blogs (like this one)? Communication is the backbone to every interpersonal interaction. Without it, we are left guessing and assuming.
BI implementations are no exception when it comes to communication’s importance, and I would argue communication is a major component of every BI environment. The goal of any BI application is to discover and expose actionable information from data, but without collaboration, discovering insights becomes difficult. By allowing users to collaborate immediately in the BI application, new insights can be discovered more quickly.
Any BI conversation should maintain its own dedicated communication channel, and the optimal place for these conversations is as close to the information-consumption phase as possible. By allowing users to collaborate in discussions over results at the same location as the data, users will be empowered to extract as much information as possible.
Unfortunately, commentary support is absent from OBIEE.

The Current OBIEE Communication Model
The lack of commentary support does not stop the community from developing their own methods or approaches to communicating within their BI environments. Right now, common approaches include purchasing pre-developed software, engineering custom solutions, or forcing the conversations into other channels.
Purchasing a commentary application or developing your own internal solutions expedites the user communication process. However, what about those who do not find a solution, and instead decide to use a “work-around” approach?
Choosing to ignore the missing functionality is the cheapest approach, initially, but may actually cost more in the long run. To engage in simple conversations, users are required to leave the BI dashboard, which adds time and difficulty to their daily processes. And reiterating the context of a conversation is both time consuming and error prone.
Additionally, which communication channel will the BI conversations invade? A dedicated communication channel, built specifically to easily display and relay the BI topics of interest, is the most efficient, and beneficial, solution.

How ChitChat Can Help
ChitChat provides a channel of communication directly within the BI environment, allowing users to engage in conversations as close to the data consumption phase as possible. Users will never be required to leave the BI application to engage in a conversation about the data, and they won’t need to reiterate the environment through screenshots or descriptions.
Recognizing the importance of separate channels of communication, ChitChat also allows each channel to maintain its respective scope. For instance, a user may discover an error on a BI dashboard. Rather than simply flagging the error in the BI environment, the user can export the comment to Atlassian JIRA and create a ticket for the issue to be resolved, maintaining the appropriate scopes of both JIRA and ChitChat. Integrations allow existing channels of communication to retain their respective importance and appropriately restrict the scope of conversations.
ChitChat is placed in the most opportune location for BI commentary, while maintaining the correct scope of the conversation. Other approaches often ignore one of these two aspects of BI commentary, but both are required to efficiently support a community within a BI environment. The most effective solution is not one that simply solves the problem, or meets some of the criteria, but one that meets all of the requirements.

Commentary Made Simple
Conversation around a BI environment will always occur, regardless of the supporting infrastructure or difficulty in doing so. Rather than forcing users to spend time working around common obstacles or developing their own solutions, investing in an embedded application will save both time and money. These offerings will not only meet the basic requirements, but also ensure the best experience for users, and the most return on investment.
Providing users the exact features they need, where they need it, is one step in nurturing a healthy BI environment, and ChitChat is an excellent solution to meet these criteria.
To find out more about ChitChat, or to request a demo, click here!