However, keep in mind that a base like this will certainly have weaknesses; of course, it is easy to break into. But if you are feeling bored you can use one as a temporary base, or deploy it while preparing for war. Once the war has actually started, switch it out straight away. Without further ado, here is the latest collection of funny, unique and odd bases for TH 7, 8, 9 and 10.
Unique COC Base, TH 9
Feel free to download them. Hopefully this article helps anyone who is looking for funny, unique and odd base layouts.
- Engineered Systems Partner of the Year Award
- Business Analytics Partner of the Year Award
- Training Partner of the Year Award
- Emerging Partner of the Year Award
The awards are decided by "end users of Oracle-related products or services", i.e. you, so we would like to ask you to vote for us by going to this link.
I would like to propose four reasons why I think we deserve these awards.
Research, development and sharing
The culture at Rittman Mead has always been to delve into the tools we use, push their boundaries and share what we learn with the community. Internally, we have formalised this by having our own in house R&D department. People like Robin Moffatt, Jordan Meyer and Mark Rittman spend a lot of time and effort looking at the core Oracle data and analytics toolset to determine the optimal way to use it and see which other leading edge tools can be integrated into it.
We have also worked with Oracle to produce a new version of their reference architecture that was the first one to incorporate the new wave of big data technologies and approaches such as Hadoop and a data reservoir.
Delivery
One of the main drivers for our R&D department is to make us more effective at delivering data and analytics projects.
We are continually investigating common and new approaches and design patterns found in the world of ETL, data warehousing, business intelligence, analytics, data science, big data and agile project delivery, and combining them with our experience to define optimal ways to deliver projects.
Again, we share a lot of these approaches through talks at Oracle and community conferences, blog posts and routines shared on our GitHub repository.
Learning and education
Learning is key to everything we do in life, and as such, we see the provision of independent courses for Oracle business intelligence and data integration tools as key for the industry. We have developed our own training materials based on the different roles people play on projects, for example we have a Business Enablement Bootcamp aimed at end users and OBIEE Bootcamp aimed at developers. We know from our feedback forms how effective this training is.
To supplement the training materials we also wrote the official OBIEE Oracle Press book based around the same examples and data sets.
Optimisation
Our key role as an Oracle partner and member of the Oracle community is to optimise the value any organisation gets from investing in Oracle data and analytics related software and hardware.
This is something that requires a long-term commitment, a high level of investment and a deep level of knowledge and experience, which is hopefully demonstrated above. To this end, we are often prepared to go beyond the level of information that Oracle can offer and, in certain cases, challenge their own understanding of the tools.
We were the first UK partner to buy an Exalytics server, for example, and have written a whole host of articles around the subject. Similarly we are the proud owner of a BICS pod and we are now evaluating how organisations can effectively use cloud in their strategic business intelligence architectures and then, if they do, the best approach to integrating it.
Finally, we are also investing heavily in user engagement, providing the capability to measure then optimise an organisation's data and analytics systems. We believe user engagement is directly and measurably linked to the return organisations get from their investment in Oracle data and analytics software and hardware.
In Summary
So, in summary, I hope that the reasons I outline above explain why we deserve some or all of the above awards, as they act as a great way to recognise the effort put in by all our staff over the course of the year. The voting link is here.
All seasoned Oracle Apps DBAs know that Autoconfig is the master utility that configures the whole E-Business Suite instance. In E-Business Suite releases 11i, 12.0 and 12.1, running Autoconfig recreates all the relevant configuration files used by the Apache server. If the context file has the correct settings, then the configuration files should contain the correct settings after running Autoconfig. This is no longer the case in Oracle E-Business Suite 12.2. Some of the Apache config files are now under Fusion Middleware control, namely httpd.conf, admin.conf and ssl.conf. All other Apache config files are still under Autoconfig control, but these three critical config files hold the main configuration pieces such as the web port, SSL port, etc.
So if you have to change the port used by an EBS instance, you have to log into the WebLogic admin console, change the port there, and then sync the context XML file using adSyncContext.pl. The adSyncContext.pl utility gets the current port values from the WebLogic console and updates the context XML file with the new values. Once the context XML file is in sync, we have to run Autoconfig to sync the other config files and the database profile values so they pick up the new web port.
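As an illustrative check (my own suggestion, not a step from Oracle's documented procedure), the web entry point that Autoconfig pushes into the database profile values can be confirmed with a query along these lines; APPS_FRAMEWORK_AGENT and APPS_WEB_AGENT are the standard profile option names, but verify them for your release:

-- Hedged sketch: show the site-level web entry point recorded in the EBS profile tables
select fpo.profile_option_name,
       fpov.profile_option_value
from   fnd_profile_options        fpo,
       fnd_profile_option_values  fpov
where  fpov.application_id     = fpo.application_id
and    fpov.profile_option_id  = fpo.profile_option_id
and    fpov.level_id           = 10001   -- site level
and    fpo.profile_option_name in ('APPS_FRAMEWORK_AGENT', 'APPS_WEB_AGENT');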
Similarly, if you want to change the JVM arguments or the class path, you have to run another utility called adProvisionEBS.pl to make those changes from the command line, or log into the WebLogic admin console to make them there. Interestingly, a few of the changes made in the WebLogic admin console or Fusion Middleware Control are automatically synchronized with the context XML file by the adRegisterWLSListeners.pl script that runs in the background all the time. Apache config file changes, however, are not picked up by this script, so they have to be synchronized manually.
There are a few My Oracle Support notes that can help you understand these utilities a little more, such as 1676430.1 and 1905593.1. But be aware that Autoconfig is a different ball game in Oracle E-Business Suite R12.2.
Discover more about Pythian’s expertise in the world of Oracle.
Yes, Oracle Documents Cloud Service is an enterprise file sync and share (EFSS) solution. Yes, it is in the cloud, as you would expect from a modern-day EFSS solution. And yes, it is easy and intuitive to use, with out-of-the-box mobile accessibility and the added security that you would expect from an Oracle solution. But why do we keep insisting it is more than just an EFSS or file sharing solution? Because beyond providing cloud storage, 24/7 access, share and sync capabilities and mobile access, Documents Cloud mobilizes your enterprise content. Unlike first-generation EFSS solutions, it doesn't create new information silos; instead it lets you take the content connected to your applications, your business processes or your source of record (such as an on-premises enterprise content management system) and make it available anywhere, for sharing and for collaboration.
Here's a quick look at how it may make your work life that much easier, more productive, more efficient and perhaps even fun.
For more information, visit us at oracle.com/digitalcollaboration.
Amazon has just released its RDS Migration Tool, and Pythian has recently undertaken training to use it for our clients. I wanted to share my initial thoughts on the tool, give some background on its internals, and provide a walkthrough of the functionality it will most commonly be used for.
There are many factors to consider when evaluating cloud service providers, including cost, performance, and high availability and disaster recovery options. One of the most critical and overlooked elements of any cloud offering, though, is the ease of migration. Often, weeks are spent evaluating all of the options only to discover, after the choice is made, that it will take hours of expensive downtime to complete the migration, and that there is no good rollback option in the case of failure.
In order to reduce the friction inherent in the move to a DBaaS offering, Amazon has developed an RDS Migration tool. This is an in-depth look at this new tool, which will be available after September 1, 2015. Contact Pythian to start a database migration.
With the introduction of the RDS Migration tool, Amazon has provided a powerful engine capable of handling much more than basic migration tasks. It works natively with Oracle, SQL Server, Sybase, MySQL, PostgreSQL, Redshift (target only), Aurora (target only), and provides an ODBC connector for all other source systems. The engine is powerful enough to handle fairly complex transformations and replication topologies; however, it is a migration tool and isn't intended for long-term use.
Architecture
Amazon’s RDS Migration Tool architecture is very simple. It consists of your source system, an AWS VM with the Migration Tool installed on it, and the target RDS instance.
Each migration is broken up into Tasks. Within a Task, a source and target database are defined, along with the ability to transform the data, filter the tables or data being moved, and perform complex transformations.
Tasks can be scheduled to run at particular times, can be paused and resumed, and can alert on success or failure. It’s important to note that if a task is paused while a table is loading, that table will be reloaded completely from the beginning when the task resumes.
Within a running task, the following high-level steps are performed:
• Data is pulled from the source using a single thread per table
• Data is converted into a generic data type
• All transformations are applied
• Data is re-converted into the target system’s datatype and inserted
• After the initial load, if specified, the tool monitors for updates to data and applies them in near real-time
While processing the data, each table has a single thread reading from it, and any updates are captured using the source system's native change data capture utility. Changes are not applied until after the initial load is completed. This is done to avoid overloading the source system, where it's assumed client applications will still be running.
Performance Considerations
There are several factors which might limit the performance seen when migrating a database.
Network
The network is probably the biggest contributor to performance issues across data centers, and there is no magic button when moving to RDS. If the database is simply too big or too busy for the network to handle the data being sent across, then other options may need to be explored or used in conjunction with this tool.
Some workarounds to consider when network performance is slow include:
• Setup AWS Direct Connect
• Use a bulk-load utility, and then use the tool to catch up on transactions
• Only migrate data from a particular point in time
RDS Migration Tool Server CPU
The migration tool converts all data into a common data type before performing any transformations, then converts them into the target database’s data type. This is obviously very heavy on the server’s CPU, and this is where the main performance bottlenecks on the server are seen.
Capacity of Source database
This tool uses a single SELECT statement to migrate the data, and then returns for any changed data after the initial bulk load is completed. On a busy system, this can be a lot of undo and redo data to migrate, and the source system needs to be watched closely to ensure the log files don’t grow out of control.
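If the source is an Oracle database, one rough way to gauge that change volume while the load runs is to track how much redo is being archived per hour. A minimal sketch (v$archived_log is a standard dynamic performance view; the one-day window and hourly bucket are arbitrary choices of mine):

-- Hourly archived-redo volume on the source over the last day
select trunc(first_time, 'HH24')                      as hour,
       round(sum(blocks * block_size) / 1024 / 1024)  as redo_mb
from   v$archived_log
where  first_time > sysdate - 1
group  by trunc(first_time, 'HH24')
order  by 1;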
Capacity of Target database
In the best case scenario, this will be the limiter as it means all other systems are moving very fast. Amazon does recommend disabling backups for the RDS system while the migration is running to minimize logging.
The following walkthrough looks at these capabilities of the tool in version 1.2:
• Bulk Data Migration to and from the client’s environment and Amazon RDS
• Near Real-Time Updates to data after the initial load is completed
• The ability to transform data or add auditing information on the fly
• Filtering capabilities at the table or schema level
You will need to have set up network access to your databases for the RDS Migration Tool.
1. After confirming access with your account manager, access the tool by opening the AWS console, selecting EC2, and choosing AMIs.
2. Select the correct AMI and build your new VM. Amazon recommends an M4.large or M4.xlarge.
3. After building the new VM, you will need to install the connectors for your database engine. In this example, we'll be using the Oracle Instant Client and MySQL ODBC Connector 5.2.7.
- For the SQL Server client tools, you will need to stop the Migration services before installing.
4. Access the Migration Tool
- Within VM: http://localhost/AmazonRDSMigrationConsole/
- Public URL: https://[VM-DNS]/AmazonRDSMigrationConsole/
- Username/Password is the Administrator login to the VM
7. Back on the Tasks menu, click the Manage Databases button to add the source and target databases. As mentioned earlier, this walkthrough covers an Oracle to Aurora migration. Aurora targets are treated as MySQL databases for the purposes of this tool.
8. After defining your connections, close the Manage Databases pop-up and select New Task. Here, you can define if the task will perform a bulk-load of your data and/or if it will attempt to apply changes made.
9. After closing the New Task window, simply drag & drop the source and target connectors into the task.
10. By selecting Task Settings, you can now define task level settings such as number of threads, truncate or append data, and define how a restart is handled when the task is paused. You can also override the global error handling and logging settings here.
- The best practice recommendation is to find the largest LOB value in your source database and set that as the max LOB size in the task. Setting this value allows the task to optimize LOB handling, and will give the best performance.
11. Select the Table Selection button to choose which tables will be migrated. The tool uses wildcard searches to allow any combination of tables to exclude or include. For example, you can:
- Include all tables in the database
- Include all tables in a schema or set of schemas
- Exclude individual tables and bring over all remaining tables
- Include individual tables and exclude all remaining tables
The tool has an Expand List button which will display all tables that will be migrated.
12. After defining which tables will be migrated, select an individual table and choose the Table Settings button. Here you can add transformations for the individual tables, add new columns or remove existing ones, and filter the data that is brought over.
13. Select the Global Transformations button. As on the table selection screen, you use wildcards to define which tables these transformations will be applied to. Here you can:
- Rename the schema
- Rename the table
- Rename columns
- Add new columns
- Drop existing columns
- Change the column data types
14. Finally, save the task and choose Run. This will kick off the migration process and bring up the Monitoring window. From here, you can see the current task’s status, notifications, and errors, as well as get an idea of the remaining time.
A few nights ago a Domino’s Pizza commercial got my attention. It is called “Sarah Loves Emoji.”
At the end, the fictional character Sarah finishes by simply saying “only Domino’s gets me.”
The idea of texting an emoji, tweeting, or using a Smart TV or smartwatch to automagically order pizza fascinates me. What Domino's is attempting to do here is reduce user friction, which is defined as anything that prevents a user from accomplishing a goal. After researching Domino's Anywhere user accounts, I found a negative post from a frustrated user, of course! This proves that even if a system is designed to reduce friction, the human element in the process is bound to fail at some point. Regardless, I think it is pretty cool that consumer-oriented companies are thinking "outside the box."
As a long-time fan of building Instant Messaging (xmpp/jabber) and SMS (Twilio) bots, I understand how these technologies can actually increase productivity and reduce user friction. Even single-button devices (think Amazon Dash, or my Staples Easy Button hack) can serve some useful purpose.
I believe we will start to see more use cases where input is no longer tied to a single web UI or mobile app. Instead we will see how more ubiquitous input channels like text, Twitter, etc. can be used to start or complete a process. After all, it seems like email and text are here to stay for a while, but that's the content of a different post.
I think we should all strive to have our customers ultimately say that we "get them."
Possibly Related Posts:
- Another Take on Twilio Signal 2015
- Supertrackr Tracks Keywords in Real Time
- Taleo Interview Evaluations, Part 2
- Twilio Signal Conference 2015
- Reducing Friction with Amazon Prime Now
By Phil Hill | More Posts (365)
What interesting timing. Just as I published my interview with Martin Dougiamas, I was notified that Totara LMS, a Moodle derivative aimed at the corporate learning market, has forked from Moodle and is changing its relationship with the Moodle Community. From their newsletter released today (Sept 3 Australia time):
The relationship between Totara and Moodle is changing
We have made the carefully considered decision that from 2016 Totara LMS will no longer be in lockstep with Moodle. This will free the team at Totara Learning to focus on big leaps forward in usability and modernising the framework for our enterprise customers.
Further down, Richard Wyles wrote an additional post explaining the fork, starting with his long-term relationship with Moodle. He then explains:
Why are we forking?
From 2016 onwards we will no longer be in lockstep. Totara LMS will progressively diverge from its Moodle foundations.
Why have we made this decision? There are several factors:
- Innovation. A benefit of open source software is the ability to extend the code base of an application and develop it in a new direction. Over the past few years we have added more than 450,000 lines of code comprising a series of modular, interwoven extensions layered on top of a standard Moodle. All the additional features reflect the different needs of our user community and Totara LMS is now almost unrecognisable from a standard Moodle installation. We’ve taken a lot of care to achieve these results with minimal alterations to Moodle’s core codebase. That policy has been beneficial to both projects. However it also comes with constraints, particularly with some feature requests such as multi-tenancy. To do this well requires deep architectural changes. Overall, to continue, and accelerate our rate of innovation we need to start diverging the base platforms.
- Modernising the platform. It is our view, and we know it is a shared view with many Totara Partners, that the current product needs a significant investment in the overall UX. Due to the following point regarding collaboration we are unable to make this investment without diverging from Moodle. We are committed to doing the best by our Totara Partners, key stakeholders in our open source ecosystem, and our growing (collective) customer base. Our 2016 release (which will be tagged as Totara LMS version 9.0) will have a major focus on improving the UX design and overall quality assurance.
Richard goes on with other reasons and concludes:
The decision to forge a new direction is simply based on the need to deliver the best product we’re able – fit for purpose for modern workplace learning, guided by the needs of our partners and customers.
The Totara LMS home page links to a YouTube video introduction, and I note the lack of any reference to the "Moodle" name.
Wow. This is a significant move for several reasons, including the following:
- The long-term relationship of Richard and others in Totara to the Moodle Community, which will now diverge;
- The importance of corporate learning for many, if not most, Moodle Partners;
- One of the reasons not quoted above in Richard’s post is that “The leadership of Moodle Pty Ltd has made it clear to us that it is their intent to clone recent Totara LMS versions to offer the market ‘Moodle for Workplace.’” (read Richard’s post in full); and
- Totara has contributed a large amount of code to Moodle, including "with Moodle HQ incorporating Totara developed features; Learning Plans and Competencies".
I will now extend my core argument from last week’s post on Blackboard’s Moodle strategy in Latin America.
The Moodle community at large appears to be at an inflection point. This inflection point I see comes from a variety of triggers:
- Blackboard acquisitions causing Moodle HQ, other Moodle Partners, and some subset of users’ concerns about commercialization;
- Creation of the Moodle Association as well as Moodle Cloud services as alternate paths to Moodle Partners for revenue and setup;
- Remote-Learner leaving the Moodle Partner program and planning to join the Moodle Association, with its associated lost revenue and public questioning of the program's value; and
- Totara LMS forking and diverging from Moodle core.
Analysis post coming soon.
The post Breaking: Totara LMS Forks From Moodle And Changes Relationship appeared first on e-Literate.
By Phil Hill | More Posts (364)
In my post last week on Blackboard’s Moodle strategy in Latin America, I made the following observation:
At the same time, this strategy and growth comes at a time where the Moodle community at large appears to be at an inflection point. This inflection point I see comes from a variety of triggers:
- Blackboard acquisitions causing Moodle HQ, other Moodle Partners, and some subset of users’ concerns about commercialization;
- Creation of the Moodle Association as well as Moodle Cloud services as alternate paths to Moodle Partners for revenue and setup; and
- Remote-Learner leaving the Moodle Partner program and planning to join the Moodle Association, with its associated lost revenue and public questioning of the program's value.
I’m working on a follow-up post that looks more deeply at these changes to the Moodle community, and as part of the research I’ve interviewed Martin Dougiamas, Moodle Founder and CEO, by email. Given Martin’s role, I wanted to avoid the risk of having his answers get buried within my upcoming analysis post; therefore, I’ve decided to publish the interview in full. The only changes I have made are for clarity: showing and correcting full names instead of acronyms, correcting grammar, and reordering questions to show follow-up discussions in context.
Phil: Given Blackboard’s trend in acquisitions for Moodle (Remote-Learner UK, X-Ray Analytics, Nivel Siete), and assuming these are not the last, how do these moves affect the Moodle community and future (including roadmap, Moodle HQ funding, whatever)? What are the biggest benefits and / or what are the risks and downsides?
Martin: In any community there’s always going to be some concern about any one organisation trying to gain dominance. Our certified Moodle Partner program was designed specifically to avoid these kind of risks by building a large global network of different companies (currently 68 and growing, including Moonami and Elearning Experts recently in the US) who are committed to supporting Moodle HQ. The recent Blackboard acquisitions don’t bring any benefits to Moodle as a whole.
Phil: When you say “the recent Blackboard acquisitions don’t bring any benefits to Moodle as a whole”, I note that in Latin America the only other Moodle Partners are in Argentina (1) and Brazil (3). Would Blackboard / Nivel Siete expansion to service most of Latin America end up generating more official Moodle Partner revenue, thus helping fund more core development through HQ?
Martin: We have South American Moodle Partners in Argentina, Bolivia, Chile, Peru and several in Brazil, as well as Partners who work in South America from other locations. Our Partner program is all about supporting local businesses who are Moodle experts, and they support us by paying royalties.
There is always some talk around acquisitions which it’s good to be mindful of. From a Moodle point of view there’s no new “expansion” – it was already happening.
Nivel Siete, like Moodlerooms, was a tiny company of several people who grew to 20 or so people with our support over many years. Meanwhile, Blackboard has had offices and resellers selling Blackboard Learn in South America for many years. As you know, acquisitions usually happen to remove a competitor or to gain some capabilities that the buying company was not able to develop on their own.
Phil: Do you agree with my characterization that “Moodle community at large appears to be at an inflection point” this year, driven by the three examples listed?
Martin: Sorry, I don’t really agree with your characterization. Unlike nearly all other LMS companies, Moodle is not profit-focussed (all our revenue goes into salaries). We are an organisation that is completely focussed on supplying a true open source alternative for the world without resorting to venture capital and the profit-driven thinking that comes with that.
Of course we still want to grow our core development team significantly in order to help Moodle evolve faster. So some of the big new things you’re seeing from us this year have been in the pipeline for a while and are about driving that: the Moodle Association is a formalisation of crowd-funding for additional new core developments; and MoodleCloud is very much about supporting and strengthening the Moodle Partner brand (while helping those who want these new services).
Regarding our ex-Partner Remote-Learner, it’s a shame we’ve lost them as friends but they are driven by their own internal issues. Saying they have switched to the Association is a little like saying you switched to Kickstarter, it doesn’t mean much. In any case they cannot actually even join the Moodle Association as commercial LMS service providers are not eligible.
Phil: My note on “inflection point” is not based on a profit-driven assumption. The idea is that significant changes are underway that could change the future direction of Moodle. A lot depends on Blackboard’s acquisition strategy (assuming it goes beyond Remote-Learner UK and Nivel Siete), whether other Moodle Partners follow Remote-Learner’s decision, and whether Moodle Association shows signs of producing similar or larger revenues than the Moodle Partner program. What I don’t see happening is extension of the status quo.
Martin: Moodle’s mission is not changing at all, we are just expanding and improving how we do things in response to a shifting edtech world. We are starting the Moodle Association to fill a gap that our users have often expressed to us – they wanted a way to have some more direct input over major changes in core Moodle. There is no overlap between this and the Moodle Partners – in fact we are also doing a great deal to improve and grow the Moodle Partner program and as well as the user experience for those who need Moodle services from them.
Phil: You have previously described the Moodle model as a ‘benevolent dictatorship’. Do you see that core model changing in the near future based on the three items I mentioned under inflection point (Moodle Association, Blackboard acquisitions, Remote-Learner leaving Moodle Partner program) or do you see roughly the same model but just with additional crowd-funding through Moodle Association? I think you’re answering the latter, but I want to make sure.
Martin: Yes, the latter.
I don’t use the ‘benevolent dictatorship’ term myself although it’s common in the open source world. Yes, I wrote everything in the first versions of Moodle, and my company continues to lead the project via Moodle Pty Ltd [aka Moodle HQ].
However, rather than any kind of dictatorship we see our mission as being *servants* to the community of teachers and learners who need Moodle and quality open source Free software. Our core duty is to give away the software we develop. Our values are to support educators with respect, integrity, openness and innovation. See https://moodle.com/hq/ This is never going to change.
This is in contrast to multi-billion companies whose value is in increasing their EBITDA [earnings before interest, taxes, depreciation and amortization] before a sale, and whose mission is to expand by acquiring markets in other countries.
Phil: Could you comment on the deep penetration of Moodle worldwide into corporate learning (maybe equal to higher ed / K-12)?
Martin: Yes, Moodle is used a lot in corporate learning worldwide. In fact something like 40% of the many thousands of clients using Moodle Partners as service providers are using Moodle for company training, including some really huge ones. We have a few case studies on our website at moodle.com/stories if you’re interested.
- Changing references to “Remote Learner” to follow the proper “Remote-Learner” usage
- For example, replacing “BB” with “Blackboard”, “NS” with “Nivel Siete”, etc
The post Interview With Martin Dougiamas On Changes To Moodle Community This Year appeared first on e-Literate.
A Guest Post by Jeri Kelley, Senior Principal Product Manager, Oracle
There are a lot of great reasons for Oracle Commerce customers to attend OpenWorld at the end of October, including in-depth product updates, many customer success stories, hands-on labs, and networking events. Attendees will walk away with a better understanding of how Oracle’s commerce solutions can help them stay competitive in today’s rapidly changing commerce market.
What’s New and Different?
- Meet Oracle Commerce Cloud―it's the newest addition to Oracle’s CX Applications portfolio. See demos, learn about the roadmap, and hear directly from our first customers leveraging this new product
- Check out the Hands-on Labs: See how you can quickly stand up an online storefront with Oracle Commerce Cloud
- Catch the Interactive Customer Showcases in the CX Commerce Demo Zone, featuring Oracle Commerce and Commerce Cloud customers
All sessions and the demo zone for customer experience will be located on the 2nd floor of Moscone West in San Francisco.
Commerce attendees can explore best practices and share knowledge with more than 20 commerce-focused sessions:
- Learn about roadmap and release updates
- Get an in-depth look at Oracle Commerce Cloud
- Attend thought-leadership sessions featuring Oracle strategy experts and industry analysts
- Sit in on customer panels featuring both Oracle Commerce and Commerce Cloud customers
- Experience Manager and Business Control Center best practice sessions
- Listen to customer and partner case studies
- Take part in more than just commerce-focused sessions and explore all that CX Central @ OpenWorld has to offer
Sessions of Special Interest
- The Future of Oracle Commerce: Roadmap and Release Update (CON6303), Tuesday Oct. 27, 5:15-6:00 p.m., Moscone West, Room 2005
- Meet Oracle Commerce Cloud―A New SaaS Solution for Commerce (CON8647), Wednesday, Oct. 28, 12:15-1:00 p.m., Moscone West Room 2005
- Accelerating Success with Oracle Commerce―Panel discussion with KLX Aerospace, Tilly’s, and other Oracle Commerce Customers (CON8641), Tuesday, Oct. 27, 4:00-4:45 p.m., Moscone West, Room 2005
- Building Commerce Experiences In The Cloud―Panel discussion with Rock/Creek, Hollander, and Elaine Turner (CON8842), Wednesday, Oct. 28, 3-3:45 p.m., Moscone West Room 2005
Guest Customer and Partner Appearances Include:
Vitamix, American Greetings, Maritz Reward Solutions, KLX Aerospace, Tilly's, Ulta, Rock/Creek, Hollander, Elaine Turner, JC Penney, Furniture Row, TOMS, Bodybuilding.com, Lojas Renner, Verizon, Razorfish, Compasso, SapientNitro, Cirrus10, and more!
Commerce Demo Zone
Take a break in the CX-Commerce Demo Zone. You’ll see the latest Oracle Commerce product demonstrations led by members of the Oracle Commerce product management and sales consulting teams. Take note of the latest features and learn from our customers at these demonstrations:
- Oracle Commerce On-Premise: See the latest features for both B2C and B2B commerce
- Oracle Commerce Cloud: Learn all about our newest offering
- Interactive Customer Showcase: Stop by and visit Oracle Commerce and Commerce Cloud customers as they showcase their latest product offerings. You also can see how they are using Oracle Commerce or Commerce Cloud to power their online shopping experiences.
- Note: These customers will be offering special OpenWorld-only discounts on their products, so make sure to stop by! Featured customers include Vitamix, Rock/Creek, Elaine Turner, and Hollander.
Finally, a preview of Oracle Commerce at OpenWorld would not be complete without a mention of customer appreciation events:
- Monday, October 26: Commerce Customer Dinner @ The Waterbar Restaurant; by invitation only and your chance to network with Oracle Commerce product management and your commerce peers.
- Tuesday, October 27: CX customer appreciation event; planning is in progress!
- Wednesday, October 28: Oracle Appreciation Event at Treasure Island!
At a Glance
Visit Commerce—CX Central @ OpenWorld for full details on speakers, conference sessions, exhibits and entertainment!
We look forward to seeing everyone in San Francisco, October 25–October 29, 2015!
In that article I suggested creating a set of properties at project level with the actual working values, to be referred to in the endpoint URLs (hostname:port + URI), usernames/passwords and so on, and, per environment, a distinct set of the same properties holding the values for that target environment. Then I created a test case per target environment that copies those target-environment values into the working properties.
This works fine for me. However, in my quest for the most comfortable way of registering and toggling between those values, I found a few enhancements quite convenient.
First of all, after the property-transfer step, I created a manual step, listing the values of the working properties, with a remark that it should contain the values of the particular target environment:
The Localhost version of this test case results in:
Don't forget to click 'Ok' after this step. Unfortunately I did not find a way to enlarge the 'Expected Result' text area.
The second enhancement was that I moved my target-environment property values to the "setEnvironment" test case. I found that with an increasing number of properties it is quite hard to keep a clear overview of them, and I need to think about a proper naming convention. But having moved them to the "setEnvironment" test case, I now have a distinct set of properties per environment and, at project level, a distinct set of working properties.
Since the test case is environment-specific, I don't need a target-environment reference in the property names. Also, they are scoped to the test case and thus not referable from other test suites/test cases, which prevents errors.
Moving those properties in SoapUI is pretty labor-intensive. But moving those in the source is quick (and dirty). If you open the SoapUI project xml file, you'll find the project-properties at the bottom of the file:
Copy and paste the properties to move (since I already did so, you won't find my target-environment properties here anymore) into a separate file to get them together.
Then find the target-testcase in the file (in this example "SetLocalEnvironment"):
You'll find at the end of the testcase an empty properties element (<con:properties/>). Open it up (<con:properties> </con:properties> ) and copy and paste the properties within the element.
SoapUI doesn't format the XML, so you might want to add line feeds as I did in this example.
Finally you need to change the property-transfer-lines. In the example above, you'll see that I found the transfer-step and added line-feeds for each "<con:transfers ..."
<con:transfers setNullOnMissingSource="true"... some other properties... ><con:name>CSServiceHost</con:name><con:sourceType>CSServiceHost-Dev</con:sourceType><con:sourceStep>#Project#</con:sourceStep><con:targetType>CSServiceHost</con:targetType><con:targetStep>#Project#</con:targetStep><con:upgraded>true</con:upgraded></con:transfers>
You'll find in this line the source step, "<con:sourceStep>#Project#</con:sourceStep>", and the target step, "<con:targetStep>#Project#</con:targetStep>". Change the value of the source step to "<con:sourceStep>#TestCase#</con:sourceStep>". Now the property is referred to from the test case instead of the project, and the property at project level can be deleted.
Renaming the properties is better done from the UI, since SoapUI will then change the property name in every reference as well, even in SOAP requests. So after changing the file in your text editor, reload the project in SoapUI, make the final changes there, and run the tests.
Now, although this is a nice example of source-hacking from outside the UI, you can't expect any support on it, of course. So make sure the project is under proper version control and test it thoroughly.
Last night was Oracle Midlands event #11 with Chris Antognini.
The lead up to this event was not the best for me. I had been on the verge of a headache all day. By 14:00 I gave up, went home and went to sleep for a couple of hours. It wasn’t great, but it was just enough to take the edge off, so when the time came, I felt sort-of OK to head out for the event. The drive started to convince me this wasn’t the best move, but once I got to the event and sat down I figured I was going to make it.
Chris did two talks at the event.
- Designing for Performance: Database Related Worst Practices
- Identification of Performance Problems Without the Diagnostics Pack
The first talk had lots of people’s heads nodding. It’s kind-of depressing, but we’ve all seen, and continue to see, these same things happening again and again. I, like others in the audience, am convinced it is because of the lack of emphasis on database technologies in development. Too many frameworks encourage a hands-off approach to the database, hiding it behind persistence layers that end up doing a mediocre job, at best. Anyway, enough of my rambling. This session should be mandatory viewing once a month for every developer that goes near a database!
The second session was pretty neat too. I must admit I’ve become addicted to the Enterprise Manager 12c performance pages, so a couple of the things Chris mentioned took me by surprise, including the use of some V$ views that I assumed were part of the Diagnostics and Tuning Pack, but aren’t. I’m purposely going to avoid mentioning them here because I would want to confirm the status before accidentally leading someone astray, but the idea was, query the V$ view and you are good. Query the DBA_HIST_* view and you’ve sourced the information from the AWR, so you need the D&T pack. This definitely *does not* apply to all V$ views, but it’s worth checking out if you don’t have D&T, or you are working with standard edition.
I think the evening went really well. Thanks to Chris for coming to speak to us and thanks to the Oracle ACE Program for getting him across. Thanks to Red Stack Tech for sponsoring the event, allowing this to remain free. Thanks to Mike for doing a great job of keeping these events rolling. Of course, thanks to everyone for turning up after the Bank Holiday weekend.
Tim…
Oracle Midlands : Event #11 – Summary was first posted on September 2, 2015 at 10:14 am.
exec dbms_scheduler.create_job(job_name=>null, job_type=>'PLSQL_BLOCK', job_action=>'BEGIN NULL; END;')

ORA-27451: JOB_NAME cannot be NULL
ORA-06512: at "SYS.DBMS_ISCHED", line 146
ORA-06512: at "SYS.DBMS_SCHEDULER", line 288
ORA-06512: at line 1
This sounds like a proper error message. A bit less obvious is the drop_job message
SQL> exec dbms_scheduler.drop_job(job_name=>null)

ORA-20001: comma-separated list invalid near
ORA-06512: at "SYS.DBMS_UTILITY", line 236
ORA-06512: at "SYS.DBMS_UTILITY", line 272
ORA-06512: at "SYS.DBMS_SCHEDULER", line 743
ORA-06512: at line 1
comma-separated list invalid near what?
Ok, why would you create an empty job? Obviously you wouldn’t. But remember job_name could be a very long expression that won’t fit in your VARCHAR2(30) variable.
SQL> begin
  dbms_scheduler.create_job(
    job_name   => ' "SCOTT" '|| ' . '|| ' "JOB10000000000000000000001" ',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN NULL; END;');
end;
/

PL/SQL procedure successfully completed.

SQL> exec dbms_scheduler.drop_job('scott.job10000000000000000000001')

PL/SQL procedure successfully completed.
If you call drop_job in an exception handler without catching the exceptions that drop_job itself can raise, you can end up with this ORA-20001 when the job name is null.
For exception handling, we could improve on the naive pattern:

BEGIN
  CREATE JOB
  RUN JOB
  DROP JOB
EXCEPTION
  WHEN OTHERS THEN
    DROP JOB
    output message
    RAISE
END

by also protecting the clean-up drop:

BEGIN
  CREATE JOB
  RUN JOB
  DROP JOB
EXCEPTION
  WHEN OTHERS THEN
    LOOP
      BEGIN
        DROP JOB
      EXCEPTION
        WHEN IS_RUNNING THEN sleep
        WHEN OTHERS THEN output message
      END
    END LOOP
    output message
    RAISE
END
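As a concrete sketch of that second pattern in PL/SQL (the job name, the retry limit, and the use of ORA-27478 as the "job is running" error are my assumptions, not details from the original post):

declare
  l_job_name      varchar2(128) := 'SCOTT.DEMO_JOB';     -- hypothetical job name
  job_is_running  exception;
  pragma exception_init(job_is_running, -27478);         -- assumed "job is running" error
begin
  dbms_scheduler.create_job(
    job_name   => l_job_name,
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN NULL; END;',
    enabled    => true);
  -- ... run or wait for the job here ...
  dbms_scheduler.drop_job(l_job_name);
exception
  when others then
    -- clean up, but also catch the exceptions drop_job itself can raise
    for i in 1 .. 10 loop
      begin
        dbms_scheduler.drop_job(l_job_name);
        exit;
      exception
        when job_is_running then
          dbms_lock.sleep(1);                            -- requires execute on dbms_lock
        when others then
          dbms_output.put_line('drop_job failed: ' || sqlerrm);
          exit;
      end;
    end loop;
    raise;
end;
/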
Here’s a simple data set – I’m only interested in three of the columns in the work that follows, but it’s a data set that I use for a number of different models:
execute dbms_random.seed(0)

create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from    dual
        connect by
                level <= 1e4
)
select
        trunc(dbms_random.value(0,1000))   n_1000,
        trunc(dbms_random.value(0,750))    n_750,
        trunc(dbms_random.value(0,600))    n_600,
        trunc(dbms_random.value(0,400))    n_400,
        trunc(dbms_random.value(0,90))     n_90,
        trunc(dbms_random.value(0,72))     n_72,
        trunc(dbms_random.value(0,40))     n_40,
        trunc(dbms_random.value(0,3))      n_3
from
        generator  v1,
        generator  v2
where
        rownum <= 1e6
;
create table t2
nologging
as
select * from t1
;

begin
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'T1',
                method_opt => 'for all columns size 1'
        );
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'T2',
                method_opt => 'for all columns size 1'
        );
end;
/
The columns I want to consider are n_3, n_400, and n_1000. As their names suggest the columns have 3, 400, and 1000 distinct values respectively and since I’ve used the dbms_random.value() function to generate the data the distinct values are fairly evenly spread across the million rows of the table.
Consider, then, the following two queries:
select
        *
from    t1
where   exists (
                select  null
                from    t2
                where   n_1000 = 0
                and     t2.n_400 = t1.n_400
                and     t2.n_3   = t1.n_3
        )
;

select
        *
from    t1
where   (t1.n_400, t1.n_3) in (
                select  t2.n_400, t2.n_3
                from    t2
                where   t2.n_1000 = 0
        )
;
The first point to check is that these two queries are logically equivalent.
Once you're happy with that idea we can work out, informally, how many rows we should expect the queries to return: there are 1,200 combinations for (n_400, n_3), so each combination should return roughly 833 rows; and if we pick 1,000 rows from the 1 million available we can expect to see 679 of those combinations (that's Alberto Dell'Era's "selection without replacement" formula, which Oracle uses for adjusting num_distinct to allow for filter predicates). So we might reasonably estimate the final number of rows as 833 * 679 = 565,607. It turns out that that's a pretty good estimate – when I ran the query the result was actually 567,018 rows.
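For reference, the "selection without replacement" approximation being applied informally here can be written as follows (my transcription, where D distinct values are spread evenly over N rows and n rows are selected):

\[
d(n) \;=\; D\left(1-\left(1-\frac{n}{N}\right)^{N/D}\right)
\]

so picking n = 1,000 rows from N = 1,000,000 with D = 1,200 combinations gives

\[
1200\left(1-\left(1-\frac{1000}{10^{6}}\right)^{833.3}\right)\;\approx\;679 .
\]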
So what does Oracle produce for the two execution plans – here are the results from 12c (EXISTS first, then IN):
===================
Multi-column EXISTS
===================

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   920K|    34M|  1259  (11)| 00:00:01 |
|*  1 |  HASH JOIN RIGHT SEMI|      |   920K|    34M|  1259  (11)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL  | T2   |  1000 | 11000 |   610   (8)| 00:00:01 |
|   3 |   TABLE ACCESS FULL  | T1   |  1000K|    26M|   628  (11)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N_400"="T1"."N_400" AND "T2"."N_3"="T1"."N_3")
   2 - filter("N_1000"=0)

===================
Equivalent IN query
===================

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   833K|    30M|  1259  (11)| 00:00:01 |
|*  1 |  HASH JOIN RIGHT SEMI|      |   833K|    30M|  1259  (11)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL  | T2   |  1000 | 11000 |   610   (8)| 00:00:01 |
|   3 |   TABLE ACCESS FULL  | T1   |  1000K|    26M|   628  (11)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."N_400"="T2"."N_400" AND "T1"."N_3"="T2"."N_3")
   2 - filter("T2"."N_1000"=0)
The first thing to note is that the shape of the plans is identical, and the predicate sections are identical – but the final cardinalities are different. Clearly at least one of the cardinalities has to be wrong by a significant amount (7.5% or 10.4%, depending which way round you want to look at it). If you run the test on an earlier (pre-12c) release you find that both plans give the same estimated row count – and it's the 920,000 rows; so arguably 12c has "fixed" the IN subquery calculation, bringing it closer to a reasonable prediction, but it hasn't fixed the EXISTS subquery calculation. That 833K prediction, by the way, is what you would expect to see with this data for a basic join – and a semi-join shouldn't be able to produce more data than a join.
But both predictions are way off the (informal) expectation, so how did they appear?
Working backwards it's easy to spot that 833K = 833 * 1,000: Oracle is behaving as if every single row identified in the subquery will produce a separate combination of (n_400, n_3). If we reverse engineer 920K we get 920K / 833 = 1,104 – it would appear that the optimizer thinks the 1,000 rows produced by the subquery will produce 1,104 distinct combinations of (n_400, n_3). So how did the impossible 1,104 appear in the arithmetic?
If you apply the "selection without replacement" formula to picking 1,000 rows with 400 distinct values from 1,000,000 rows, the expected number of distinct values (with rounding) will be 368; if you apply the formula for picking 1,000 rows with 3 distinct values from 1,000,000 rows, the expected number will be 3. And 3 * 368 = 1,104. (Remember that in my original estimate I applied the formula after multiplying out the combination of distinct values.) The optimizer is using its standard methods, but using intermediate results in an unsuitable fashion.
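Spelling that arithmetic out with the same formula (and assuming the intermediate results are rounded up, which reproduces the figures above):

\[
\left\lceil 400\left(1-\left(1-\tfrac{1000}{10^{6}}\right)^{2500}\right)\right\rceil = 368,
\qquad
\left\lceil 3\left(1-\left(1-\tfrac{1000}{10^{6}}\right)^{333333}\right)\right\rceil = 3,
\qquad
368 \times 3 = 1{,}104 .
\]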
It’s impossible to say what the impact of this particular code path – and the change on the upgrade – might be. The optimizer has over-estimated by 47% in one case and 62% in the other but (a) there may be something about my data that exaggerated an effect that few people will see in the wild and (b) in many cases getting in the right ballpark is enough to get a reasonable plan, and a factor of 2 is the right ballpark.
Of course, a few people will be unlucky with a few queries on the upgrade where the estimate changes – after all a single row difference in the estimate can cause the optimizer to flip between a hash join and a nested loop – but at least you’ve got a little extra information that might help when you see a bad estimate on an important semi-join.
So is there a workaround? Given that I've got 12c, the obvious thing to try is to create a column group at both ends of the semi-join and see what happens. It shouldn't really make any difference because column groups are targeted at the problem of correlated columns – but we might as well try it:
execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for columns (n_400,n_3) size 1')
execute dbms_stats.gather_table_stats(user,'t2',method_opt=>'for columns (n_400,n_3) size 1')
Unfortunately when I did this the final cardinality estimate for both queries dropped to just 833 (the absence of a K on the end isn’t a typo!).
Manually unnesting got me closer:
select
        *
from    (
        select  distinct n_3, n_400
        from    t2
        where   n_1000 = 0
        )       sq,
        t1
where   sq.n_400 = t1.n_400
and     sq.n_3   = t1.n_3
;

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   649K|    33M|  1260  (11)| 00:00:01 |
|*  1 |  HASH JOIN           |      |   649K|    33M|  1260  (11)| 00:00:01 |
|   2 |   VIEW               |      |   779 | 20254 |   612   (8)| 00:00:01 |
|   3 |    HASH UNIQUE       |      |   779 |  8569 |   612   (8)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL| T2   |  1000 | 11000 |   610   (8)| 00:00:01 |
|   5 |   TABLE ACCESS FULL  | T1   |  1000K|    26M|   628  (11)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("SQ"."N_400"="T1"."N_400" AND "SQ"."N_3"="T1"."N_3")
   4 - filter("N_1000"=0)
The cardinality of 649K is (allowing for rounding) 833 * 779; so we need to know where the 779 came from. It's the optimizer's standard arithmetic for "distinct" – multiply the N individual selectivities together, then divide by sqrt(2) "N-1" times. So we apply the "selection without replacement" formula twice:
- adjusted selectivity of n_400 = 367.21
- adjusted selectivity of n_3 = 3
- 367.21 * 3 / sqrt(2) = 779
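In symbols (my transcription of the arithmetic just described, not a formula quoted from Oracle):

\[
\text{num\_distinct}(n\_400,\, n\_3) \;=\; \frac{367.21 \times 3}{\sqrt{2}} \;\approx\; 779 .
\]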
If you create column group statistics for (n_400, n_3) this doesn't change the optimizer's estimate for the number of distinct combinations after selection – maybe that's another enhancement in the pipeline – but, at least in this case, the manual unnesting has got us a little closer to the right estimates without any statistical intervention.
Footnote:
Just for the sake of completeness, here are the plans (with yet more cardinality predictions) that you get if you block the unnesting:
select
        *
from    t1
where   exists (
                select  /*+ no_unnest */
                        null
                from    t2
                where   n_1000 = 0
                and     t2.n_400 = t1.n_400
                and     t2.n_3   = t1.n_3
        )
;

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |  1179 | 33012 |   766K (12)| 00:00:30 |
|*  1 |  FILTER            |      |       |       |            |          |
|   2 |   TABLE ACCESS FULL| T1   |  1000K|    26M|   632  (11)| 00:00:01 |
|*  3 |   TABLE ACCESS FULL| T2   |     1 |    11 |   638  (12)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( EXISTS (SELECT /*+ NO_UNNEST */ 0 FROM "T2" "T2"
           WHERE "N_1000"=0 AND "T2"."N_400"=:B1 AND "T2"."N_3"=:B2))
   3 - filter("N_1000"=0 AND "T2"."N_400"=:B1 AND "T2"."N_3"=:B2)

=====================================
Unnesting blocked and subquery pushed
=====================================

select
        *
from    t1
where   exists (
                select  /*+ no_unnest push_subq */
                        null
                from    t2
                where   n_1000 = 0
                and     t2.n_400 = t1.n_400
                and     t2.n_3   = t1.n_3
        )
;

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 50000 |  1367K|  1271  (12)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL | T1   | 50000 |  1367K|   632  (11)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| T2   |     1 |    11 |   638  (12)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( EXISTS (SELECT /*+ PUSH_SUBQ NO_UNNEST */ 0 FROM "T2" "T2"
           WHERE "N_1000"=0 AND "T2"."N_400"=:B1 AND "T2"."N_3"=:B2))
   2 - filter("N_1000"=0 AND "T2"."N_400"=:B1 AND "T2"."N_3"=:B2)
The 1179 comes from the magic of sqrt(2): 1179 = 1,000,000 / (400 * 3 / sqrt(2)).
The 50,000 is just the basic “I dunno, let’s call it 5%”.
Reference script: aggregate_selectivity_c.sql
Many corporate cultures have what I would call loose rules. Most people do not like confrontation - in fact, many people will do anything to avoid it. Which means they will not tell you when you're in the gray area of the corporate culture - but believe me, they take note.
Loose rules range from dress codes to bringing dogs to work, and many others. Does dress code matter for the job you're doing? What if you're in sales and NEVER leave the office? What if you RARELY leave the office, but sometimes you do? What if the CEO stops by and says, "Hey, can you run to this meeting with me (or for me) tonight?" Then they look at how you're dressed and say, "Maybe next time?" Yes, you're within the "rules", but it WILL affect your career - and not usually in a positive way.
Bring your dog to work. Where's the line? What if someone gets a puppy, which they bring to work every day? Yes, puppies are cute, REALLY cute. What if you're the person that brought the puppy in every day? You spend an hour taking the puppy for a walk, taking it outside (while it has you trained rather than vice versa). Or worse yet, every time your boss walks by, someone is distracted by petting your puppy. They're cute! Too cute to pass up! You're negatively affecting productivity. Again, this is not likely to work well for your career.
Many companies have gone to unlimited vacation. What if, 2 weeks after you start, you take a 2-month vacation? According to policy that's OK, but I wouldn't expect you to have a job when you return. Where's the line? What's gray? What's acceptable?
Are you within the rules in each of the above examples? Absolutely, but loose rules have unspoken rules. Rules nobody will actually admit to quite often. Startups go through a number of stages in a very short period of time. People are evaluated regularly based on the current company stage. Some people survive from one stage to the next and others do not. Those who are not performing or viewed as not performing (e.g. taking care of their puppy all day) do not.
Don't let the loose rules sink your career!
Here I show how I integrated the Telstra Public WIFI API into IBM Bluemix. Telstra documents this API on its developer portal; you need to register on http://t.dev to get the credentials to use their API, which I had previously done, enabling me to integrate it into Bluemix.
Here is the API within the Bluemix catalog. These screenshots show that the API has been added to the Bluemix catalog, from which it can then be consumed as a service.
Finally, here is a web-based application using Bootstrap (so it renders quite well on mobile devices too) that allows you to enter your latitude, longitude and radius to find Telstra WIFI hotspots using the Telstra WIFI API on IBM Bluemix.
Visit http://bluemix.net to get started.
This article is a nice companion piece to our strategic approach to emerging technologies and how we apply the "Glance, Scan, Commit" design philosophy to our work.
I’m honored to be quoted in the article and proud to see our little team getting this level of recognition.
If you want to learn more about the R&D projects mentioned, you’re in luck. You can read about the Glance framework and approach and see a quick video of it in action on several smartwatches, including the Apple Watch.
If you want to see some of these emerging technologies projects in person, register to visit the OAUX Cloud Exchange at Oracle OpenWorld 2015, or come tour the new Cloud UX Lab at Oracle HQ.
Possibly Related Posts:
- Emerging Technologies and the ‘Glance, Scan, Commit’ Design Philosophy
- Lots of OAUX Updates
- Get a Look at the Future Oracle Cloud User Experience at Oracle OpenWorld 2015
- Are We Ready for the Apple Watch?
- Another Home for Our Stuff
A Guest Post by Joachim Skura, member of Oracle's HCM Sales
For many years, proactive in-house recruitment efforts have been looked down upon in recruitment circles. The internal recruitment role within companies has been partially eclipsed by the work of dedicated recruitment consultants and agencies, exacerbated by the fact that in-house HR teams have enough to do without having to seek out talent.
As a result, responsibility for recruiting new employees has for many years been split between internal recruitment, advertising, and third-party recruitment agencies.
The problem with advertising is that it is something of a blunt instrument. Organizations will quite often not get enough applications from the high-quality talent they want and it can be a time-sapping experience going through piles of speculative applications. It is a process that has been likened to finding a needle in a haystack.
Using external recruitment agencies, meanwhile, is a huge cost that many businesses can ill afford. The effectiveness of such agencies is declining as is their ability to deliver volume in today’s competitive business environment.
It, therefore, seems clear that the time is right for companies to take another look at how they recruit. So how can they do this? The answer is simple: take the concept of the proactive recruitment desk, dust it off, and bring it into the 21st century.
The first step in this process is to take responsibility for recruitment away from HR. This may seem counterintuitive to many, but for me it is absolutely critical. Recruitment is a very different animal to core HR, combining as it does the disciplines of sales and assessment. To be effective, recruiters must have an excellent understanding of the business―in fact they must be every bit as much business professionals as HR professionals.
Of course, recruitment can only be split from HR with the complete buy-in of senior leadership. In part, this involves a recognition that getting the best people into the business can and does positively affect the bottom line; recruitment must, in short, be seen as mission-critical. The fact is that far too few businesses can currently claim this is the case.
The next step is to ensure the proactive recruitment desk has itself been staffed correctly. As I have said, recruitment requires people with two very distinct skills: the ability to sell a company and the ability to assess an individual. It is a mix of competencies most usually found in head-hunting agencies, and businesses should look to hire people who have come from that background. The result is a team of dedicated in-house recruiters trusted with a single responsibility: to use their industry expertise and extensive social networks to identify the best possible candidates for the business and activate them.
There are two other important features to be considered in a modern proactive recruitment desk. Firstly, the recruitment desk should play no part in the interview process. With a sound understanding of their industry, the recruiter’s focus should be squarely on pre-selection, as good pre-selection is the best possible preparation for line managers and senior managers to then assess the candidates’ fitness. For me, a good pre-selection will ensure that not only do the best candidates get through, but also that managers are armed with the right questions to ask during interviews.
Secondly, the proactive recruitment desk must maintain its competitive edge. Recruitment is a results-based process and teams will perform most effectively if they compete against third-party recruiters. Similarly, the pay structure for recruiters should model that of sales organizations, with a good 60/40 split between core pay and performance-related bonuses. This will encourage proactivity and service excellence from the recruitment desk.
In a previous blog post on the subject of recruitment, I discussed the need for modern, electronic word-of-mouth marketing and the importance of building strong referral and candidate pools, supported by social capabilities in an integrated applicant tracking system. This is a vital component that should be built into this new model for recruiting. With such an approach, we are no longer searching for a needle in a haystack but are, in effect, creating a haystack of needles.
That this new approach to recruiting is effective cannot be doubted. In fact we have adopted the model at Oracle for major recruitment drives in EMEA and APAC. The results are astonishing. Supported by a successful public relations effort in these regions to let people know we were recruiting, we managed to rapidly fill thousands of positions with just the right people. In fact, so successful was the approach that we now do almost all of our recruiting in-house.
For more information, go to Modern HR in the Cloud.
And if you would like to hear more about this transformative approach to recruiting, then please come to the Zukunft Personal in Cologne on September 15, 2015. Vice President Recruiting EMEA and CEE, Anne-Marie O’Donnell, will be giving a talk on how Oracle went about overhauling its approach to recruiting and the huge benefits we are enjoying as a result.
Here’s a thought that came to me while I was writing up a note about identifying redundant indexes a few minutes ago. Sometimes you end up supporting applications with unexpected duplication of data and indexes and need to find ways to reduce overheads. Here’s some code modelling a scenario that I’ve seen more often than I like (actually, just once would be more often than I’d like):
create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from    dual
        connect by
                level <= 1e5
)
select
        rownum                                          id,
        trunc(sysdate,'MM') + (rownum-1)/1440           date_time,
        trunc(sysdate,'MM') + trunc((rownum-1)/1440)    date_only,
        rpad('x',100)                                   padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5
;

begin
        dbms_stats.gather_table_stats(
                ownname         => user,
                tabname         => 'T1',
                method_opt      => 'for all columns size 1'
        );
end;
/
I’ve got a table holding one row per minute since the start of the month; there’s a column which holds the date and time accurate to the minute, and another column which is supposed to hold just the date part. Is it possible to create a single index that allows Oracle to handle queries relatively efficiently whether they refer to date_time or date_only? As a starting step, could we get an index range scan on the same index for both of the following queries:
select  max(id)
from    t1
where   date_only between sysdate-1 and sysdate
;

select  max(id)
from    t1
where   date_time between sysdate-1 and sysdate
;
As Bob the Builder likes to say: “yes we can”.
There are a few lines of SQL between the table creation and the stats gathering that I didn’t show you. The first creates the constraint that describes the relationship between date_time and date_only – one is the truncated version of the other; the second defines the index we need, and the third (unfortunately) has to be there to declare the date_time column as a mandatory column:
alter table t1
        add constraint t1_trunc_date
        check(
                    date_only = trunc(date_time)
                and (
                        (date_only is     null and date_time is     null)
                     or (date_only is not null and date_time is not null)
                )
        )
;

create index t1_i1 on t1(trunc(date_time)) nologging;

alter table t1 modify (date_time not null);
(Given the requirement for date_time to be not null to get my indexing strategy to work, we could simplify the t1_trunc_date constraint to just (date_only = trunc(date_time)) if we declared date_only to be not null as well).
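As a minimal sketch of that simplification (assuming you are free to make date_only mandatory as well, and starting from a table without the original constraint in place), the declarations might become:

alter table t1 modify (date_only not null);
alter table t1 modify (date_time not null);

alter table t1
        add constraint t1_trunc_date
        check(date_only = trunc(date_time))
;

create index t1_i1 on t1(trunc(date_time)) nologging;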
With the extra lines of SQL included, here are the resulting execution plans for the two queries (the plans were the same on both versions of Oracle I tested):
=======================================
date_only between sysdate-1 and sysdate
=======================================

----------------------------------------------------------------------------------------
| Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |       |     1 |    21 |    92   (2)| 00:00:01 |
|   1 |  SORT AGGREGATE                |       |     1 |    21 |            |          |
|*  2 |   FILTER                       |       |       |       |            |          |
|*  3 |    TABLE ACCESS BY INDEX ROWID | T1    |  4306 | 90426 |    92   (2)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN           | T1_I1 |  4306 |       |    13   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(SYSDATE@!>=SYSDATE@!-1)
   3 - filter("DATE_ONLY"<=SYSDATE@! AND "DATE_ONLY">=SYSDATE@!-1)
   4 - access(TRUNC(INTERNAL_FUNCTION("DATE_TIME"))>=SYSDATE@!-1 AND
              TRUNC(INTERNAL_FUNCTION("DATE_TIME"))<=SYSDATE@!)
=======================================
date_time between sysdate-1 and sysdate
=======================================

----------------------------------------------------------------------------------------
| Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |       |     1 |    21 |    92   (2)| 00:00:01 |
|   1 |  SORT AGGREGATE                |       |     1 |    21 |            |          |
|*  2 |   FILTER                       |       |       |       |            |          |
|*  3 |    TABLE ACCESS BY INDEX ROWID | T1    |  1442 | 30282 |    92   (2)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN           | T1_I1 |  4306 |       |    13   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(SYSDATE@!>=SYSDATE@!-1)
   3 - filter("DATE_TIME"<=SYSDATE@! AND "DATE_TIME">=SYSDATE@!-1)
   4 - access(TRUNC(INTERNAL_FUNCTION("DATE_TIME"))>=TRUNC(SYSDATE@!-1) AND
              TRUNC(INTERNAL_FUNCTION("DATE_TIME"))<=TRUNC(SYSDATE@!))
The optimizer has managed to generate extra predicates in both cases by applying transitive closure to the critical constraint to produce queries that can be addressed (with some inefficiencies) through the single index.
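If you want to confirm the transformation against your own copy of the table, a minimal sketch using the standard explain plan / dbms_xplan output (run against the same t1 table as above) would be:

explain plan for
select  max(id)
from    t1
where   date_only between sysdate-1 and sysdate
;

-- the Predicate Information section of the output should include the extra
-- access predicate on trunc(date_time) derived from the check constraint
select * from table(dbms_xplan.display);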
Within limits, therefore, I can reduce two indexes to a single index. The strategy isn’t ideal but it may be appropriate in a few special cases. There are several problems that should be considered carefully:
- The date_time column has to be declared not null for this optimization strategy to appear – that’s going to limit its applicability.
- You may have more complex code where the transformation simply can’t be made to appear.
- The introduction of the trunc() function may change the optimizer’s arithmetic in ways that cause plans to change for the worse.
- (Most important) The index range scan always covers a whole number of days, with the excess data discarded after you reach the table. If you have lots of time-based queries for short time intervals (e.g. less than 8 hours) then the extra work done may outweigh the benefit of reducing the number of indexes – especially if all the excess table visits turn into randomly scattered single block reads. (See the sketch after this list for an illustration.)
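As a sketch of that last point (using the same t1 table), a query for just the last 8 hours still has to walk at least a full day of index entries, because the only predicate that can be used against the index is the derived one on trunc(date_time):

select  max(id)
from    t1
where   date_time between sysdate - 8/24 and sysdate
;

-- the access predicate becomes, roughly:
--     trunc(date_time) >= trunc(sysdate - 8/24)
-- and trunc(date_time) <= trunc(sysdate)
-- which covers one or two whole days of index entries; the rows outside the
-- 8-hour window are eliminated only by the filter at the table access step.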
Despite these drawbacks you may decide that you have a case where the strategy is “good enough” to help you reduce the workload on your system at some critical times during the day or night.